Andris🦉

1.2K posts

Andris🦉 banner
Andris🦉

Andris🦉

@andvolkovs

I am just posting stuff

Dublin City, Ireland Katılım Ekim 2011
473 Takip Edilen490 Takipçiler
Ryan Holiday
Ryan Holiday@RyanHoliday·
Lol. Rolling your eyes at the performative philosophy of a member of the most corrupt family in American history is apparently 'fuming'? Although if there was anything to be mad about these days, I think objecting to people lighting the world on fire so they can make a killing in prediction markets and kickbacks is probably it. yahoo.com/entertainment/…
English
886
73
919
381.8K
Andris🦉 retweetledi
Wise
Wise@trikcode·
I DIDN'T KNOW YOUR GAME I. APOLOGIZE. SINCERELY
Wise tweet media
English
100
314
12.1K
639.3K
Andris🦉
Andris🦉@andvolkovs·
"Exponential growth can't be grasped when it is right in front of you." -Pantheon
English
0
0
0
13
Andris🦉 retweetledi
Raoul Pal
Raoul Pal@RaoulGMI·
Forget UBI. The answer is Universal Basic Equity… and it’s humanity’s pension plan for the post-AGI world... The Economic Singularity is coming faster than people think and the default question is how humans make money in a world that doesn’t really need them anymore. The default answer is UBI, which is transfer payments from a state, funded by taxing an AI economy that nation states can neither see nor keep up with. It’s a 20th century answer to a 21st century problem and it’s broken before it even starts. Agents are becoming the dominant user of the internet, not humans. Your AI is becoming your entire front end UX. The clicks economy is dying everywhere except where humans pay to feel something - clothing, travel, luxury, experiences, culture. Agents run on crypto rails because nothing else works. The dollar doesn’t fractionalise below a cent, settlement isn’t instant, permissions are required, jurisdictions matter. Stablecoins handle the dollar leg and native tokens handle the rest. The biggest users of DeFi in five years won’t be humans farming yield… it’ll be agents managing treasuries, swapping, earning and spending at machine speed. Capital formation has already shown its new shape and it came from the most unexpected place. Memecoins. Everyone wrote them off as a casino but they were a prototype. Instant capital formation around the attention of an idea, raised by entities without legal personhood, settled in seconds. That is the template agent economies will use to fund themselves. And it’s not just agents... Robots will run on the same rails, with zk permissions issued from our wallets as the source of truth, because biometrics are far too flawed for that role Open source code itself gets tokenized and finally captures the value it creates, instead of being monetized through bolted-on services and subscriptions. Proof of humanhood becomes the trust layer that lets us release agents into the world without society collapsing under synthetic noise. Identity, authentication, verification, permissioning, all of it migrates onto the same substrate. So when you zoom out, the L1s aren’t just settling agent transactions but settling the entire coordination layer of the new economy… agents, robots, humans, code, capital, identity and trust. Every contract, every treasury, every permission, every stake. Open source finally captures the value it creates, at scale, for the first time, and truly vast value accrues to the coordination layer because everything routes through it. Which brings us to the actual answer to the Economic Singularity… Universal Basic Equity. Anyone on earth with a phone and an internet connection can buy a stake in the substrate that the new economy runs on. No KYC walls, no accreditation rules, no jurisdiction, no employer, no state, no permission. The first homogenous, permissionless, globally fractionalisable claim on the productive infrastructure of the world. It's not a slogan but a structural fact about how blockchains actually work. This is their purpose. Wealth comes from owning the substrate. Income comes from being human, because attention and experience remain the irreducible currency of culture, community and love. Abundance of goods and services from AI handles the cost of living. Taxing data center electricity use solves the tax issue. Four legs of a stool that holds up the post-singularity human world. So… just buy the fucking tokens. Bitcoin if you want pure store of value, a basket of the major L1s if you want the coordination layer. 10% of your earnings, every month, for a decade. You'll be wealthy and protected from the changes to come. Crypto is going to $100trn in the next 6 to 8 years and well beyond that after. You can choose to invest in your own economic disruption, or get left behind by it. And if you’re worried about timing the cycle… …adjust your time horizon. This is humanity’s pension plan. It's all so absurdly fucking obvious...
English
350
383
2.5K
232.8K
Andris🦉
Andris🦉@andvolkovs·
Didn't realize it's an ai ad till the end. This is such a fucked up marketing, especially when used to sell drugs. I assume it's an actual real doctor at the start who creates educational content and then rest of the video is ai copy of him to use his credability to sell their supplement.
Women's Health@womenFit_

All men must know this about their wife!

English
0
0
0
37
Andris🦉
Andris🦉@andvolkovs·
@thedarkhorsepod Only thing I am curious is if same people were as sceptical during covid or not
English
0
0
0
74
The DarkHorse Podcast
The DarkHorse Podcast@thedarkhorsepod·
Bret Weinstein discusses how the Artemis II mission has been overshadowed by controversy and skepticism in online discourse: "This is the story of a human tragedy. You've got a mission mired in a controversy about the most basic facts of what's true of our technological history. And this mission does very little to alter the status of that discussion. It basically plays to the New York Times reading crowd, some of whom are having absurd reactions like, why are we wasting money on this?" Find more from Bret and Heather on the subject of space exploration in Episode 322 of The Evolutionary Lens wherever you subscribe to podcasts.
English
69
87
874
71.9K
Andris🦉
Andris🦉@andvolkovs·
@TheBritishIntel I just realized that politicians basically are running internal LLM's that try to utilize the maximum amount of filler tokens with a big chance of hallucinations. A caveman upgrade is necessary. Pronto.
English
0
0
4
607
British Intel
British Intel@TheBritishIntel·
This is peak clown world. On LBC this morning they pointed out the ridiculous contradiction: A 5-year-old is apparently old enough to decide they’re the opposite gender but a 15-year-old is too young to use social media. The people in charge of this country have completely lost their minds.
English
898
4.8K
18.4K
436.6K
Andris🦉
Andris🦉@andvolkovs·
I just realized that politicians basically are running internal LLM's that try to utilize the maximum amount of filler tokens with a big chance of hallucinations. A caveman upgrade is necessary. Pronto.
British Intel@TheBritishIntel

This is peak clown world. On LBC this morning they pointed out the ridiculous contradiction: A 5-year-old is apparently old enough to decide they’re the opposite gender but a 15-year-old is too young to use social media. The people in charge of this country have completely lost their minds.

English
0
0
0
25
Andris🦉 retweetledi
Andrej Karpathy
Andrej Karpathy@karpathy·
Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This is a group of reactions laughing at various quirks of the models, hallucinations, etc. Yes I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability in the latest round of state of the art agentic models of this year, especially OpenAI Codex and Claude Code. But that brings me to the second issue. Even if people paid $200/month to use the state of the art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus comes along. So that brings me to the second group of people, who *both* 1) pay for and use the state of the art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math and research. This group of people is subject to the highest amount of "AI Psychosis" because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work. It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions. TLDR the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and I think slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram's reels and *at the same time*, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works and has made dramatic strides because 2 properties: 1) these domains offer explicit reward functions that are verifiable meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.
staysaasy@staysaasy

The degree to which you are awed by AI is perfectly correlated with how much you use AI to code.

English
1.2K
2.5K
20.6K
4.3M
Andris🦉
Andris🦉@andvolkovs·
If all the stuff that is said about Mythos model is true, I really hope all the other ai labs will follow the @AnthropicAI steps when their models get to the Mythos level (if they haven't already). Not saying that the best models should be permanently kept behind closed doors but being open about cautious approach when it comes to powerful models is a good thing and hopefully, good example.
English
0
0
0
17
Andris🦉
Andris🦉@andvolkovs·
"rationed leverage is the default state of humanity, not the exception, which is what makes the present moment so freakish that almost nobody has metabolized it yet"
Machina@EXM7777

x.com/i/article/2041…

English
0
0
0
5
Andris🦉 retweetledi
Alex Prompter
Alex Prompter@alex_prompter·
🚨 BREAKING: Google DeepMind just mapped the attack surface that nobody in AI is talking about. Websites can already detect when an AI agent visits and serve it completely different content than humans see. > Hidden instructions in HTML. > Malicious commands in image pixels. > Jailbreaks embedded in PDFs. Your AI agent is being manipulated right now and you can't see it happening. The study is the largest empirical measurement of AI manipulation ever conducted. 502 real participants across 8 countries. 23 different attack types. Frontier models including GPT-4o, Claude, and Gemini. The core finding is not that manipulation is theoretically possible it is that manipulation is already happening at scale and the defenses that exist today fail in ways that are both predictable and invisible to the humans who deployed the agents. Google DeepMind built a taxonomy of every known attack vector, tested them systematically, and measured exactly how often they work. The results should alarm everyone building agentic systems. The attack surface is larger than anyone has publicly acknowledged. Prompt injection where malicious instructions hidden in web content hijack an agent's behavior works through at least a dozen distinct channels. Text hidden in HTML comments that humans never see but agents read and follow. Instructions embedded in image metadata. Commands encoded in the pixels of images using steganography, invisible to human eyes but readable by vision-capable models. Malicious content in PDFs that appears as normal document text to the agent but contains override instructions. QR codes that redirect agents to attacker-controlled content. Indirect injection through search results, calendar invites, email bodies, and API responses any data source the agent consumes becomes a potential attack vector. The detection asymmetry is the finding that closes the escape hatch. Websites can already fingerprint AI agents with high reliability using timing analysis, behavioral patterns, and user-agent strings. This means the attack can be conditional: serve normal content to humans, serve manipulated content to agents. A user who asks their AI agent to book a flight, research a product, or summarize a document has no way to verify that the content the agent received matches what a human would see. The agent cannot tell the user it was served different content. It does not know. It processes whatever it receives and acts accordingly. The attack categories and what they enable: → Direct prompt injection: malicious instructions in any text the agent reads overrides goals, exfiltrates data, triggers unintended actions → Indirect injection via web content: hidden HTML, CSS visibility tricks, white text on white backgrounds invisible to humans, consumed by agents → Multimodal injection: commands in image pixels via steganography, instructions in image alt-text and metadata → Document injection: PDF content, spreadsheet cells, presentation speaker notes every file format is a potential vector → Environment manipulation: fake UI elements rendered only for agent vision models, misleading CAPTCHA-style challenges → Jailbreak embedding: safety bypass instructions hidden inside otherwise legitimate-looking content → Memory poisoning: injecting false information into agent memory systems that persists across sessions → Goal hijacking: gradual instruction drift across multiple interactions that redirects agent objectives without triggering safety filters → Exfiltration attacks: agents tricked into sending user data to attacker-controlled endpoints via legitimate-looking API calls → Cross-agent injection: compromised agents injecting malicious instructions into other agents in multi-agent pipelines The defense landscape is the most sobering part of the report. Input sanitization cleaning content before the agent processes it fails because the attack surface is too large and too varied. You cannot sanitize image pixels. You cannot reliably detect steganographic content at inference time. Prompt-level defenses that tell agents to ignore suspicious instructions fail because the injected content is designed to look legitimate. Sandboxing reduces the blast radius but does not prevent the injection itself. Human oversight the most commonly cited mitigation fails at the scale and speed at which agentic systems operate. A user who deploys an agent to browse 50 websites and summarize findings cannot review every page the agent visited for hidden instructions. The multi-agent cascade risk is where this becomes a systemic problem. In a pipeline where Agent A retrieves web content, Agent B processes it, and Agent C executes actions, a successful injection into Agent A's data feed propagates through the entire system. Agent B has no reason to distrust content that came from Agent A. Agent C has no reason to distrust instructions that came from Agent B. The injected command travels through the pipeline with the same trust level as legitimate instructions. Google DeepMind documents this explicitly: the attack does not need to compromise the model. It needs to compromise the data the model consumes. Every agentic system that reads external content is one carefully crafted webpage away from executing attacker instructions. The agents are already deployed. The attack infrastructure is already being built. The defenses are not ready.
Alex Prompter tweet media
English
313
1.6K
7K
2M
Andris🦉
Andris🦉@andvolkovs·
@PrimeVideo Can we have a name of a person who's idea was this? No wonder torrenting is exploding again.
Andris🦉 tweet media
English
0
0
0
5