Rouven

1.1K posts

@rh7

Decentralization Enthusiast. Decentralized Identity, Reputation & Governance.

San Juan · Joined May 2008
1.4K Following · 1.5K Followers
Rouven retweeted
Ihtesham Ali
Ihtesham Ali@ihtesham2005·
I accidentally discovered how to compress a semester of learning into 48 hours.

A grad student at MIT showed me his NotebookLM setup. I thought he was just organized. Then I watched him pass a qualifying exam on a subject he'd never studied before. Here's exactly what he did:

First: he didn't upload a textbook. He uploaded 6 textbooks, 15 research papers, and every lecture transcript he could find on the subject.

Then he asked NotebookLM one question: "What are the 5 core mental models that every expert in this field shares?"

Not "summarize this." Not "explain this topic." Mental models. The stuff that takes professors years to develop.

But the next part is what broke my brain. He followed up with: "Now show me the 3 places where experts in this field fundamentally disagree, and what each side's strongest argument is."

In 20 minutes he had a map of the entire intellectual landscape of the field: the debates, the consensus, the open questions. Most students spend a full semester just figuring out what those debates even are.

Then he did something I've never seen before. He asked: "Generate 10 questions that would expose whether someone deeply understands this subject versus someone who just memorized facts."

He spent the next 6 hours answering those questions using the source material. Every wrong answer triggered a follow-up: "Explain why this is wrong and what I'm missing."

By hour 48, he could hold a conversation with his thesis advisor without getting destroyed.

The tool didn't change. The questions did. Most people treat NotebookLM like a fancy highlighter. These students are using it like a private tutor who has read everything ever written on the subject.

The difference between a semester and 48 hours isn't the amount of content. It's knowing which questions to ask.
[image]
245 replies · 2.5K reposts · 16.5K likes · 4.8M views
Rouven retweeted
Tech Layoff Tracker
Tech Layoff Tracker@TechLayoffLover·
Just got this DM from a follower:

Hey dude, I need to vent this to someone who gets it. I've been at this Big Tech company (you know the one) for almost 6 years now—senior SWE, TC around $350k last year with RSUs still vesting. Thought I was bulletproof after surviving the 2023-2024 bloodbaths and then pivoting hard into the AI org. But fuck, the ground is shifting under my feet faster than I can keep up.

Last week in our all-hands, leadership was bragging about how the team's "AI leverage ratio" hit 4.2x—meaning each engineer is now shipping what used to take a team of four. They showed the metrics: feature velocity up 180% YoY while headcount's down another 22% since Q4 '25. The slide literally had a photo of Cursor + Claude Sonnet 4 workflows replacing entire squads. Everyone clapped like trained seals, but I saw three faces go pale—they're the mid-level folks who just finished documenting their entire codebase for the "knowledge distillation" project.

My direct report, this solid L5 who joined right after me, got put on a 30-day PIP after his productivity dashboard dipped below the new AI-augmented benchmark. The benchmark? It's literally what the offshore team in India hits using the exact prompts he used to write. He trained them on our internal style guide last quarter—now they're outperforming him at $28/hour all-in. He told me privately he's burning through savings and eyeing real estate licensing because "at least houses don't get refactored by agents overnight."

The internal job board is a ghost town. Entry-level SWE roles? Frozen since mid-'25. What few postings go up are tagged "AI-native preferred" and get 2,000+ apps in hours, mostly from people already on H-1Bs or contractors. Meanwhile, they're quietly converting more mid-tier positions to "AI orchestration" contractors—$90-110/hour remote from LATAM or Eastern Europe, no benefits, 6-month contracts.

My manager admitted in a 1:1 that if the next Grok/Claude/Anthropic release closes the last 10-15% quality gap, we'll probably cut another layer. I'm hanging on because I'm one of the ones who owns the prompt libraries and fine-tuning pipelines now. They need humans to babysit the models until the self-improving loops actually work without constant human intervention. But I see the writing: every time we make the system more autonomous, we make our own roles more optional.

The alumni Slack is full of 2024-2025 grads DMing for coffee chats because their referrals bounce—67% underemployed or gigging according to the last poll. One kid I mentored last year is back living with parents after burning through his signing bonus. I used to tell people "just upskill in AI, you'll be fine." Now I feel like a fraud saying it. If I lost this tomorrow, I'd be competing with the same offshore talent I've been helping scale, plus a flood of recently "managed out" seniors. My emergency fund is decent, but the mortgage isn't. Thinking about side hustles in trades or something offline—plumbing, electrical, anything that can't be prompted away.

This feels like watching the industry eat itself from the inside while pretending it's evolution. You still feeling secure over there, or is it hitting your shop too? Need to hear I'm not going insane.
[4 images]
139 replies · 432 reposts · 2.8K likes · 293.9K views
Rouven retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn't work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home so I wrote: "Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me". The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report and it was just done. I didn't touch anything. All of this could easily have been a weekend project just 3 months ago but today it's something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You're not typing computer code into an editor like the way things were since computers were invented, that era is over. You're spinning up AI agents, giving them tasks *in English* and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top tier "agentic engineering" feels very high right now.

It's not perfect, it needs high-level direction, judgement, taste, oversight, iteration and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
1.6K replies · 4.8K reposts · 37.3K likes · 5M views
Rouven retweeted
Chris
Chris@chatgpt21·
GPT-5.1 (Thinking High) is about 300 times cheaper per task than o3-preview (Low) while scoring only a few points lower on ARC-AGI-1. 1 year later intelligence has gotten 300 times cheaper. This is why I can’t stand people who say “wahh the models too expensive” it will become cheaper.
[2 images]
160 replies · 266 reposts · 2.6K likes · 1.5M views
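The per-task cost ratio claimed in the tweet above is simple arithmetic; here is a minimal sketch of how such a ratio is computed. The token budgets and all four prices below are hypothetical placeholders chosen only to make the ratio come out near 300x, not published pricing for either model.

```python
# Illustrative cost-per-task comparison. All prices and token counts are
# hypothetical placeholders, NOT published figures for o3-preview or GPT-5.1.

def cost_per_task(price_per_mtok_in: float, price_per_mtok_out: float,
                  tokens_in: int, tokens_out: int) -> float:
    """Dollar cost of one task, given per-million-token input/output prices."""
    return (price_per_mtok_in * tokens_in + price_per_mtok_out * tokens_out) / 1e6

# Assumed token budget for one ARC-style task.
TOKENS_IN, TOKENS_OUT = 50_000, 20_000

old = cost_per_task(60.0, 240.0, TOKENS_IN, TOKENS_OUT)   # hypothetical older pricing
new = cost_per_task(0.20, 0.80, TOKENS_IN, TOKENS_OUT)    # hypothetical current pricing

print(f"old ${old:.2f}/task, new ${new:.4f}/task, ratio {old / new:.0f}x")
```

The point of the sketch is that the ratio is dominated by the per-token price drop: at a fixed token budget, a 300x price cut is a 300x per-task cost cut.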
Rouven retweeted
NIK
NIK@ns123abc·
🚨 BREAKING: IBM stock down 13% after Anthropic announced that Claude can streamline COBOL code

IBM's entire business model:
>maintaining legacy COBOL nobody understands
>claude: "I can read it"
>IBM stock immediately drops -13%
>$40B market cap EVAPORATED

Dario strikes again 💀
[2 images]
624 replies · 1.4K reposts · 14.2K likes · 2.5M views
Rouven retweeted
Boris Cherny
Boris Cherny@bcherny·
Introducing: built-in git worktree support for Claude Code. Now, agents can run in parallel without interfering with one another. Each agent gets its own worktree and can work independently. The Claude Code Desktop app has had built-in support for worktrees for a while, and now we're bringing it to the CLI too. Learn more about worktrees: git-scm.com/docs/git-workt…
[image]
439 replies · 851 reposts · 11K likes · 1.3M views
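The isolation mechanism the announcement above relies on is plain git: a worktree is a second working directory (with its own branch and checkout) backed by the same object store, so two agents can edit in parallel without touching each other's files. A minimal sketch of the underlying git invocation — this illustrates git itself, not Claude Code's internal implementation, and the `agent/` branch naming is just an example convention:

```python
# Sketch of per-agent isolation via `git worktree add`: each agent gets its own
# working directory and branch, all sharing one object store. (Plain git; the
# branch/directory naming scheme here is an illustrative assumption.)
from pathlib import Path

def worktree_cmd(repo: Path, agent: str) -> list[str]:
    """Build the `git worktree add` invocation giving `agent` its own checkout."""
    return ["git", "-C", str(repo), "worktree", "add",
            "-b", f"agent/{agent}",                # one branch per agent
            str(repo.parent / f"wt-{agent}")]      # one directory per agent

for agent in ["alice", "bob"]:
    print(" ".join(worktree_cmd(Path("my-repo"), agent)))
```

Each resulting directory behaves like a normal clone for editing purposes; `git worktree remove` cleans one up when its agent finishes.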
Rouven retweeted
vitalik.eth
vitalik.eth@VitalikButerin·
Two years ago, I wrote this post on the possible areas that I see for ethereum + AI intersections: vitalik.eth.limo/general/2024/0…

This is a topic that many people are excited about, but where I always worry that we think about the two from completely separate philosophical perspectives.

I am reminded of Toly's recent tweet that I should "work on AGI". I appreciate the compliment, for him to think that I am capable of contributing to such a lofty thing. However, I get this feeling that the frame of "work on AGI" itself contains an error: it is fundamentally undifferentiated, and has the connotation of "do the thing that, if you don't do it, someone else will do anyway two months later; the main difference is that you get to be the one at the top" (though this may not have been Toly's intention). It would be like describing Ethereum as "working in finance" or "working on computing".

To me, Ethereum, and my own view of how our civilization should do AGI, are precisely about choosing a positive direction rather than embracing undifferentiated acceleration of the arrow, and also I think it's actually important to integrate the crypto and AI perspectives. I want an AI future where:

* We foster human freedom and empowerment (ie. we avoid both humans being relegated to retirement by AIs, and permanently stripped of power by human power structures that become impossible to surpass or escape)
* The world does not blow up (both "classic" superintelligent AI doom, and more chaotic scenarios from various forms of offense outpacing defense, cf. the four defense quadrants from the d/acc posts)

In the long term, this may involve crazy things like humans uploading or merging with AI, for those who want to be able to keep up with highly intelligent entities that can think a million times faster on silicon substrate. In the shorter term, it involves much more "ordinary" ideas, but still ideas that require deep rethinking compared to previous computing paradigms.

So now, my updated view, which definitely focuses on that shorter term, and where Ethereum plays an important role but is only one piece of a bigger puzzle:

# Building tooling to make more trustless and/or private interaction with AIs possible

This includes:
* Local LLM tooling
* ZK-payment for API calls (so you can call remote models without linking your identity from call to call)
* Ongoing work into cryptographic ways to improve AI privacy
* Client-side verification of cryptographic proofs, TEE attestations, and any other forms of server-side assurance

Basically, the kinds of things we might also build for non-LLM compute (see eg. my ethereum privacy roadmap from a year ago ethereum-magicians.org/t/a-maximally-… ), but for LLM calls as the compute we are protecting.

# Ethereum as an economic layer for AI-related interactions

This includes:
* API calls
* Bots hiring bots
* Security deposits, potentially eventually more complicated contraptions like onchain dispute resolution
* ERC-8004, AI reputation ideas

The goal here is to enable AIs to interact economically, which makes viable more decentralized AI architectures (as opposed to non-economic coordination between AIs that are all designed and run by one organization "in-house"). Economies not for the sake of economies, but to enable more decentralized authority.

# Make the cypherpunk "mountain man" vision a reality

Basically, take the vision that cypherpunk radicals have always dreamed of (don't trust; verify everything), that has been nonviable in reality because humans are never actually going to verify all the code ourselves. Now, we can finally make that vision happen, with LLMs doing the hard parts.

This includes:
* Interacting with ethereum apps without needing third party UIs
* Having a local model propose transactions for you on its own
* Having a local model verify transactions created by dapp UIs
* Local smart contract auditing, and assistance interpreting the meaning of FV proofs provided by others
* Verifying trust models of applications and protocols

# Make much better markets and governance a reality

Prediction and decision markets, decentralized governance, quadratic voting, combinatorial auctions, universal barter economy, and all kinds of constructions are all beautiful in theory, but have been greatly hampered in reality by one big constraint: limits to human attention and decision-making power. LLMs remove that limitation, and massively scale human judgement. Hence, we can revisit all of those ideas.

These are all things that Ethereum can help to make a reality. They are also ideas that are in the d/acc spirit: enabling decentralized cooperation, and improving defense. We can revisit the best ideas from 2014, and add on top many more new and better ones, and with AI (and ZK) we have a whole new set of tools to make them come to life. We can describe the above as a 2x2 chart. There's a lot to build!
[image]
679 replies · 665 reposts · 3.4K likes · 688.1K views
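The "economic layer" idea in the tweet above — bots hiring bots, security deposits, dispute resolution — boils down to an escrow state machine. Here is a toy sketch of that flow in plain Python. It is illustrative only: it is not the ERC-8004 standard (which actually specifies onchain identity/reputation registries), and the bot names, states, and hash argument are all invented for the example.

```python
# Toy escrow state machine for "bots hiring bots" (illustrative only; NOT
# ERC-8004, and not a smart contract). A hirer locks payment for a job; the
# worker bot delivers; funds release on acceptance or return on refund.
from dataclasses import dataclass, field

@dataclass
class Escrow:
    hirer: str
    worker: str
    amount: int
    state: str = "OPEN"                       # OPEN -> DELIVERED -> RELEASED | REFUNDED
    balances: dict = field(default_factory=dict)

    def deliver(self, result_hash: str) -> None:
        assert self.state == "OPEN"
        self.result_hash = result_hash        # commitment to the delivered work
        self.state = "DELIVERED"

    def accept(self) -> None:                 # hirer approves: pay the worker
        assert self.state == "DELIVERED"
        self.balances[self.worker] = self.balances.get(self.worker, 0) + self.amount
        self.state = "RELEASED"

    def refund(self) -> None:                 # e.g. after a dispute or deadline
        assert self.state in ("OPEN", "DELIVERED")
        self.balances[self.hirer] = self.balances.get(self.hirer, 0) + self.amount
        self.state = "REFUNDED"

job = Escrow(hirer="planner-bot", worker="scraper-bot", amount=100)
job.deliver(result_hash="0xabc")
job.accept()
print(job.state, job.balances)
```

Onchain, the `balances` bookkeeping would be actual token transfers and the `refund` path could be governed by a dispute-resolution contract; the state machine itself is the shape of the interaction.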
Rouven retweeted
Claude
Claude@claudeai·
Introducing Claude Opus 4.6. Our smartest model got an upgrade. Opus 4.6 plans more carefully, sustains agentic tasks for longer, operates reliably in massive codebases, and catches its own mistakes. It’s also our first Opus-class model with 1M token context in beta.
1.7K replies · 4.8K reposts · 39.6K likes · 10.5M views
Rouven retweeted
Tom Warren
Tom Warren@tomwarren·
Anthropic just took a big swipe at OpenAI's decision to put ads in ChatGPT. Anthropic is airing ads mocking ChatGPT ads during the Super Bowl, and they're hilarious 😅 Anthropic is also committing to no ads in Claude theverge.com/ai-artificial-…
683 replies · 2K reposts · 23.3K likes · 3.6M views
Rouven retweeted
SpaceX
SpaceX@SpaceX·
SpaceX has acquired xAI, forming one of the most ambitious, vertically integrated innovation engines on (and off) Earth → spacex.com/updates#xai-jo…
[image]
3.9K replies · 8.3K reposts · 45.9K likes · 19.2M views
Rouven retweeted
vitalik.eth
vitalik.eth@VitalikButerin·
In these five years, the Ethereum Foundation is entering a period of mild austerity, in order to be able to simultaneously meet two goals:

1. Deliver on an aggressive roadmap that ensures Ethereum's status as a performant and scalable world computer that does not compromise on robustness, sustainability and decentralization.
2. Ensure the Ethereum Foundation's own ability to sustain itself into the long term, and protect Ethereum's core mission and goals, including both the core blockchain layer as well as users' ability to access and use the chain with self-sovereignty, security and privacy.

To this end, my own share of the austerity is that I am personally taking on responsibilities that might in another time have been "special projects" of the EF. Specifically, we are seeking the existence of an open-source, secure and verifiable full stack of software and hardware that can protect both our personal lives and our public environments (see vitalik.eth.limo/general/2025/0… ). This includes applications such as finance, communication and governance, blockchains, operating systems, secure hardware, biotech (including both personal and public health), and more.

If you have seen the Vensa announcement (seeking to make open silicon a commercially viable reality at least for security-critical applications), the ucritter.com including recent versions with built-in ZK + FHE + differential-privacy features, the air quality work, my donations to encrypted messaging apps, and my own enthusiasm for and use of privacy-preserving, walkaway-test-friendly and local-first software (including operating systems), then you know the general spirit of what I am planning to support.

For this reason I have just withdrawn 16,384 ETH, which will be deployed toward these goals over the next few years. I am also exploring secure decentralized staking options that will allow even more capital from staking rewards to be put toward these goals in the long term.

Ethereum itself is an indispensable part of the "full-stack openness and verifiability" vision. The Ethereum Foundation will continue with a steadfast focus on developing Ethereum, with that goal in mind. "Ethereum everywhere" is nice, but the primary priority is "Ethereum for people who need it". Not corposlop, but self-sovereignty, and the baseline infrastructure that enables cooperation without domination. In a world where many people's default mindset is that we need to race to become a big strong bully, because otherwise the existing big strong bullies will eat you first, this is the needed alternative. It will involve much more than technology to succeed, but the technical layer is something which is in our control to make happen. The tools to ensure your, and your community's, autonomy and safety, as a basic right that belongs to everyone. Open not in a bullshit "open means everyone has the right to buy it from us and use our API for $200/month" way, but actually open, and secure and verifiable so that you know that your technology is working for you.
785 replies · 619 reposts · 4.3K likes · 873.4K views
Rouven retweeted
Davide Crapis
Davide Crapis@DavideCrapis·
Ethereum is in the unique position to be the platform that secures and settles AI-to-AI interactions. The ERC-8004 standard is coming to mainnet.
285 replies · 430 reposts · 2.3K likes · 889.9K views
Rouven retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
A few random notes from claude coding quite a bit last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double digit percent of engineers out there, while the awareness of it in the general population feels well into low single digit percent.

IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype is imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagine going back to manual coding. TLDR everyone has their developing flow, my current is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do because 1) I can code up all kinds of things that just wouldn't have been worth coding before and 2) I can approach code that I couldn't work on before because of knowledge/skill issue. So certainly it's speedup, but it's possibly a lot more an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do, give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.

Fun. I didn't anticipate that with agents programming feels *more* fun because a lot of the fill in the blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun) and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that I am slowly starting to atrophy my ability to write code manually. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?), on the side of actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it - integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high energy year as the industry metabolizes the new capability.
1.6K replies · 5.4K reposts · 39.4K likes · 7.6M views
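The leverage tips in the tweet above — "get it to write tests first and then pass them" and "write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness" — can be sketched as a tiny harness. The example problem (maximum subarray sum) and the function names are mine, chosen only to illustrate the pattern: the slow version is the spec, and randomized comparison against it is the success criterion an agent would loop on.

```python
# Minimal sketch of "naive reference first, optimize second": the O(n^2)
# version is the oracle; randomized tests gate the optimized version.
# Example problem chosen for illustration: maximum subarray sum.
import random

def max_subarray_naive(xs: list[int]) -> int:
    """O(n^2) reference: obviously correct, used as the oracle."""
    return max(sum(xs[i:j]) for i in range(len(xs)) for j in range(i + 1, len(xs) + 1))

def max_subarray_fast(xs: list[int]) -> int:
    """O(n) Kadane's algorithm: the 'optimized' version under test."""
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def check(trials: int = 200) -> bool:
    """Success criterion: fast must agree with naive on random inputs."""
    rng = random.Random(0)
    for _ in range(trials):
        xs = [rng.randint(-10, 10) for _ in range(rng.randint(1, 20))]
        if max_subarray_fast(xs) != max_subarray_naive(xs):
            return False
    return True

print("optimized matches reference:", check())
```

Handing an agent `check()` instead of step-by-step instructions is exactly the "give it success criteria and watch it go" mode described above.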
Rouven retweeted
vitalik.eth
vitalik.eth@VitalikButerin·
Now that ZKEVMs are at alpha stage (production-quality performance, remaining work is safety) and PeerDAS is live on mainnet, it's time to talk more about what this combination means for Ethereum. These are not minor improvements; they are shifting Ethereum into being a fundamentally new and more powerful kind of decentralized network.

To see why, let's look at the two major types of p2p networks so far:

* BitTorrent (2001): huge total bandwidth, highly decentralized, no consensus
* Bitcoin (2009): highly decentralized, consensus, but low bandwidth - because it's not "distributed" in the sense of work being split up, it's *replicated*

Now, with Ethereum plus PeerDAS (2025) and ZK-EVMs (expect small portions of the network using them in 2026), we get: decentralized, consensus and high bandwidth.

The trilemma has been solved - not on paper, but with live running code, of which one half (data availability sampling) is *on mainnet today*, and the other half (ZK-EVMs) is *production-quality on performance today* - safety is what remains. This was a 10-year journey (see the first commit of my original post on DAS here: github.com/ethereum/resea… , and ZK-EVM attempts started in ~2020), but it's finally here.

Over the next ~4 years, expect to see the full extent of this vision roll out:

* In 2026, large non-ZKEVM-dependent gas limit increases due to BALs and ePBS, and we'll see the first opportunities to run a ZKEVM node
* In 2026-28, gas repricings, changes to state structure, exec payload going into blobs, and other adjustments to make higher gas limits safe
* In 2027-30, large further gas limit increases, as ZKEVM becomes the primary way to validate blocks on the network

A third piece of this is distributed block building. A long-term ideal holy grail is to get to a future where the full block is *never* constituted in one single place. This will not be necessary for a long time, but IMO it is worth striving for us to at least have the capability to do that.

Even before that point, we want the meaningful authority in block building to be as distributed as possible. This can be done either in-protocol (eg. maybe we figure out how to expand FOCIL to make it a primary channel for txs), or out-of-protocol with distributed builder marketplaces. This reduces the risk of centralized interference with real-time transaction inclusion, AND it creates a better environment for geographical fairness. Onward.
1.1K replies · 1.4K reposts · 7.3K likes · 1.3M views
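The reason data availability sampling (mentioned in the tweet above) lets light nodes get high-bandwidth guarantees is a simple probability argument. A back-of-envelope sketch, with deliberately simplified assumptions: 2x erasure coding so that more than half the chunks must be withheld to prevent reconstruction, and independent uniform samples. The numbers are illustrative and are not PeerDAS's actual parameters.

```python
# Back-of-envelope for data availability sampling (illustrative assumptions,
# NOT PeerDAS's actual parameters): with 2x erasure coding, blocking
# reconstruction requires withholding > 1/2 of the chunks, so each uniform
# random sample of an unavailable block fails with probability >= 1/2.
# All k samples succeeding on unavailable data thus has probability <= (1/2)^k.

def false_availability_bound(k: int) -> float:
    """Upper bound on P(all k samples succeed | block is actually unavailable)."""
    return 0.5 ** k

for k in (10, 20, 30):
    print(f"{k} samples -> fooled with prob <= {false_availability_bound(k):.2e}")
```

This is why a node downloading a few dozen random chunks, rather than the whole block, can be confident the data is available: the per-node cost stays tiny while the assurance improves exponentially in the sample count.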
Rouven retweeted
_gabrielShapir0
_gabrielShapir0@lex_node·
@VitalikButerin I'm so grateful for you and Ethereum, Vitalik. It's what brought me into crypto and what keeps me here. You could've cashed out and disappeared a long time ago but you are still fighting for cypherpunk values.
10 replies · 14 reposts · 335 likes · 26.4K views