Himanshu Sahni
@sahnihim

54 posts

Leading Science of RL @ Reflection. Previously: DeepMind, Tesla, Facebook, OffWorld, Microsoft. All views are mine.

Joined May 2025
195 Following · 460 Followers
Himanshu Sahni @sahnihim
Amazingly balanced yet optimistic take (as usual!)
Andrej Karpathy @karpathy

A few random notes from Claude coding quite a bit the last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. I.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double digit percent of engineers out there, while the awareness of it in the general population feels well into low single digit percent.

IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagine going back to manual coding. TLDR everyone has their developing flow; my current one is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch one struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do because 1) I can code up all kinds of things that just wouldn't have been worth coding before and 2) I can approach code that I couldn't work on before because of a knowledge/skill issue. So it's certainly a speedup, but it's possibly a lot more of an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do; give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.

Fun. I didn't anticipate that with agents programming feels *more* fun, because a lot of the fill-in-the-blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun) and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that my ability to write code manually is slowly starting to atrophy. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little, mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?), alongside actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it - integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high energy year as the industry metabolizes the new capability.
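The "give it success criteria and watch it go" pattern from the Leverage note above is easy to picture in code. A minimal sketch, assuming a pytest suite as the success criterion; `ask_agent` is a hypothetical stand-in for whatever coding-agent CLI or API you drive, not a real library call:

```python
import subprocess

def run_tests() -> tuple[bool, str]:
    # Success criterion: the test suite is the ground truth, not the agent's claims.
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def ask_agent(prompt: str) -> None:
    # Hypothetical stand-in for an agent invocation (CLI, API, etc.).
    # Expected to edit files in the working tree in response to the prompt.
    raise NotImplementedError

def agent_loop(goal: str, max_iters: int = 10) -> bool:
    # Declarative framing: state the goal once, then let the agent iterate
    # against failing-test feedback until the criterion is met.
    ask_agent(f"Write failing tests for: {goal}. Then implement until they pass.")
    for _ in range(max_iters):
        ok, log = run_tests()
        if ok:
            return True  # external success criterion met; stop looping
        ask_agent(f"The tests still fail. Fix the code, not the tests:\n{log}")
    return False
```

The key design choice is that the exit condition is an external check (the test suite), not the agent's own judgment, which is what lets it loop unattended.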

Himanshu Sahni retweeted
Ashlee Vance @ashleevance
.@MishaLaskin and @reflection_ai have raised $2 billion to build America's next top open source AI model. This week Misha comes on the pod to talk DeepSeek, open weights, and AI freedom for all. Thanks, as always, to @brexHQ and @e1ventures for backing the @corememory pod.
Himanshu Sahni @sahnihim
Key takeaways 👇
🚀 It works: distilling the 32-layer model into a 20-layer model climbs faster and plateaus higher than RL.
💰 The whole thing is super easy to run and costs only about $10 per run.
📈 Full-distribution OPD has a slight edge over sample-based, at higher compute cost.
Finally, OPD is not an RL method: it doesn't interact with the environment and relies on a stronger teacher model. You won't train frontier models this way, but you can get surprisingly capable smaller models - better than running RL on them!
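For the shape of the two OPD variants mentioned above, a minimal sketch assuming PyTorch; `opd_loss`, its name, and its tensor shapes are illustrative, not the actual training code:

```python
import torch
import torch.nn.functional as F

def opd_loss(student_logits, teacher_logits, sampled_ids, full_distribution=True):
    # On-policy: the student sampled these tokens itself; the teacher scores
    # the same prefixes. logits: [batch, seq, vocab]; sampled_ids: [batch, seq].
    s_logp = F.log_softmax(student_logits, dim=-1)
    t_logp = F.log_softmax(teacher_logits.detach(), dim=-1)  # teacher is frozen
    if full_distribution:
        # Full-distribution variant: reverse KL(student || teacher),
        # summed over the entire vocabulary at each position.
        kl = (s_logp.exp() * (s_logp - t_logp)).sum(dim=-1)
        return kl.mean()
    # Sample-based variant: evaluate only at the tokens actually sampled,
    # a cheaper single-sample Monte Carlo estimate of the same KL.
    s_tok = s_logp.gather(-1, sampled_ids.unsqueeze(-1)).squeeze(-1)
    t_tok = t_logp.gather(-1, sampled_ids.unsqueeze(-1)).squeeze(-1)
    return (s_tok - t_tok).mean()
```

The full-distribution variant uses all of the teacher's logits per token (denser signal, more compute); the sample-based one only needs teacher log-probs at sampled tokens, which matches the cost/accuracy trade-off described above.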
Himanshu Sahni @sahnihim
@thinkymachines btw, this is also the standard KL penalty used to control drift when training with RL on-policy (if you treat the base model as a “teacher,” à la the continual learning section of the blog post)
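Concretely, the penalty referred to is the familiar reverse KL against a frozen reference policy (here the base model), folded into the on-policy objective; the notation below is assumed, not taken from the post:

```latex
J(\theta) \;=\; \mathbb{E}_{x \sim \pi_\theta}\big[\, r(x) \,\big] \;-\; \beta\,\mathrm{KL}\big(\pi_\theta \,\|\, \pi_{\mathrm{base}}\big)
```

Set r ≡ 0 and only the KL pull toward the base model's distribution remains, which is exactly the distillation-style term; hence the "teacher" framing.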
Himanshu Sahni @sahnihim
Great post by @thinkymachines! Quick reminder: distillation ≠ RL. RL is a framework for learning from environment interactions. Distillation assumes access to a stronger teacher’s logits. That’s why you cannot distill your way to a frontier model, but you can RL your way there.
Thinking Machines @thinkymachines

Our latest post explores on-policy distillation, a training approach that unites the error-correcting relevance of RL with the reward density of SFT. When using it to train for math reasoning and as an internal chat assistant, we find that on-policy distillation can outperform other approaches for a fraction of the cost. thinkingmachines.ai/blog/on-policy…
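One way to make the distinction concrete, in assumed notation: RL consumes only a scalar reward R from environment rollouts, while distillation consumes the teacher's full next-token distribution:

```latex
\text{RL:}\;\; \max_\theta\, \mathbb{E}_{\tau \sim \pi_\theta}\big[\, R(\tau) \,\big]
\qquad
\text{Distillation:}\;\; \min_\theta\, \mathbb{E}_{x \sim \pi_\theta}\Big[\, \mathrm{KL}\big(\pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{teacher}}(\cdot \mid x)\big) \Big]
```

The distillation objective bottoms out once the student matches the teacher, while the RL objective keeps improving as long as reward does - which is why you can RL your way to a frontier model but not distill your way there.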

Himanshu Sahni @sahnihim
Karpathy pod: RL is so over
Julian pod: we’re so back
Himanshu Sahni @sahnihim
@karpathy 3. Credit assignment (or the long-horizon problem) - this one we need to figure out in research. But RL has the basic structure to support value functions. The point is, the path is clear for infinite improvement. Why would you doubt it?
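For readers unfamiliar with the "basic structure" in question: a minimal tabular TD(0) sketch, with a hypothetical env.reset()/env.step() interface, showing how a value function propagates credit backward over long horizons:

```python
from collections import defaultdict

def td0(env, policy, episodes=1000, alpha=0.1, gamma=0.99):
    # V(s) estimates the long-horizon return from state s. The bootstrapped
    # target r + gamma * V(s') moves credit backward one step per update,
    # so delayed rewards eventually inform early states.
    V = defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = policy(s)
            s_next, r, done = env.step(a)  # assumed (state, reward, done) API
            target = r + (0.0 if done else gamma * V[s_next])
            V[s] += alpha * (target - V[s])  # TD error drives the update
            s = s_next
    return V
```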
Himanshu Sahni @sahnihim
@karpathy 2. Where do environments come from? We have one for free - the universe! We have to figure out how to scale real-world interactions.
Himanshu Sahni @sahnihim
Some people are mistakenly saying @karpathy bashed on RL, but compression (aka world modeling, aka next token/image prediction) and reinforcement learning are the only two engines we know of that can scale intelligence unboundedly: one to understand the world and the other to act in it.
Himanshu Sahni @sahnihim
I actually really liked the Karpathy podcast. It probably helps not being in a big AGI lab to develop such a balanced perspective.
Himanshu Sahni @sahnihim
I said 1000s of GPUs instead of O(1000s) and they kicked me out of SF, but then I said I liked the Karpathy podcast and they let me back in (I’m in London)
Rishi Mehta @rishicomplex
I hadn't realized till the @karpathy interview that significant computing breakthroughs like PCs and the internet don't show up as discontinuities in GDP growth. It's just a smooth exponential. Noticeable uptick post-WW2 from ~1.8% to ~3.6%, but stable since.
Dwarkesh Patel @dwarkesh_sp

The @karpathy interview
0:00:00 – AGI is still a decade away
0:30:33 – LLM cognitive deficits
0:40:53 – RL is terrible
0:50:26 – How do humans learn?
1:07:13 – AGI will blend into 2% GDP growth
1:18:24 – ASI
1:33:38 – Evolution of intelligence & culture
1:43:43 – Why self-driving took so long
1:57:08 – Future of education
Look up Dwarkesh Podcast on YouTube, Apple Podcasts, Spotify, etc. Enjoy!
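As a quick sanity check on those rates (my arithmetic, not from the tweet), the doubling times work out to:

```latex
t_{2\times} = \frac{\ln 2}{\ln(1+g)}: \qquad
g = 1.8\% \Rightarrow t_{2\times} \approx 38.9 \text{ years}, \qquad
g = 3.6\% \Rightarrow t_{2\times} \approx 19.6 \text{ years}
```

Against a curve that smooth, even a PC- or internet-sized breakthrough reads as staying on trend rather than as a visible kink.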

Himanshu Sahni @sahnihim
3½ years ago I walked into this building to start my dream job. Today was my last day at DeepMind. Grateful for everything I learned and everyone I met. 🙏🏾 Now time to dream bigger.
[photo]