Parand Darugar

14.2K posts

@parand

Mostly startups, mostly technical. Machine learning and such. Forbes 7 under 7. When vision looks forward it sees me ™

San Diego, CA · Joined February 2007
964 Following · 1.1K Followers
Parand Darugar @parand ·
Thankful. Despite all the madness.
Replies 0 · Reposts 0 · Likes 0 · Views 21
Parand Darugar @parand ·
I suspect there's a path for franken-assembling pre-trained LLM components (layer stacks) and doing a bit more training to significantly shortcut pre-training of LLMs.
Replies 0 · Reposts 0 · Likes 0 · Views 23
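A minimal sketch of what that franken-assembly might look like, assuming GPT-2 as the donor checkpoint and an arbitrary splice point (both are illustrative stand-ins, not anything the tweet specifies): reuse pretrained layer stacks to build a deeper model, then count on a short continued-pretraining run to heal the seams instead of pretraining from scratch.

```python
# Hypothetical "franken-assembly": build a deeper model out of pretrained
# layer stacks, then continue training briefly rather than pretraining
# from scratch. Donor model and split points are illustrative assumptions.
import copy
import torch
from transformers import GPT2LMHeadModel

base = GPT2LMHeadModel.from_pretrained("gpt2")  # 12 pretrained blocks, d_model=768
blocks = base.transformer.h                     # nn.ModuleList of transformer blocks

# Splice: keep blocks 0-7, then append fresh copies of blocks 4-11,
# giving a 16-block stack in which every weight starts out pretrained.
spliced = [blocks[i] for i in range(8)] + [copy.deepcopy(blocks[i]) for i in range(4, 12)]
base.transformer.h = torch.nn.ModuleList(spliced)
base.config.n_layer = len(spliced)

# Sanity check: the spliced model still runs forward. A short training run
# on pretraining data would then "heal" the seam between duplicated layers.
logits = base(torch.tensor([[50256]]), use_cache=False).logits
print(logits.shape)  # torch.Size([1, 1, 50257])
```

Depth up-scaling of this flavor (duplicate a slice of pretrained layers, then continue pretraining) has been used to grow open-weight models, which is one concrete version of the shortcut suggested here.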
Parand Darugar @parand ·
*Super* interesting post on LLM Neuroanatomy and the internals of LLMs, achieving meaningfully better performance with no extra training, just re-running a few layers and feeding the output to earlier layers. dnhkng.github.io/posts/rys/
Replies 1 · Reposts 0 · Likes 0 · Views 44
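A hedged sketch of the simplest version of that mechanism: route the hidden state through a middle slice of blocks twice at inference time, reusing the same pretrained weights, so no training is involved. GPT-2 and the block range 6-9 are stand-ins here, not the linked post's actual configuration.

```python
# Toy version of "re-running a few layers with no extra training": the
# repeated blocks are the very same modules (shared weights), so the
# parameters are untouched; only the layer schedule changes.
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
h = model.transformer.h  # 12 pretrained blocks

# Run blocks 0-5 once, blocks 6-9 twice, then blocks 10-11.
schedule = list(range(6)) + list(range(6, 10)) * 2 + list(range(10, 12))
model.transformer.h = torch.nn.ModuleList([h[i] for i in schedule])

out = model(torch.tensor([[50256]]), use_cache=False)
print(out.logits.shape)  # same parameter count, four extra layer applications
```

Which slice to repeat, and whether it helps or hurts, is exactly what the linked post probes empirically; repeating the wrong layers typically degrades output.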
Parand Darugar retweeted
Andrej Karpathy @karpathy ·
Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of a project I thought was already fairly well tuned by hand. This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of my daily work for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of experimental results and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat.

Among the bigger findings:
- It noticed an oversight: my parameterless QKnorm didn't have a scalar multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale, of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges.

And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has a more efficient proxy metric, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
Replies 974 · Reposts 2.1K · Likes 19.4K · Views 3.6M
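One of the bullets above, the QK-norm fix, is easy to picture in code. A sketch with assumed shapes and an arbitrary initial value (nanochat's actual implementation may differ): parameterless RMS-norm pins query/key magnitudes, so the attention logits sit at a fixed temperature and the softmax stays diffuse; attaching a learnable scalar gives the model back the freedom to sharpen it.

```python
# Sketch of QK-norm with a scalar multiplier. After RMS-normalizing q and k,
# the logits are proportional to cosine similarity at a fixed temperature;
# the learnable qk_scale restores the model's ability to sharpen attention.
import torch
import torch.nn.functional as F

def rms_norm(x, eps=1e-6):
    return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)

class QKNormAttention(torch.nn.Module):
    def __init__(self, init_scale=1.0):
        super().__init__()
        # The "missing multiplier": parameterless QK-norm omits this entirely.
        self.qk_scale = torch.nn.Parameter(torch.tensor(init_scale))

    def forward(self, q, k, v):
        # q, k, v: (batch, heads, seq, head_dim)
        q = rms_norm(q) * self.qk_scale
        k = rms_norm(k)
        return F.scaled_dot_product_attention(q, k, v, is_causal=True)
```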
Parand Darugar @parand ·
Martyr! by Kaveh Akbar is beautiful.
Replies 0 · Reposts 0 · Likes 0 · Views 68
Parand Darugar @parand ·
So I guess streaming services figured out showing you the same title in 19 different categories optimizes viewing time? Or do they only have the 10 titles they shove into every possible list?
Replies 0 · Reposts 0 · Likes 0 · Views 39
Parand Darugar @parand ·
@faraz_r_khan It’s a fantastic coder but an awful architect. These days I write zero lines of code - Claude does a great job of that. But left to its own devices it makes ridiculous decisions on approach and design, particularly if its first idea doesn’t pan out.
Replies 1 · Reposts 0 · Likes 1 · Views 23
Faraz Khan @faraz_r_khan ·
@parand Maybe yours is sad or something; mine is smarter than 95% of the coders I've worked with in my life.
Replies 1 · Reposts 0 · Likes 2 · Views 70
Parand Darugar @parand ·
Arguing with Claude Code is like arguing with an idiot savant. If you look at the "thinking" details you'll see it's more idiot than savant.
Replies 1 · Reposts 0 · Likes 2 · Views 116
Parand Darugar @parand ·
I'm looking for a quote I don't recall from a person I don't recall in a podcast I don't recall, but I have a vague general recollection of, and I'm upset that ChatGPT can't find it for me.
Replies 0 · Reposts 0 · Likes 0 · Views 53
Parand Darugar @parand ·
Shocked to find Wuthering Heights is only two hours and 15 minutes long, because it was the longest nine hours of my life. Congrats to whatever edgy 14-year-old wrote the screenplay.
Replies 0 · Reposts 0 · Likes 0 · Views 132
Parand Darugar @parand ·
@mark_a_phelps Have you seen klavis.ai? Similar goal. The idea is very useful and very effective; we're doing our own internal version of it and would love to get it off the shelf. If you're looking for testers let me know, we work with a lot of MCP servers.
Replies 1 · Reposts 0 · Likes 1 · Views 24
Parand Darugar @parand ·
I am, in so many deep and meaningful ways, an idiot.
Replies 0 · Reposts 0 · Likes 0 · Views 44
Parand Darugar @parand ·
The videos of the latest shooting in Minneapolis are horrific.
Replies 0 · Reposts 0 · Likes 0 · Views 143
Parand Darugar @parand ·
I'm convinced one integration test is worth a thousand unit tests. Unit tests test small bits of logic. That's generally not where the gnarly bugs are. The ugly bugs are mostly in the interactions of the various systems.
Replies 0 · Reposts 0 · Likes 0 · Views 54
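A toy illustration of that claim, with made-up service names: each piece passes its unit test in isolation, but only the integration test exposes the units mismatch in the contract between them.

```python
# Two components, each individually correct and unit-tested, plus an
# integration test that catches the cents-vs-dollars interaction bug.
def price_in_cents(sku):
    """Billing side: prices are stored in cents."""
    return {"book": 1250}[sku]

def charge(amount_dollars):
    """Payment side: expects dollars."""
    assert amount_dollars < 100, "sanity limit"
    return f"charged ${amount_dollars:.2f}"

def test_price_unit():        # passes: the lookup is correct
    assert price_in_cents("book") == 1250

def test_charge_unit():       # passes: charging dollars works
    assert charge(12.50) == "charged $12.50"

def test_checkout_integration():  # fails: cents handed straight to charge()
    assert charge(price_in_cents("book")) == "charged $12.50"
```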
Parand Darugar @parand ·
Almost got hit by a UPS truck running a red light, then almost got hit by a FedEx truck turning right without stopping, then almost got hit by a crazy lady with a glitter-bedazzled car full of junk. Something’s in the air today.
Replies 0 · Reposts 0 · Likes 0 · Views 37
Parand Darugar @parand ·
I swear Claude Code has gotten slower. Either that or Antigravity is faster and I've gotten spoiled (I'm using both on the same project, switching back and forth).
Replies 0 · Reposts 0 · Likes 0 · Views 44
Parand Darugar @parand ·
The Streisand effect is now bigger than Streisand.
Replies 0 · Reposts 0 · Likes 0 · Views 26
Parand Darugar retweeted
Aaron Levie @levie ·
The capability overhang right now in AI is pretty massive. Most of the world still thinks of AI as chatbots that will answer a question on demand but not yet do real work for them. Beyond coding, almost no knowledge work has had any real agentic automation applied to it yet. The past quarter of model updates is going to open up all-new AI agent use-cases across nearly every industry. The winners will be those that can figure out how to wrap the models in the right agent scaffolding, provide the agent the right data to work with (context engineering), and deliver the change management that actually drives the change in workflow for the customer. This is what 2026 will be about.
frankie @FrankieIsLost

there’s a billion dollars inside the opus 4.5 model weights and you just need to type the right claude code prompts to get them out

Replies 131 · Reposts 132 · Likes 1.4K · Views 254.5K
Parand Darugar @parand ·
What's the opposite of "YOLO" AI assisted coding? I'm engaged in deep NOYOLO coding, where I spend an hour collaborating and arguing with Claude and Gemini on the design before any code gets written.
Replies 0 · Reposts 0 · Likes 0 · Views 33