Ben Livshits

1.4K posts

@convoluted_code

Technology executive, computer scientist, professor. A decade at Microsoft Research; executive roles at Brave, ZKSync, and Eclipse.

Worldwide · Joined April 2015
3.7K Following · 9.1K Followers
Ben Livshits@convoluted_code·
@_weidai @apruden08 But interesting, yes: I feel this would have been good 4-5 years ago, post-Algorand
Wei Dai@_weidai·
Is it possible to build "proof-of-useful-work" on top of autoresearch? There's already great compute-versus-verification asymmetry that is tunable. It would need a reliable way to generate fresh & independent puzzles (that are still useful). Maybe a dead end, but someone should look into whether decentralized consensus with useful work is possible on top of autoresearch. Let me know if you solve this.
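For contrast, the "compute-versus-verification asymmetry" Wei Dai refers to is exactly the property classical (non-useful) proof-of-work already has: solving is brute force, verifying is one hash, and difficulty makes the gap tunable. A minimal, purely illustrative Python sketch (hash-based, not a useful-work construction; the puzzle bytes are made up):

```python
import hashlib

def solve(puzzle: bytes, difficulty: int) -> int:
    """Expensive side: brute-force search for a nonce whose hash
    starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(puzzle + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(puzzle: bytes, nonce: int, difficulty: int) -> bool:
    """Cheap side: a single hash checks the claimed solution."""
    digest = hashlib.sha256(puzzle + nonce.to_bytes(8, "big")).hexdigest()
    return digest.startswith("0" * difficulty)

# Difficulty tunes the asymmetry: ~16**difficulty hashes to solve, one to verify.
nonce = solve(b"fresh-puzzle-1", difficulty=4)
assert verify(b"fresh-puzzle-1", nonce, difficulty=4)
```

The open problem in the tweet is replacing `solve` with work that is *useful* (e.g., an autoresearch run) while keeping verification this cheap and puzzles fresh and independent.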
Andrej Karpathy@karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly well-manually-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training by hand. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of my daily work for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of experimental results and used them to plan the next experiments. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I hadn't found them manually, and they stack up and actually improved nanochat. Among the bigger findings:

- It noticed an oversight that my parameterless QKnorm didn't have a scalar multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the value embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale, of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. More generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
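The propose → run → measure → keep loop described above can be caricatured in a few lines. The sketch below is a toy hill-climb over two hyperparameters with a stand-in loss function; every name here is hypothetical and nothing reflects autoresearch's actual machinery (the real loop runs full training jobs and reasons over results, not a quadratic bowl):

```python
import random

def val_loss(cfg: dict) -> float:
    """Stand-in for an expensive training run; a toy quadratic bowl
    with an (invented) optimum at lr=0.02, wd=0.1."""
    return (cfg["lr"] - 0.02) ** 2 + (cfg["wd"] - 0.1) ** 2

def propose(cfg: dict, rng: random.Random) -> dict:
    """Perturb one hyperparameter multiplicatively at random."""
    new = dict(cfg)
    key = rng.choice(list(new))
    new[key] *= rng.uniform(0.8, 1.25)
    return new

def tune(cfg: dict, steps: int = 300, seed: int = 0) -> tuple[dict, float]:
    """Greedy loop: propose a change, measure, keep only improvements."""
    rng = random.Random(seed)
    best, best_loss = cfg, val_loss(cfg)
    for _ in range(steps):
        cand = propose(best, rng)
        loss = val_loss(cand)
        if loss < best_loss:  # keep only changes that improve the metric
            best, best_loss = cand, loss
    return best, best_loss

cfg, loss = tune({"lr": 0.1, "wd": 0.5})
```

The "promote promising ideas to larger scales" step corresponds to re-evaluating `best` with a more expensive `val_loss` (a bigger model) before trusting it.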

Ben Livshits retweeted
Arvind Narayanan@random_walker·
I find Anthropic's behavior perplexing. Anyone who does serious research with these models knows that they don't have stable desires or preferences. Tweak the question slightly and get a different answer. Note that this is a simple empirical observation about model behavior, completely separate from the question of whether models are moral agents with preferences worth respecting. Surely people at Anthropic know this. Why do they persist with this wacky stuff?
Ben Livshits@convoluted_code·
@johnzabroski Maybe so… multi-step scenarios generally operate following similar principles
John Zabroski@johnzabroski·
@convoluted_code I think, however, the high-leverage play for attackers is building post-training models that learn "gadgets", similar to Hovav Shacham's The Geometry of Innocent Flesh on the Bone, which pioneered return-oriented programming and demonstrated composing gadgets into attacks.
Ben Livshits@convoluted_code·
Are we entering a golden age of software security, or just a faster loop of vulnerability generation? 🧵 RT for visibility. In the last couple of months, I've had many conversations about using LLMs to secure the output of other LLMs.
Ben Livshits@convoluted_code·
3. Diffusion models with constrained decoding are particularly well-suited for this, offering a hierarchical approach to code generation that allows for hard, non-probabilistic security guarantees.
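One way to read the "hard, non-probabilistic guarantees" claim: a constraint checker masks any token that would violate a security policy, no matter how much probability the model assigns it. A toy Python sketch of this token-masking idea (the allowlist, regex, and model interface are all invented for illustration; real constrained decoding operates on logits over a tokenizer's vocabulary):

```python
import re

# Hypothetical policy: generated code may only call allowlisted functions.
ALLOWED_CALLS = {"print", "len", "sorted"}
CALL = re.compile(r"([A-Za-z_][A-Za-z0-9_]*)\s*\($")

def allowed(prefix: str, token: str) -> bool:
    """Hard constraint: reject any token that completes a call to a
    function outside the allowlist, regardless of model score."""
    m = CALL.search(prefix + token)
    return m is None or m.group(1) in ALLOWED_CALLS

def constrained_decode(model_tokens, prefix: str = "") -> str:
    """At each step, take the highest-scoring candidate that satisfies
    the constraint; forbidden continuations are masked out entirely."""
    for candidates in model_tokens:  # candidates sorted by model score
        for tok in candidates:
            if allowed(prefix, tok):
                prefix += tok
                break
    return prefix

# The "model" prefers eval( at step one; the constraint forces the fallback.
out = constrained_decode([["eval(", "print("], ["'hi')"]])
```

Because the mask is applied before sampling, the guarantee holds for every output, not just with high probability; that is the sense in which the check is non-probabilistic.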
Ben Livshits retweeted
pashov@pashov·
🚨Claude Skills for Smart Contract Security are now here, all thanks to Trail of Bits. Plugins for verifying the security of audit fixes, scanning for common critical vulnerabilities, pattern-matching, and more. This will boost web3 developers and security researchers by A LOT🚀
Ben Livshits retweeted
Nethermind Security@NethermindSec·
The Nethermind Formal Verification team is introducing CertiPlonk, a framework for extracting Plonky3 constraints & verifying their correctness in @leanprover. CertiPlonk verifies circuit correctness without code changes. Supported by a grant from @ethereumfndn. More info ⬇️
Ben Livshits retweeted
Aniket Kate@aniketpkate·
KZG polynomial commitment paper has received the 2025 IACR Test-of-Time award for Asiacrypt 2010!🎉 Greg, Ian, and I are honored and grateful to receive the award. Thanks to @IACR_News for the selection. This award was only possible because of the efforts and interest from many researchers and developers in the blockchain and (zk)snarks space in the last ten years. So, thanks to all who have built their solutions on polynomial commitments. Let's keep making verification succinct!