![Ray [REDACTED]](https://pbs.twimg.com/profile_images/2020833070715154432/kaOg1Zon.jpg)
Greatest hack in history with @RayRedacted #samwatson #defcon32 #blackhat
Ray [REDACTED]
54K posts
@RayRedacted
Hacker, Researcher, Podcast Producer (Tribe of Hackers, Darknet Diaries). Proud dad of the fastest climber in the world. Ever. “Ut scandis, alios subleva”
![Ray [REDACTED] tweet media](https://pbs.twimg.com/media/HDI4OWKW0AI0XHT.jpg)

![Ray [REDACTED] tweet media](https://pbs.twimg.com/media/HDAmYvqXwAAB6hG.jpg)
Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking all of them up, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was an already fairly well-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually: you come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of what I've done daily for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of experimental results and used them to plan the next experiments. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually before, and they stack up and actually improved nanochat.

Among the bigger findings:

- It noticed an oversight that my parameterless QK-norm didn't have a scale multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the value embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.
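For readers unfamiliar with the QK-norm point above, here is a minimal PyTorch sketch (illustrative only, not nanochat's actual code) of why a parameterless QK-norm wants a scale multiplier: once queries and keys are unit-normalized, every logit lies in [-1, 1], so without an extra multiplier the softmax stays nearly uniform and attention is diffuse.

```python
import torch
import torch.nn.functional as F

def qk_norm_attention(q, k, v, scale_mult=1.0):
    # Parameterless QK-norm: project queries and keys onto the unit sphere.
    # Unit vectors bound every logit to [-1, 1]; scale_mult sharpens the
    # softmax back up (scale_mult=1.0 reproduces the "too diffuse" case).
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    logits = q @ k.transpose(-2, -1) * scale_mult
    return F.softmax(logits, dim=-1) @ v

# Toy shapes: (batch, heads, seq, head_dim).
torch.manual_seed(0)
q = torch.randn(1, 2, 8, 16)
k = torch.randn(1, 2, 8, 16)
v = torch.randn(1, 2, 8, 16)
out = qk_norm_attention(q, k, v, scale_mult=12.0)
```

Whether the multiplier is a fixed constant or learned per head is exactly the kind of knob the agent was tuning.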
This is on top of all the tuning I've already done over a good amount of time. The exact commit from this "round 1" of autoresearch is here: github.com/karpathy/nanoc… I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism.

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale, of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, have them collaborate to tune smaller models, promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. More generally, *any* metric you care about that is reasonably efficient to evaluate (or that has a more efficient proxy, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
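The propose/evaluate/promote loop described above can be sketched in a few lines. This is a hedged toy, not the actual autoresearch harness: `propose` and `evaluate` are hypothetical stand-ins for the agent generating a code change and for a cheap small-model training run that returns validation loss (or a proxy metric).

```python
import random

def autoresearch_round(baseline_cfg, propose, evaluate, n_trials=20):
    """One round of the loop: propose a change from the history of past
    results, evaluate it cheaply, and promote it only if the metric improves."""
    best_cfg = baseline_cfg
    best_loss = evaluate(baseline_cfg)
    history = [(baseline_cfg, best_loss)]
    for _ in range(n_trials):
        cfg = propose(best_cfg, history)   # plan next experiment from past results
        loss = evaluate(cfg)               # cheap proxy evaluation
        history.append((cfg, loss))
        if loss < best_loss:               # keep only measurable improvements
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

# Toy demo: the "config" is one scalar and the "loss" is (x - 3)^2,
# so the loop should hill-climb from 0.0 toward 3.0.
random.seed(0)
cfg, loss = autoresearch_round(
    0.0,
    propose=lambda best, hist: best + random.uniform(-1.0, 1.0),
    evaluate=lambda x: (x - 3.0) ** 2,
    n_trials=50,
)
```

Swapping the scalar for a training config and the quadratic for a real training run is the "just engineering" part; the promotion-to-larger-scales step is the same comparison applied across model sizes.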


Nicholas Carlini at [un]prompted. If you know Carlini, you know this is a startling claim.





Surround yourself with good people. Pay it forward. Hug your friends every chance you get. Purge toxic people from your life.
🚀 Introducing the Qwen 3.5 Medium Model Series: Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B

✨ More intelligence, less compute.

• Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B, a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts.
• Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models, especially in more complex agent scenarios.
• Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring:
  – 1M context length by default
  – Official built-in tools

🔗 Hugging Face: huggingface.co/collections/Qw…
🔗 ModelScope: modelscope.cn/collections/Qw…
🔗 Qwen3.5-Flash API: modelstudio.console.alibabacloud.com/ap-southeast-1…

Try in Qwen Chat 👇
Flash: chat.qwen.ai/?models=qwen3.…
27B: chat.qwen.ai/?models=qwen3.…
35B-A3B: chat.qwen.ai/?models=qwen3.…
122B-A10B: chat.qwen.ai/?models=qwen3.…

Would love to hear what you build with it.
1. @JackRhysider told me SaintCon was absolutely awesome last year. I hear that every year!
2. Sam is in SLC now & could attend too.
3. I'd been meaning to put together a crash course on "AI Basics for Hackers, & Vice Versa."
4. Will be premiering this new talk on Thursday!
Introducing a new tool called "SideChannel": a secure alternative to OpenClaw. It uses Signal for communication and has Claude integration. I built SideChannel, an open-source Signal bot that connects Claude AI to your entire development workflow. End-to-end encrypted. From your pocket.

The real power is autonomous development. Send one message like "Build a REST API with auth, pagination, and tests" and SideChannel will:
- Generate a full PRD with stories and atomic tasks.
- Dispatch up to 10 parallel workers (each running Claude).
- Independently verify every task with a separate Claude context.
- Run quality gates to catch regressions.
- Auto-fix failures.
- Send you progress updates via Signal as work completes.

Every piece of code is reviewed by a separate AI context using a fail-closed security model. If it detects security issues, backdoors, or logic errors, the code gets rejected automatically. No rubber stamps.

It also has memory that actually works. Conversations are stored with vector embeddings for semantic search. Claude remembers your project conventions, past decisions, and what's been tried before. It gets smarter about your codebase over time.

Other things I'm proud of:
- Plugin framework for extending with custom commands.
- Multi-project support with per-user scoping.
- Rate limiting, path validation, phone allowlist.
- Git checkpoints before every task, atomic commits after.
- Stale task recovery, circular dependency detection.
- Works on Linux and macOS, one-command install.

It can also optionally integrate with OpenAI or Grok for generative responses to simple things like "What's the weather in New York City right now?"

github.com/hackingdave/si…
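The parallel-dispatch plus fail-closed review pattern described above can be sketched with the standard library. Function names here are illustrative, not SideChannel's actual API: each task runs in its own worker, and a separate verifier judges every result, with anything not explicitly approved getting rejected.

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch_and_verify(tasks, worker, verifier, max_workers=10):
    """Run tasks in parallel workers, then review each result separately.
    Fail-closed: a verifier error counts as a rejection, never an approval."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(worker, tasks))
    approved, rejected = [], []
    for task, result in zip(tasks, results):
        try:
            ok = verifier(task, result)
        except Exception:
            ok = False  # review failure must not wave work through
        (approved if ok else rejected).append((task, result))
    return approved, rejected

# Toy demo: "workers" uppercase their task, the "verifier" bans one keyword.
approved, rejected = dispatch_and_verify(
    ["add auth", "add pagination", "add backdoor"],
    worker=str.upper,
    verifier=lambda task, result: "BACKDOOR" not in result,
)
```

The `try/except` around the verifier is the essence of fail-closed: the default outcome on any uncertainty is rejection.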


