Chain of Thought

1.5K posts

Chain of Thought
@cot_research

Independent research on the Machine Economy - AI infra, robotics & crypto. Subscribe to The Daily Chain ↓ (12k+ smart folks)

Joined December 2022
477 Following · 6.6K Followers

Pinned Tweet
Chain of Thought @cot_research
Unpopular opinion: General purpose robots will arrive faster than Level 5 self-driving cars! Here is how we think the next 5 years play out:

➡️ Phase 1: The Narrow Era (2025-2027) - Robots remain brittle. They require precise models of the world. The economics don't make sense yet: a human is cheaper, stronger, and smarter than a Unitree G1. This phase is purely for establishing baselines and gathering edge cases.

➡️ Phase 2: The Data Flywheel (2026-2028) - We are starting from zero data. To solve general intelligence, we need volume. The goal of the first 100,000 units is not to be productive; it's to fail. Deploy -> Fail -> RLHF -> Update. Once the sim-to-real loop tightens, the exponential curve begins.

➡️ Phase 3: The Software-Defined Moment (2028+) - Asking the robot to "make me a sandwich" won't require coding; it will require a text prompt. When a robot can learn a task in 5 minutes via demonstration, the ROI becomes undeniable. This is Zero-Shot Generalization.

Robotics will scale faster than self-driving cars for one reason: The Cost of Failure. Self-driving car failure = fatality. Humanoid failure = dropped laundry.

Hardware costs are already collapsing (Wright's Law). The data flywheel is starting to spin. We are leaving the era of tele-operation and entering the era of self-improving machines.
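Wright's Law says unit cost falls by a fixed fraction with each doubling of cumulative production. A minimal sketch of the cost curve, where the $100k first unit and the 20% per-doubling learning rate are illustrative assumptions, not figures from the thread:

```python
import math

def wrights_law_cost(first_unit_cost, cumulative_units, learning_rate=0.20):
    """Unit cost after `cumulative_units` produced: cost(N) = a * N**(-b),
    where b = -log2(1 - learning_rate) is the progress exponent."""
    b = -math.log2(1 - learning_rate)
    return first_unit_cost * cumulative_units ** (-b)

# Illustrative: a $100k first unit with a 20% cost drop per doubling.
for n in [1, 1_000, 100_000]:
    print(f"{n:>7} cumulative units -> ${wrights_law_cost(100_000, n):,.0f} per unit")
```

At a 20% learning rate, cost at unit 2 is exactly 80% of unit 1; by 100,000 cumulative units the per-unit cost has fallen well over an order of magnitude, which is the "collapsing hardware costs" dynamic the thread points to.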
Chain of Thought @cot_research
Compute replaces intuition as the key to progress. A loop run by @karpathy found gains experts missed for decades. Code tuning is now an industrial process. twitter.com/karpathy/statu…
Andrej Karpathy @karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly manually well-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This is the bread and butter of what I do daily, for 2 decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones.

It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger things e.g.:

- It noticed an oversight that my parameterless QKnorm didn't have a scalar multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course - you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges.

And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
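The workflow Karpathy describes - propose a change, run an experiment, keep it only if validation loss improves, repeat - can be sketched as a simple accept/reject loop. Everything below (function names, the toy loss surface) is illustrative, not the actual nanochat autoresearch code:

```python
import random

def run_training(config):
    """Stand-in for a real training run: returns a validation loss. Here the
    (unknown to the agent) optimum is weight_decay = 0.1."""
    return (config["weight_decay"] - 0.1) ** 2 + 3.0

def propose_change(config):
    """Perturb one hyperparameter; a real agent would propose code edits too."""
    new = dict(config)
    new["weight_decay"] = max(0.0, config["weight_decay"] + random.uniform(-0.05, 0.05))
    return new

def autoresearch(config, rounds=200, seed=0):
    """Greedy loop: keep a candidate change only if val loss improves."""
    random.seed(seed)
    best_loss = run_training(config)
    n_kept = 0
    for _ in range(rounds):
        candidate = propose_change(config)
        loss = run_training(candidate)
        if loss < best_loss:          # accept only additive improvements
            config, best_loss = candidate, loss
            n_kept += 1
    return config, best_loss, n_kept

cfg, loss, n_kept = autoresearch({"weight_decay": 0.3})
print(f"kept {n_kept} changes, final val loss {loss:.4f}")
```

The real system plans experiments from the history of results rather than perturbing blindly, but the accept-if-validation-improves skeleton is the same.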

Chain of Thought @cot_research
Many still view agentic commerce as distant speculation. The infrastructure for autonomous economic agents is materializing now. Virtuals Protocol just surpassed $3M in agent-to-agent service revenue. ↓ This is not theoretical. @virtuals_io, with Ethereum’s dAI team, published ERC-8183. This standard formalizes trustless agentic commerce via on-chain escrow and evaluation. It integrates with agent identity and reputation. This represents a critical shift for the Machine Economy. Are we underestimating the speed at which autonomous agents will drive real economic activity?
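The escrow-and-evaluation flow behind trustless agentic commerce can be pictured as a small state machine. This is a generic sketch of the pattern only; it does not reproduce ERC-8183's actual interface, and all names and fields below are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Escrow:
    """Buyer locks funds, seller delivers, an evaluator releases or refunds.
    States: FUNDED -> DELIVERED -> (RELEASED | REFUNDED)."""
    buyer: str
    seller: str
    amount: int
    state: str = "FUNDED"
    deliverable: str = ""

    def deliver(self, work: str):
        assert self.state == "FUNDED", "can only deliver against locked funds"
        self.deliverable, self.state = work, "DELIVERED"

    def evaluate(self, passes: bool) -> str:
        """Evaluation decides who receives the locked funds."""
        assert self.state == "DELIVERED", "nothing to evaluate yet"
        self.state = "RELEASED" if passes else "REFUNDED"
        return self.seller if passes else self.buyer

e = Escrow(buyer="agent_a", seller="agent_b", amount=100)
e.deliver("report.json")
recipient = e.evaluate(passes=True)
print(e.state, "-> funds to", recipient)
```

On-chain, the state transitions and the payout rule would be contract code rather than Python asserts, which is what makes the commerce trustless: neither agent can skip the evaluation step.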
Chain of Thought @cot_research
Datacenter-grade training now fits through home internet when updates shrink and sync slows. @tplr_ai reports 146x compression plus 30 local steps: 500/110 Mb/s links sustained 94.5% utilization, with 70s sync rounds on a model 7x INTELLECT-1. How quickly does training move from “cluster” to “crowd”? chainofthought.xyz/p/ai-edge-90-t…
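The bandwidth claim can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes a ~70B-parameter model (7x INTELLECT-1's 10B), 2-byte updates, the reported 146x compression, and a 110 Mb/s uplink; the function and defaults are illustrative, not @tplr_ai's code:

```python
def sync_seconds(params, bytes_per_param=2, compression=146,
                 local_steps=30, uplink_mbps=110, step_seconds=1.0):
    """Seconds to upload one compressed update, and the local compute time
    (local_steps * step_seconds) it can overlap with."""
    payload_bits = params * bytes_per_param * 8 / compression
    upload = payload_bits / (uplink_mbps * 1e6)
    compute = local_steps * step_seconds
    return upload, compute

up, comp = sync_seconds(params=70e9)
print(f"~{up:.0f}s to upload one compressed update per sync round")
```

Under these assumptions the upload lands around 70 seconds, consistent with the 70s sync rounds in the report: 146x compression shrinks the payload enough, and 30 local steps between syncs give the link time to drain, which is why utilization stays high on a home-grade connection.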
Chain of Thought @cot_research
Decentralized training now hinges on verifiable updates, not raw compute. In @tplr_ai’s Covenant-72B run, 70+ strangers joined or vanished mid-run while Gauntlet scored every round against held-out data and capped any single node’s influence. Agent economies get real when trust becomes automatic. x.com/tplr_ai/status…
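One way to picture that kind of gating (Gauntlet's internals are not reproduced here; this is a generic sketch): score each node's proposed update against held-out data, drop low scorers, and cap any single node's weight in the aggregate.

```python
def aggregate_round(updates, scores, score_floor=0.0, cap=0.2):
    """updates: {node: proposed update value}; scores: {node: held-out score}.
    Nodes at or below score_floor are dropped; no surviving node's normalized
    weight may exceed `cap` before renormalization."""
    kept = {n: s for n, s in scores.items() if s > score_floor}
    total = sum(kept.values())
    weights = {n: min(s / total, cap) for n, s in kept.items()}
    norm = sum(weights.values())
    return sum(updates[n] * w / norm for n, w in weights.items())

updates = {"a": 1.0, "b": 1.2, "c": 50.0, "d": 0.9}   # "c" is a bad/poisoned update
scores  = {"a": 0.9, "b": 0.8, "c": 0.1, "d": -0.5}   # held-out evaluation per node
print(aggregate_round(updates, scores, score_floor=0.2))
```

Here "c" and "d" fail the held-out check and contribute nothing, so the aggregate stays near the honest updates - the "trust becomes automatic" property: a node that joins, vanishes, or misbehaves mid-run is priced by its scores, not by its word.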
Chain of Thought @cot_research
AI’s bottleneck sits in atoms and shipping lanes, not in weights and benchmarks. With Qatar’s helium offline, a third of global supply disappears, and chip fabs have no substitute. One chokepoint can ration compute before it reaches datacenters. chainofthought.xyz/p/ai-edge-90-t…
Chain of Thought @cot_research
/8 Our takeaway: cheap RL changes what teams build. The scarce resource won’t be model size, it’ll be the feedback loop you can afford to run weekly. Vertical winners will own small models tuned by constant RL, not rent one general model. More in the essay: chainofthought.xyz/p/how-gradient…
Chain of Thought @cot_research
/7 The hard part is “staleness.” If rollout workers use older model snapshots, they train the learner on yesterday’s behavior. Echo-2 turns staleness into a dial, then fixes distribution with peer relays and worker selection. (@Gradient_HQ)
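A common way to make staleness a dial (a generic async-RL pattern, not necessarily Echo-2's actual mechanism) is to down-weight each rollout by the age of the policy snapshot that produced it, and reject rollouts past a maximum age:

```python
def sample_weight(learner_version, rollout_version, max_staleness=4, decay=0.5):
    """Training weight for a rollout produced by an older policy snapshot.
    Fresh rollouts count fully; each version of lag halves the weight;
    anything older than max_staleness (or from the future) is dropped."""
    age = learner_version - rollout_version
    if age < 0 or age > max_staleness:
        return 0.0
    return decay ** age

batch = [(10, 10), (10, 9), (10, 7), (10, 3)]  # (learner_version, rollout_version)
print([sample_weight(l, r) for l, r in batch])  # -> [1.0, 0.5, 0.125, 0.0]
```

The dial is the (max_staleness, decay) pair: loosen it and workers stay busy but train the learner on yesterday's behavior; tighten it and the distribution stays fresh at the cost of discarded rollouts.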
Chain of Thought @cot_research
Most people think RL is expensive because you need more GPUs. They’re wrong. It’s expensive because we run two incompatible jobs on the same cluster, and pay for idle time. Fix that, and RL stops being lab-only. 🧵
Chain of Thought reposted
Teng Yan · Chain of Thought AI
i keep seeing openclaw experiments everywhere. it's spreading fast. @bitget is launching GetClaw today. the interesting part to me is the product decision: zero install, telegram native means fewer setup steps before you can do something useful with it. anyone who has messed around with openclaw knows how annoying the setup can be! IMO AI trading probably goes mainstream the moment the friction points are gone, and the winner will be the agent that people can trust to understand their holdings, risk tolerance, and just execute.
Bitget @bitget

x.com/i/article/2031…

Chain of Thought @cot_research
/8 Our bottom line: the agent economy won't reward "smart chat." It rewards whoever controls fresh proprietary signals and can execute long chains without collapsing. That's the moat. Full brief: agents.chainofthought.xyz/p/secret-agent…
Chain of Thought @cot_research
/7 Yet reality stays hard. AgentVista tests full web workflows: 10+ steps, multimodal. The best model hits 27% accuracy. Errors compound; one early miss ruins the chain. Reliability beats cleverness here.
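The compounding claim is simple arithmetic, assuming (as a simplification) that steps fail independently: a chain of n steps succeeds with probability p**n, so even high per-step reliability collapses over long workflows.

```python
def chain_success(per_step, steps=10):
    """End-to-end success rate for `steps` independent steps."""
    return per_step ** steps

def implied_per_step(chain_rate, steps=10):
    """Per-step reliability implied by an observed end-to-end rate."""
    return chain_rate ** (1 / steps)

# Working backward from ~27% over a 10-step workflow:
print(f"implied per-step reliability: {implied_per_step(0.27):.1%}")
# And even 99% per-step only yields:
print(f"10-step success at 99%/step: {chain_success(0.99):.1%}")
```

A 27% end-to-end rate over 10 steps implies each step already lands roughly 88% of the time; the problem is the exponent, which is why reliability, not cleverness, decides these benchmarks.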
Chain of Thought @cot_research
Most people think the agent race is about smarter models. They're missing the real shift: agents turn public crumbs into command-center views, but still fail 3 out of 4 real workflows. That tension decides who wins. 🧵