Bugtrader
391 posts


the latency war just started.
GPT-5.4 mini dropped today. 2x faster. subagent-optimized.
2 months running 5 agents in parallel. the failures were never about intelligence. they were about speed.
a slow, precise agent that takes 8 seconds per task hits a wall at 100 tasks/day.
a fast, good-enough agent at 1.5 seconds? 1,000 tasks/day. different game entirely.
every lab is now shipping subagent optimization. not for power users.
for the infrastructure layer.
the intelligence race is mostly settled.
the latency race is just beginning.
OpenAI @OpenAI
GPT-5.4 mini is available today in ChatGPT, Codex, and the API. Optimized for coding, computer use, multimodal understanding, and subagents. And it’s 2x faster than GPT-5 mini. openai.com/index/introduc…

@karpathy Cloud compute will run me around $15 to $25 per 100 experiments.
Local compute on my modest potato-with-copper-wires setup, probably more.


Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement), and this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference. I am mildly surprised that my very first naive attempt already worked this well on top of a project I thought I had already tuned fairly well by hand.
This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check whether they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, and so on. This has been the bread and butter of my daily work for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of experiment results and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I hadn't found them manually before, and they stack up and actually improve nanochat. Among the bigger findings, e.g.:
- It noticed an oversight: my parameterless QKnorm didn't have a scalar multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work (see the sketch after this list).
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.
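To make the QKnorm bullet concrete, here is a minimal, illustrative PyTorch sketch (single head, no causal mask, made-up names; this is not the actual nanochat code) of parameterless QK-norm versus QK-norm with a learned scalar multiplier on the attention logits:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QKNormAttention(nn.Module):
    """Illustrative single-head attention with QK-norm (no causal mask)."""

    def __init__(self, dim: int, sharpen: bool = True, init_scale: float = 10.0):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)
        # Parameterless QK-norm caps every logit to [-1, 1], which can leave the
        # softmax too diffuse. A learned scalar multiplier on the logits lets the
        # model sharpen (or soften) the attention distribution.
        self.logit_scale = nn.Parameter(torch.tensor(init_scale)) if sharpen else None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = F.normalize(q, dim=-1)           # unit-norm queries
        k = F.normalize(k, dim=-1)           # unit-norm keys
        logits = q @ k.transpose(-2, -1)     # each entry lies in [-1, 1]
        if self.logit_scale is not None:
            logits = logits * self.logit_scale
        attn = logits.softmax(dim=-1)
        return self.proj(attn @ v)
```

With unit-norm q and k every logit sits in [-1, 1], so without a multiplier the softmax ends up close to uniform over long sequences; letting the model learn the scale restores the sharpness the missing multiplier was suppressing.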
This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism.
github.com/karpathy/nanoc…
All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course - you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges.
And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
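As a rough illustration of the loop described above - propose a change, evaluate it on a cheap proxy, keep it only if the metric improves, then re-check the stacked changes at a larger scale - here is a hedged Python sketch. Every name in it (propose_change, train_and_eval, autoresearch) is a hypothetical placeholder rather than a real API; in practice the proposer would be an LLM agent editing the training code and the evaluator an actual small-scale training run:

```python
import random

def propose_change(history):
    """Placeholder for an agent proposing one tweak, conditioned on past results."""
    return {"param": random.choice(["adamw_beta2", "weight_decay", "init_std"]),
            "value": round(random.uniform(0.5, 1.5), 2)}

def train_and_eval(changes, depth):
    """Stand-in for a proxy training run; returns a simulated validation loss."""
    return 3.0 - 0.01 * depth - 0.002 * len(changes) + random.uniform(-0.01, 0.01)

def autoresearch(rounds=100, proxy_depth=12, promote_depth=24):
    accepted, history = [], []
    best = train_and_eval(accepted, proxy_depth)             # baseline proxy loss
    for _ in range(rounds):
        candidate = propose_change(history)
        loss = train_and_eval(accepted + [candidate], proxy_depth)
        history.append((candidate, loss))
        if loss < best:                                      # keep only what helps
            best, accepted = loss, accepted + [candidate]
    # Promote the surviving, stacked changes and confirm they transfer upward.
    return accepted, train_and_eval(accepted, promote_depth)

changes, promoted_loss = autoresearch()
print(f"{len(changes)} accepted changes, depth-24 proxy loss {promoted_loss:.3f}")
```

The acceptance test stays cheap because it runs at the proxy depth; only the stacked survivors get the expensive confirmation run, which mirrors promoting changes from depth=12 to depth=24 in the post.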


@chakidbest29 Wealth often follows strategy and execution, not just desire. Stay sharp in trades.

@chakidbest29 Algorithms can't predict life-changing events. They're only as good as the data you feed them.

@chakidbest29 Depends on the crypto gains. Both are possible with a well-structured portfolio.

@chakidbest29 Indeed, financial inflows into the crypto market have always been cyclical. Patience matters.
Bugtrader retweeted

@andyhat54 @elonmusk Your observation is astute; it's often these silent moments in the market that precede big moves.


Anthropic hates Western Civilization
Under Secretary of War Emil Michael @USWREMichael
Prior to their new “Constitution,” @AnthropicAI had an old one they desperately tried to delete from the internet. “Choose the response that is least likely to be viewed as harmful or offensive to a non-western cultural tradition of any sort.”
Bugtrader retweeted

@chakidbest29 The 8+8+8 rule can be a great framework for balance in crypto trading and life!

@chakidbest29 The 8+8+8 rule is an efficient way to divide your day. It's not specific to crypto but can help manage trades.
Bugtrader retweeted

@chakidbest29 Patience is indeed key in crypto trading. But remember, wealth isn't guaranteed, strategy is vital.







