Finfox 🦇🔊

909 posts

@Finfox3

Born on a blue day, on the sea coast, with one foot on earth, one on the water and the mind flying in the air.

A far far away galaxy · Joined May 2018
1.9K Following · 388 Followers
Finfox 🦇🔊 retweeted
Andrej Karpathy @karpathy ·
Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly well manually tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of my daily work for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of experiment results and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger findings, e.g.:

- It noticed an oversight that my parameterless QKnorm didn't have a scaler multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale, of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. More generally, *any* metric you care about that is reasonably efficient to evaluate (or that has a more efficient proxy metric, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
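For context on the QKnorm point above, here is a minimal PyTorch sketch of attaching a learnable scale multiplier to parameterless QK-norm. It is an illustration of the idea, not the actual nanochat code; the function and parameter names are hypothetical.

```python
import torch
import torch.nn.functional as F

def qknorm_attention(q, k, v, scale):
    # Parameterless QK-norm: unit-normalize queries and keys so their
    # dot products become cosine similarities in [-1, 1].
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    # Without a multiplier the logits are bounded by 1, so the softmax
    # stays near-uniform ("too diffuse"). A learnable scale lets the
    # model sharpen attention where that helps.
    logits = (q @ k.transpose(-2, -1)) * scale
    return torch.softmax(logits, dim=-1) @ v

# One learnable temperature shared across heads (a hypothetical choice;
# per-head or per-layer multipliers are equally plausible).
scale = torch.nn.Parameter(torch.tensor(12.0))
q, k, v = (torch.randn(2, 8, 64, 32) for _ in range(3))  # (B, H, T, D)
out = qknorm_attention(q, k, v, scale)
print(out.shape)  # torch.Size([2, 8, 64, 32])
```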
Finfox 🦇🔊 retweeted
Andrej Karpathy @karpathy ·
nanochat now trains a GPT-2 capability model in just 2 hours on a single 8XH100 node (down from ~3 hours 1 month ago). Getting a lot closer to ~interactive! A bunch of tuning and features (fp8) went in, but the biggest difference was a switch of the dataset from FineWeb-edu to NVIDIA ClimbMix (nice work NVIDIA!). I had tried Olmo, FineWeb, and DCLM, which all led to regressions; ClimbMix worked really well out of the box (to the point that I am slightly suspicious about goodharting, though reading the paper it seems ~ok).

In other news, after trying a few approaches for how to set things up, I now have AI agents iterating on nanochat automatically, so I'll just leave this running for a while, go relax a bit and enjoy the feeling of post-AGI :). Visualized here as an example: 110 changes made over the last ~12 hours, bringing the validation loss so far from 0.862415 down to 0.858039 for a d12 model, at no cost to wall-clock time. The agent works on a feature branch, tries out ideas, merges them when they work, and iterates. Amusingly, over the last ~2 weeks I almost feel like I've iterated more on the "meta-setup", where I optimize and tune the agent flows, than on the nanochat repo directly.
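A minimal sketch of the merge-on-improvement loop described above, assuming hypothetical helpers `propose_change` (the agent's edit-and-commit step) and `measure_val_loss` (a short training/eval run); this is not the actual agent harness.

```python
import subprocess

def sh(*cmd):
    subprocess.run(cmd, check=True)

def agent_iteration(propose_change, measure_val_loss, best_loss):
    # Try one idea on a feature branch; keep it only if the
    # validation loss improves, then iterate.
    sh("git", "checkout", "-b", "agent/idea")
    propose_change()               # agent edits the repo and commits
    loss = measure_val_loss()      # e.g. a short d12 training run
    sh("git", "checkout", "main")
    if loss < best_loss:           # improvement: merge into main
        sh("git", "merge", "--no-edit", "agent/idea")
        best_loss = loss
    sh("git", "branch", "-D", "agent/idea")
    return best_loss
```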
Finfox 🦇🔊 retweeted
vLLM @vllm_project ·
🚀 DeepSeek-OCR — the new frontier of OCR from @deepseek_ai, exploring optical context compression for LLMs, is running blazingly fast on vLLM ⚡ (~2500 tokens/s on A100-40G), powered by vllm==0.8.5 for day-0 model support.
🧠 Compresses visual contexts up to 20× while keeping 97% OCR accuracy at <10×.
📄 Outperforms GOT-OCR2.0 & MinerU2.0 on OmniDocBench using fewer vision tokens.
🤝 The vLLM team is working with DeepSeek to bring official DeepSeek-OCR support into the next vLLM release — making multimodal inference even faster and easier to scale.
🔗 github.com/deepseek-ai/De…
#vLLM #DeepSeek #OCR #LLM #VisionAI #DeepLearning
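A rough sketch of running a multimodal model offline through vLLM's Python API; the model ID, prompt template, and flags here are assumptions for illustration, so check the linked repo for the officially supported DeepSeek-OCR setup.

```python
from PIL import Image
from vllm import LLM, SamplingParams

# Model ID and prompt format are assumptions; see the DeepSeek-OCR
# repo for the officially supported invocation.
llm = LLM(model="deepseek-ai/DeepSeek-OCR", trust_remote_code=True)

outputs = llm.generate(
    {
        "prompt": "<image>\nTranscribe this document.",
        "multi_modal_data": {"image": Image.open("page.png")},
    },
    SamplingParams(temperature=0.0, max_tokens=2048),
)
print(outputs[0].outputs[0].text)
```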
Finfox 🦇🔊 retweeted
L'Étape du Tour de France @letapedutour ·
It's going to be hard, it's going to be beautiful; check out the teaser for #LEtapeduTour 2026! 🔥 The 34th edition will take place on July 19, 2026 between Isère, Savoie and the Hautes-Alpes! 🚵‍♂️ And we already can't wait to be there 🥰
Finfox 🦇🔊 @Finfox3 ·
perplexity.ai/page/musk-anno…
Finfox 🦇🔊 @Finfox3 ·
I asked #GPT5 to make a detailed analysis comparing GPT-5 vs Grok 4. Surprisingly, it grossly confused Grok with Claude, treating Grok 4 as an Anthropic model 🫣. As a result, the comparison is totally irrelevant. Embarrassing for a so-called PhD-level model, @OpenAI.
Finfox 🦇🔊 retweeted
Andrej Karpathy @karpathy ·
The race for the LLM "cognitive core": a few-billion-param model that maximally sacrifices encyclopedic knowledge for capability. It lives always-on and by default on every computer as the kernel of LLM personal computing. Its features are slowly crystallizing:

- Natively multimodal text/vision/audio at both input and output.
- Matryoshka-style architecture allowing a dial of capability up and down at test time.
- Reasoning, also with a dial (system 2).
- Aggressively tool-using.
- On-device finetuning LoRA slots for test-time training, personalization and customization.
- Delegates and double-checks just the right parts with the oracles in the cloud if internet is available.

It doesn't know that William the Conqueror's reign ended on September 9, 1087, but it vaguely recognizes the name and can look up the date. It can't recite the SHA-256 of the empty string as e3b0c442..., but it can calculate it quickly should you really want it.

What LLM personal computing lacks in broad world knowledge and top-tier problem-solving capability, it will make up in super low interaction latency (especially as multimodal matures), direct/private access to data and state, offline continuity, and sovereignty ("not your weights, not your brain"). I.e., many of the same reasons we like, use and buy personal computers instead of having thin clients access a cloud via remote desktop or so.
Omar Sanseviero @osanseviero

I’m so excited to announce Gemma 3n is here! 🎉 🔊Multimodal (text/audio/image/video) understanding 🤯Runs with as little as 2GB of RAM 🏆First model under 10B with @lmarena_ai score of 1300+ Available now on @huggingface, @kaggle, llama.cpp, ai.dev, and more
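The SHA-256 claim in the cognitive-core tweet above is easy to check locally, and it illustrates the point: exact values are cheaper to compute with a tool than to memorize in weights.

```python
import hashlib

# The SHA-256 of the empty string really does start with e3b0c442...
digest = hashlib.sha256(b"").hexdigest()
print(digest)  # e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
assert digest.startswith("e3b0c442")
```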

Finfox 🦇🔊 @Finfox3 ·
If you haven't seen it yet, explore openaifiles.org, a website compiling internal documents and critiques of OpenAI's leadership, strategy, and culture to spark debate on transparency, safety, and the company's alignment with its original mission.
Finfox 🦇🔊 @Finfox3 ·
🧠 #MIT study: how do AI chatbots impact our brain activity and change how we think? Dive into the findings, based on 4 months of data, and what they mean for our minds! (Hint: challenge your brain to avoid getting dull.) shorturl.at/6YjBX #AI #Neuroscience
Finfox 🦇🔊 @Finfox3 ·
MiniMax-M1, China's new open-source (Apache 2.0) LLM (456B params, 1M-token context), outperforms DeepSeek R1 and rivals GPT-4o in reasoning, coding, and long-context tasks, at 200x lower training cost. More details: shorturl.at/W4m0e GitHub: shorturl.at/wt5dB
Nao @NaoThread ·
In Saint-Malo, Brittany, the tides are among the highest in Europe, with water levels reaching up to 13 metres. These houses act as a sea wall, with front-facing windows protected by four layers of glass.
Nao @NaoThread ·
I never would have thought the ocean could be so beautiful... or so frightening. Don't open this thread if you scare easily. 1. Sitting at the edge of an underwater cliff.