Larry Covert

4.6K posts


@ldcovert

🇺🇸Backing America’s Most Critical Tech. Current life: https://t.co/cWTa0m2qUi Austin. Prior life: https://t.co/pS1ebNxNBz. No investment advice, views mine.

Austin, TX · Joined August 2009
3K Following · 1.3K Followers

Larry Covert retweeted
Massimo @Rainmaker1973
Before we had silicon chips, we had needle and thread. In the 1960s, NASA didn’t ‘upload’ code; they sewed it. To get Apollo 11 to the moon, skilled weavers (often called ‘Little Old Ladies’) literally hand-stitched software into physical objects.
43 · 481 · 2.5K · 112.9K

Larry Covert retweeted
Office of the DNI @ODNIgov
The Office of the Director of National Intelligence today released the 2026 Annual Threat Assessment (ATA) of the U.S. Intelligence Community. 🔗 Read the assessment here: dni.gov/files/ODNI/doc…
138 · 968 · 2.1K · 550.2K

Larry Covert retweeted
webAI @thewebAI
We're open-sourcing YOLO26-MLX: a native MLX implementation of the YOLO26 object detection models, built from the ground up for Apple Silicon. No PyTorch runtime. No external GPU infrastructure. Train and run real-time object detection directly on your Mac.

In internal benchmarks on an M4 Pro, the MLX implementation delivered up to 2.6x faster inference and 1.7x faster training compared to PyTorch with MPS. Accuracy stays within 0.5% of the official YOLO26 results.

This has powered object detection inside our products since the YOLOv8 generation. Now it's yours.

Read more: webai.com/blog/running-y…
GitHub: github.com/thewebAI/yolo-…
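The post doesn't show the repo's API, but the post-processing step every YOLO-style detector runs after the network's forward pass is IoU-based non-max suppression. A minimal NumPy sketch of that step (the function names and the 0.5 threshold are illustrative, not taken from the YOLO26-MLX repo):

```python
import numpy as np

def iou(box, boxes):
    """IoU of one box [x1, y1, x2, y2] against an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-max suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]   # highest-confidence first
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # drop every remaining box that overlaps the kept one too much
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
    return keep
```

An MLX port would express the same logic with `mlx.core` arrays on Apple Silicon; the algorithm itself is framework-independent.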
0 · 2 · 9 · 20.5K

Larry Covert retweeted
Johan Liebert @JohanLiebert_03
@chadwahl AI infrastructure is going local — powerful, portable, and deployable anywhere.
0 · 1 · 6 · 219

Larry Covert retweeted
Jorge Guajardo @jorge_guajardo
I strongly recommend this book. It is insightful, well written, full of anecdotes, and leaves the reader feeling smarter. I wish everyone in Mexico read the chapter on Germany, everyone in the US the chapters on Switzerland and Canada, and tech journalists the one on China.
8 · 342 · 1.8K · 63.3K

Larry Covert retweeted
Elon Musk @elonmusk
🥧
Teslaconomics @Teslaconomics

Happy 24th Birthday @SpaceX! 🚀 Exactly 24 years ago today - March 14, 2002 - Elon founded SpaceX. It only makes sense to now IPO the world’s most innovative company so any human can own a piece of this multiplanetary future! Ad Astra!

7.7K · 18.3K · 142.4K · 29.4M

Larry Covert retweeted
caleb j @cjd_labs
@MrGoldBro can’t forget atoms.co
3 · 4 · 240 · 62.4K

Larry Covert retweeted
Bilawal Sidhu @bilawalsidhu
Probably the most current look at Palantir’s Maven Smart System software. Here’s the DoW’s Chief AI Officer showing how it works:
382 · 1.2K · 9.5K · 2.4M

Larry Covert retweeted
webAI @thewebAI
When your product involves people's health data, the AI conversation changes. Oura has now sold more than 5.5 million rings worldwide, which means millions of people trust the company with deeply personal biometric data every night.

At SXSW, Tom Hale (Oura) and David Stout (webAI) will talk about what it takes to build AI systems people actually trust, and why custom, local models are becoming part of that answer.

📅 March 18 • 2:30 PM CT
🎙️ Moderated by Harriet Torry (WSJ)
Register here: schedule.sxsw.com/events/PP11623…
0 · 2 · 7 · 63.8K

Larry Covert retweeted
Peter H. Diamandis, MD @PeterDiamandis
If you’re downplaying the humanoid robot economy, you’re making the same mistake some people made about the internet in 1993. The infrastructure is being built right before your eyes.
344 · 509 · 5K · 328.4K

Larry Covert retweeted
CooperBaggs 💰🍞 @edgaralandough
You’re right. Christianity brainwashed me. Now I want to spend the rest of my life loving one woman, building a faithful marriage, raising a strong and beautiful family, praying for people who hate me, forgiving when it’s hard, staying far away from gossip and bitterness, and finding my peace in Jesus. If that’s brainwashing, I’m grateful for it.
694 · 4.4K · 34.3K · 435.8K

Larry Covert retweeted
Teng Yan · Chain of Thought AI
The most important sentence in Karpathy's whole post is probably this: anything with a measurable score and fast feedback will become something agents can optimize for you, automatically, with no humans involved.
Andrej Karpathy @karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference. I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly manually well-tuned project.

This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of what I do daily for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger things:

- It noticed an oversight that my parameterless QKnorm didn't have a scale multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the value embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges.

And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has a more efficient proxy metric, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
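Stripped to its skeleton, the loop Karpathy describes is greedy optimization over a config: propose a change, run the experiment, measure the metric, keep what helps, plan the next attempt. A toy sketch of that loop (the quadratic `evaluate` stands in for "train and measure validation loss", and the random `propose` stands in for the agent's reasoning; all names here are illustrative, not from nanochat):

```python
import random

def evaluate(config):
    """Toy stand-in for 'train the model, return validation loss'.
    Minimized at lr=0.02, weight_decay=0.1."""
    return (config["lr"] - 0.02) ** 2 + (config["weight_decay"] - 0.1) ** 2

def propose(config, rng):
    """Agent step: perturb one hyperparameter. Here it's random;
    a real agent would reason over the experiment history instead."""
    new = dict(config)
    key = rng.choice(sorted(new))
    new[key] *= rng.choice([0.5, 0.8, 1.25, 2.0])
    return new

def autoresearch(config, steps=700, seed=0):
    """Greedy loop: keep only changes that improve the metric."""
    rng = random.Random(seed)
    best, best_loss = config, evaluate(config)
    for _ in range(steps):
        cand = propose(best, rng)
        loss = evaluate(cand)
        if loss < best_loss:          # a 'real' improvement: keep it
            best, best_loss = cand, loss
    return best, best_loss

best, loss = autoresearch({"lr": 0.1, "weight_decay": 0.01})
```

The "anything with a measurable score and fast feedback" framing maps directly onto `evaluate`: the cheaper that call is (or the better its small-scale proxy), the more of the search an agent swarm can take over.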

55 · 176 · 2.1K · 150.9K