Nat Friedman
@natfriedman
5.7K posts
https://t.co/Lhh178sIjq
California · Joined February 2008
833 Following · 281.5K Followers

Nat Friedman reposted
Nat Friedman @natfriedman
Future shock, from here out.
23 replies · 39 reposts · 509 likes · 151.6K views

Nat Friedman @natfriedman
@ChrisPainterYup I'm not done yet. Just a little distracted by the singularity right now
7 replies · 3 reposts · 238 likes · 18.6K views

Chris Painter @ChrisPainterYup
Now that Nat Friedman is inside of a big company, who will step into his role of initiating and coordinating whimsical science/history projects?
11 replies · 3 reposts · 151 likes · 26K views

Nat Friedman @natfriedman
Started work at Meta this week. My job is to make amazing AI products that billions of people love to use. It won't happen overnight, but a few days in, I'm feeling confident that great things are ahead.
319 replies · 120 reposts · 6.1K likes · 899.8K views

Nat Friedman @natfriedman
@kevreuss Right now I'm just trying to meet everyone here and see what they're doing.
3 replies · 0 reposts · 37 likes · 11.6K views

Kev @kevreuss
@natfriedman How do you even start the mission of "make amazing AI products that billions of people love to use"? What are the things you do in weeks 1, 2, 3, 4? I always wondered how to start with something like this.
2 replies · 0 reposts · 13 likes · 15.2K views

Emily @IamEmily2050
@natfriedman Yes, please, let's have the best image and video model. I am sure everyone will love SOTA 🙏🙏
1 reply · 0 reposts · 9 likes · 36.1K views

Marvin von Hagen @marvinvonhagen
@natfriedman can you publicly promise to continue doing cool stuff like plasticlist and vesuvius 🙏🏼
2 replies · 0 reposts · 95 likes · 43.7K views

Nat Friedman @natfriedman
Civilization is constant maintenance
49 replies · 85 reposts · 900 likes · 140.7K views

Nat Friedman @natfriedman
USA: passing 1000 state bills to slow down AI
China: passing American AI on leaderboards
Artificial Analysis @ArtificialAnlys

DeepSeek's R1 leaps over xAI, Meta and Anthropic to be tied as the world's #2 AI lab and the undisputed open-weights leader.

DeepSeek R1 0528 has jumped from 60 to 68 in the Artificial Analysis Intelligence Index, our index of 7 leading evaluations that we run independently across all leading models. That's the same magnitude of increase as the difference between OpenAI's o1 and o3 (62 to 70). This positions DeepSeek R1 as higher intelligence than xAI's Grok 3 mini (high), NVIDIA's Llama Nemotron Ultra, Meta's Llama 4 Maverick and Alibaba's Qwen3 235B, and equal to Google's Gemini 2.5 Pro.

Breakdown of the model's improvement:
🧠 Intelligence increases across the board: biggest jumps seen in AIME 2024 (competition math, +21 points), LiveCodeBench (code generation, +15 points), GPQA Diamond (scientific reasoning, +10 points) and Humanity's Last Exam (reasoning & knowledge, +6 points)
🏠 No change to architecture: R1-0528 is a post-training update with no change to the V3/R1 architecture; it remains a large 671B model with 37B active parameters
🧑‍💻 Significant leap in coding skills: R1 is now matching Gemini 2.5 Pro in the Artificial Analysis Coding Index and is behind only o4-mini (high) and o3
🗯️ Increased token usage: R1-0528 used 99 million tokens to complete the evals in the Artificial Analysis Intelligence Index, 40% more than the original R1's 71 million tokens, i.e. the new R1 thinks for longer than the original R1. This is still not the highest token usage we have seen: Gemini 2.5 Pro uses 30% more tokens than R1-0528

Takeaways for AI:
👐 The gap between open and closed models is smaller than ever: open-weights models have continued to make intelligence gains in line with proprietary models. DeepSeek's R1 release in January was the first time an open-weights model achieved the #2 position, and today's R1 update brings it back to the same position
🇨🇳 China remains neck and neck with the US: models from China-based AI labs have all but completely caught up with their US counterparts, and this release continues that emerging trend. As of today, DeepSeek leads US-based AI labs including Anthropic and Meta in the Artificial Analysis Intelligence Index
🔄 Improvements driven by reinforcement learning: DeepSeek has shown substantial intelligence improvements with the same architecture and pre-train as its original DeepSeek R1 release. This highlights the continually increasing importance of post-training, particularly for reasoning models trained with reinforcement learning (RL) techniques. OpenAI disclosed a 10x scaling of RL compute between o1 and o3; DeepSeek has just demonstrated that, so far, it can keep up with OpenAI's RL compute scaling. Scaling RL demands less compute than scaling pre-training and offers an efficient way of achieving intelligence gains, supporting AI labs with fewer GPUs

See further analysis below 👇
45 replies · 113 reposts · 955 likes · 207.8K views

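The quoted thread describes an index aggregated from 7 independent evaluations and a ~40% jump in token usage (71M → 99M). A minimal sketch of that arithmetic, assuming an equal-weighted average for the index (Artificial Analysis does not state its exact weighting in this thread):

```python
# Hedged sketch of the arithmetic in the quoted thread. The equal weighting
# of evaluation scores is an assumption, not Artificial Analysis's method.

def intelligence_index(scores):
    """Equal-weighted average of per-evaluation scores (0-100 scale assumed)."""
    return sum(scores) / len(scores)

def pct_increase(old, new):
    """Relative increase between two figures, as a percentage."""
    return (new - old) / old * 100

# Token usage quoted in the thread: 71M tokens for the original R1,
# 99M for R1-0528 -- roughly the "40% more" the post mentions.
growth = pct_increase(71, 99)
print(round(growth))  # prints 39
```

Running the check reproduces the thread's headline figure: 99M is about 39–40% more than 71M tokens.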
Nat Friedman @natfriedman
Anarchotyranny, but at the nation-state level
3 replies · 6 reposts · 139 likes · 56.1K views

archivedvideos @archived_videos
@natfriedman True, but also currently a drunk boar could leap over Meta, and I don't think the problem is regulations
1 reply · 0 reposts · 9 likes · 1.8K views

Nat Friedman @natfriedman
@danielrruf He's a full-time employee of Vesuvius Challenge and therefore not eligible for prizes
0 replies · 0 reposts · 24 likes · 5.5K views

Daniel Riaño @danielrruf
@natfriedman That's great! I was wondering why Sean Johnson didn't get the same prize ex aequo?
1 reply · 0 reposts · 5 likes · 5.9K views

Nat Friedman @natfriedman
We found the title of a scroll for the first time! This cylinder of charcoal turns out to be "On Vices, Book 1" by Philodemus
157 replies · 585 reposts · 7.5K likes · 658.4K views