Nick Stockton

4.7K posts

@cookiechat4_0

Senior editor @aif_media | @WIRED alum | SHERP 31 @nyu_journalism | science & maps & AI & weather

Pittsburgh, PA · Joined February 2011
432 Following · 2.6K Followers
Benjamin Todd @ben_j_todd
I wrote 80,000 Hours ten years ago because I was frustrated at how terrible career advice can be. Today it’s even worse: still focused on traditional paths like law & medicine when we face AGI. To fix that, Penguin are publishing a fully updated edition.
15 replies · 42 reposts · 274 likes · 32.9K views
Nick Stockton @cookiechat4_0
I just started @80000Hours where I'll be helping with their (our?) excellent podcast. I'm so stoked to work on creating more like today's awesome episode: youtu.be/Z19UEZHJzAg
0 replies · 0 reposts · 1 like · 113 views
Nick Stockton retweeted
Rob Wiblin @robertwiblin
What the hell happened with AGI timelines in 2025? Was it just vibes? Faulty analysis? Unexpected technical results? I try to make sense of what drove the wild swings in sentiment:
• The great timelines contraction (00:47)
• Why timelines went back out again (02:10)
• Longstanding reasons AGI could take a long time (11:13)
• So what's the upshot of all of these updates? (14:47)
• 5 reasons the radical pessimists are still wrong (16:54)
• Even long timelines are short now (23:54)
(On the 80,000 Hours Podcast, links below.)
12 replies · 13 reposts · 185 likes · 71.7K views
Nick Stockton retweeted
AI Frontiers @ai_frontiers_
High-bandwidth memory (HBM) is vital to advanced AI. Gaps in U.S. export controls allow China to acquire HBM and the tools to produce it. @iapsAI experts Erich Grunewald and Raghav Akula explain how to close the gaps.
Quoted article: x.com/i/article/2018…
0 replies · 2 reposts · 4 likes · 430 views
Nick Stockton @cookiechat4_0
It's powerful to actually see frontier models scored by capability, safety, and automation, giving real, data-driven insight into where the biggest risks might be. An essential new tool for staying on top of everything happening in this field. dashboard.safe.ai
0 replies · 0 reposts · 0 likes · 34 views
Nick Stockton @cookiechat4_0
AGI debates have long been a mess of shifting definitions and hype. Finally, we have a framework that sets testable boundaries across core human domains and allows us to have grounded discussions about bottlenecks. ai-frontiers.org/articles/agis-…
0 replies · 0 reposts · 0 likes · 35 views
David Shapiro (L/0) @DaveShapi
AI is already having major positive impacts on business and science. AGI is mostly a Rorschach test. It's not a real thing. Empirically and objectively speaking, AI is advancing at a breakneck pace, is having positive impacts, and has no immediate or obvious bottlenecks. AGI is a red herring.
4 replies · 1 repost · 39 likes · 4.4K views
Matthew Berman @MatthewBerman
Karpathy really broke TPOT. He made some super interesting points, especially in his follow-up X post.

Yes, his AGI timeline is 10 years. Yes, this is way longer than most people I follow on X (except, of course, famous bear @GaryMarcus). And Karpathy has the credentials for people to take him seriously. He has been on the front line of AI for a long time, including at companies like DeepMind and Tesla. But let me break down what I took from his podcast and follow-up post:

1) We've seen incredible progress over the last few years since ChatGPT was released, but there's still a ton more to be done with scaffolding (memory, tool use, guardrails, etc.). This feels right to me. I've been saying scaffolding is all you need for a while. Model overhang has been talked about by leaders at frontier labs: there's a common idea that if model capability froze today, we'd still have years' worth of scaffolding to build to extract the full value from current model intelligence. As he pointed out, he's probably in the middle of the AI bulls and bears.

2) He doesn't believe a single AI algorithm will solve generalization. He says current AI learning isn't like how animals learn, with generations of evolution. Models are more like "ghosts": they memorize and mimic rather than truly learn and generalize. He calls these ghosts intelligent, just in a different way from animals. His point makes sense. Humans and other animals are born with tons of "built-in" intelligence, but it's not memorized facts; it's the substrate to learn and generalize. (More on that later.)

3) He is bearish on RL! This was the most surprising to me. He says RL has flaws, including both objective-reward and process-reward strategies. With objective reward, correct process steps are penalized if the solution is wrong. With process reward, incorrect process steps are rewarded if the solution is correct. He also points out that the learning signal per unit of compute is quite bad: it costs a lot of compute to teach the model. However, he IS bullish on agent interaction, meaning allowing AI agents to play in an environment and learn along the way. This is similar to how DeepMind created AlphaGo: just give AI the ability to learn, nothing memorized, and let it learn through play. This bolsters the position of "world simulation" labs such as World Labs (Fei-Fei Li) and the Genie team at DeepMind. Given how little physical data there is in the world (compared to text data), self-play in simulated environments seems like the only way forward for most embodied AI.

4) And last, "Cognitive Core." This is a culmination of the last two points. Karpathy describes the Cognitive Core as what you get by stripping AI down to its bare minimum: just enough to learn new things, but without memorized information. The more information is memorized, the less capable a model is of generalizing. He argues we should be aiming for smaller models, built into every device. I like this argument, and in the last few years we've seen a trend toward highly capable smaller models. In fact, just a couple of weeks ago there was a 7M-parameter model capable of solving puzzles at a frontier level. As massive frontier models get better, we can continue to boil them down to their bare essentials (distillation, etc.), and in parallel, small models get better.

I'm still more optimistic than Karpathy, but it's always good to have my optimism checked by the experts.
29 replies · 19 reposts · 347 likes · 68.1K views
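
To make the objective-vs-process reward point in (3) above concrete, here is a minimal toy sketch in Python. It is my own construction, not anything from Karpathy's post or a real RL library: the trace, step labels, and reward values are invented purely to illustrate the credit-assignment flaw being described.

```python
# Toy sketch (invented example) of the credit-assignment flaw in point (3).

def outcome_reward(steps, answer_correct):
    """Objective/outcome reward: every step inherits the final verdict,
    so correct intermediate steps are penalized whenever the answer is wrong."""
    r = 1.0 if answer_correct else -1.0
    return [r for _ in steps]

def process_reward(steps, answer_correct):
    """Naive process reward with an imperfect judge: when the final answer
    happens to be right, the judge blesses every step, rewarding bad ones too."""
    if answer_correct:
        return [1.0 for _ in steps]                  # incorrect steps rewarded
    return [1.0 if ok else -1.0 for _, ok in steps]  # steps judged individually

# A three-step solution: two valid steps and one invalid one.
trace = [("expand the equation", True),
         ("drop a sign", False),
         ("solve for x", True)]

print(outcome_reward(trace, answer_correct=False))  # [-1.0, -1.0, -1.0]: good steps punished
print(process_reward(trace, answer_correct=True))   # [1.0, 1.0, 1.0]: the bad step rewarded
```

Either failure mode muddies the gradient signal relative to the compute spent generating the trace, which is the per-compute complaint summarized above.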