Will Manning

391 posts


@willmanning

Husband & dad (x2), rugged individualist, co-founder/CEO @SpiralDB, project chair @vortexdotdev

New York, NY · Joined May 2009
1.6K Following · 580 Followers
Will Manning retweeted
Pratyush Maini@pratyushmaini·
If I had to compress my PhD into one idea, it is this: "The data a model sees early in training leaves an imprint on its representations that is very hard to undo later." This thread runs through:
- Rephrasing the Web
- Safety Pretraining
- TOFU
This is the Finetuner's Fallacy 🧵
Will Manning retweeted
Roman Helmet Guy@romanhelmetguy·
Warning: Do not adopt any new code editors this month. Beware the IDEs of March.
Will Manning@willmanning·
Fabulous write-up on world modeling. As a side note, I personally loved learning about #5, which feels like a throwback to methods from the early 2010s (before everyone became scaling-pilled)
Zhuokai Zhao@zhuokaiz

AMI Labs just raised $1.03B. World Labs raised $1B a few weeks earlier. Both are betting on world models. But almost nobody means the same thing by that term. Here are, in my view, five categories of world models.

---

1. Joint Embedding Predictive Architecture (JEPA)
Representatives: AMI Labs (@ylecun), V-JEPA 2

The central bet here is that pixel reconstruction alone is an inefficient objective for learning the abstractions needed for physical understanding. LeCun has been saying this for years — predicting every pixel of the future is intractable in any stochastic environment. JEPA sidesteps this by predicting in a learned latent space instead.

Concretely, JEPA trains an encoder that maps video patches to representations, then a predictor that forecasts masked regions in that representation space — not in pixel space. This is a crucial design choice. A generative model that reconstructs pixels is forced to commit to low-level details (exact texture, lighting, leaf position) that are inherently unpredictable. By operating on abstract embeddings, JEPA can capture "the ball will fall off the table" without having to hallucinate every frame of it falling.

V-JEPA 2 is the clearest large-scale proof point so far. It's a 1.2B-parameter model pre-trained on 1M+ hours of video via self-supervised masked prediction — no labels, no text. The second training stage is where it gets interesting: just 62 hours of robot data from the DROID dataset is enough to produce an action-conditioned world model that supports zero-shot planning. The robot generates candidate action sequences, rolls them forward through the world model, and picks the one whose predicted outcome best matches a goal image. This works on objects and environments never seen during training. (A minimal sketch of this planning loop follows below.)

The data efficiency is the real technical headline. 62 hours is almost nothing. It suggests that self-supervised pre-training on diverse video can bootstrap enough physical prior knowledge that very little domain-specific data is needed downstream. That's a strong argument for the JEPA design — if your representations are good enough, you don't need to brute-force every task from scratch.

AMI Labs is LeCun's effort to push this beyond research. They're targeting healthcare and robotics first, which makes sense given JEPA's strength in physical reasoning with limited data. But this is a long-horizon bet — their CEO has openly said commercial products could be years away.
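That planning loop is easy to make concrete. Below is a minimal runnable sketch of the idea, with loud caveats: `encode` and `predict_next` are toy stand-ins (random linear maps) for the pretrained V-JEPA 2 encoder and its action-conditioned predictor, and the random-shooting search is a simplification of the optimizer the real system runs over action sequences. None of these names come from the actual codebase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the pretrained models. In the real system these would be
# the V-JEPA 2 video encoder and its action-conditioned predictor; here they
# are random linear maps purely so the planning loop below actually runs.
OBS_DIM, LATENT_DIM, ACTION_DIM = 64, 32, 7
W_enc = rng.normal(size=(OBS_DIM, LATENT_DIM))
W_dyn = rng.normal(size=(LATENT_DIM + ACTION_DIM, LATENT_DIM)) * 0.1

def encode(obs):
    """Map observation features to a latent embedding (toy 'JEPA encoder')."""
    return np.tanh(obs @ W_enc)

def predict_next(latent, action):
    """Predict the next latent state given an action (toy predictor)."""
    return np.tanh(np.concatenate([latent, action]) @ W_dyn)

def plan(obs, goal_obs, horizon=8, n_candidates=256):
    """Random-shooting planner: sample candidate action sequences, roll each
    forward through the world model in latent space, and keep the sequence
    whose predicted final embedding lands closest to the goal embedding."""
    z0, z_goal = encode(obs), encode(goal_obs)
    best_score, best_first_action = np.inf, None
    for _ in range(n_candidates):
        actions = rng.uniform(-1, 1, size=(horizon, ACTION_DIM))
        z = z0
        for a in actions:
            z = predict_next(z, a)           # roll the world model forward
        score = np.linalg.norm(z - z_goal)   # distance to goal embedding
        if score < best_score:
            best_score, best_first_action = score, actions[0]
    return best_first_action  # execute it, observe, then replan (MPC-style)

action = plan(rng.normal(size=OBS_DIM), rng.normal(size=OBS_DIM))
print("first planned action:", np.round(action, 2))
```

The shape of the computation is the point: candidates are scored purely by distance between predicted and goal embeddings, so nothing ever has to render a pixel in order to plan.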
---

2. Spatial Intelligence (3D World Models)
Representative: World Labs (@drfeifei)

Where JEPA asks "what will happen next," Fei-Fei Li's approach asks "what does the world look like in 3D, and how can I build it?" The thesis is that true understanding requires explicit spatial structure — geometry, depth, persistence, and the ability to re-observe a scene from novel viewpoints — not just temporal prediction. This is a different bet from JEPA: rather than learning abstract dynamics, you learn a structured 3D representation of the environment that you can manipulate directly.

Their product Marble generates persistent 3D environments from images, text, video, or 3D layouts. "Persistent" is the key word — unlike a video generation model that produces a linear sequence of frames, Marble's outputs are actual 3D scenes with spatial coherence. You can orbit the camera, edit objects, export meshes. This puts it closer to a 3D creation tool than to a predictive model, which is deliberate.

For context, this builds on a lineage of neural 3D representation work (NeRFs, 3D Gaussian Splatting) but pushes toward generation rather than reconstruction. Instead of capturing a real scene from multi-view photos, Marble synthesizes plausible new scenes from sparse inputs. The challenge is maintaining physical plausibility — consistent geometry, reasonable lighting, sensible occlusion — across a generated world that never existed.

---

3. Learned Simulation (Generative Video + Latent-Space RL)
Representatives: Google DeepMind (Genie 3, Dreamer V3/V4), Runway GWM-1

This category groups two lineages that are rapidly converging: generative video models that learn to simulate interactive worlds, and RL agents that learn world models to train policies in imagination.

The video generation lineage. DeepMind's Genie 3 is the purest version — text prompt in, navigable environment out, 24 fps at 720p, with consistency for a few minutes. Rather than relying on an explicit hand-built simulator, it learns interactive dynamics from data. The key architectural property is autoregressive generation conditioned on user actions: each frame is generated based on all previous frames plus the current input (move left, look up, etc.). This means the model must maintain an implicit spatial memory — turn away from a tree and turn back, and it needs to still be there. DeepMind reports visual memory extending back about a minute, which is impressive but still far from what you'd need for sustained agent training.

Runway's GWM-1 takes a similar foundation — autoregressive frame prediction built on Gen-4.5 — but splits into three products: Worlds, Robotics, and Avatars. That decomposition suggests the practical generality problem is still being carved up by action space and use case.

The RL lineage. The Dreamer series has the longer intellectual history. The core idea is clean: learn a latent dynamics model from observations, then roll out imagined trajectories in latent space and optimize a policy via backpropagation through the model's predictions. The agent never needs to interact with the real environment during policy learning. (A sketch of this imagination loop follows at the end of this section.) Dreamer V3 was the first AI to get diamonds in Minecraft without human data. Dreamer 4 did the same purely offline — no environment interaction at all. Architecturally, Dreamer 4 moves from the series' earlier recurrent-style lineage to a more scalable transformer-based world-model recipe, and introduced "shortcut forcing" — a training objective that lets the model jump from noisy to clean predictions in just 4 steps instead of the 64 typical in diffusion models. This is what makes real-time inference on a single H100 possible.

These two sub-lineages used to feel distinct: video generation produces visual environments, while RL world models produce trained policies. But Dreamer 4 blurred the line — humans can now play inside its world model interactively, and Genie 3 is being used to train DeepMind's SIMA agents. The convergence point is that both need the same thing: a model that can accurately simulate how actions affect environments over extended horizons.

The open question for this whole category is one LeCun keeps raising: does learning to generate pixels that look physically correct actually mean the model understands physics? Or is it pattern-matching appearance? Dreamer 4's ability to get diamonds in Minecraft from pure imagination is a strong empirical counterpoint, but it's also a game with discrete, learnable mechanics — the real world is messier.
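To make "training in imagination" concrete, here is a hedged toy of the Dreamer-style loop in PyTorch. The real recipe (an RSSM in Dreamer V3, a transformer world model in Dreamer 4, plus value learning and stochastic latents) is far richer; the tiny MLPs below are hypothetical stand-ins that only show the control flow.

```python
import torch
import torch.nn as nn

# Toy sketch of Dreamer-style "training in imagination". The real models
# (an RSSM in Dreamer V3, a transformer world model in Dreamer 4) are far
# richer; these tiny MLPs are hypothetical stand-ins for the control flow.
LATENT, ACTION, HORIZON = 16, 4, 15

dynamics = nn.Sequential(nn.Linear(LATENT + ACTION, 64), nn.ELU(), nn.Linear(64, LATENT))
reward   = nn.Sequential(nn.Linear(LATENT, 64), nn.ELU(), nn.Linear(64, 1))
policy   = nn.Sequential(nn.Linear(LATENT, 64), nn.ELU(), nn.Linear(64, ACTION), nn.Tanh())

# Only the policy is optimized here; the world model would have been trained
# separately on real trajectories and is treated as frozen during this phase.
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(100):
    z = torch.randn(32, LATENT)            # latents of replayed observations
    total_reward = torch.zeros(())
    for t in range(HORIZON):               # imagined rollout: no env calls
        a = policy(z)
        z = dynamics(torch.cat([z, a], dim=-1))  # model predicts next latent
        total_reward = total_reward + reward(z).mean()
    loss = -total_reward                   # maximize predicted return...
    opt.zero_grad()
    loss.backward()                        # ...by backprop through the model
    opt.step()
```

Note what is absent: the environment. Gradients flow back through the frozen dynamics and reward models into the policy, which is exactly the sense in which the agent never needs to touch the real environment during policy learning.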
---

4. Physical AI Infrastructure (Simulation Platform)
Representative: NVIDIA Cosmos

NVIDIA's play is: don't build the world model, build the platform everyone else uses to build theirs. Cosmos launched at CES in January 2025 and covers the full stack — a data curation pipeline (process 20M hours of video in 14 days on Blackwell, vs. 3+ years on CPU), a visual tokenizer with 8x better compression than prior SOTA, model training via NeMo, and deployment through NIM microservices.

The pre-trained world foundation models are trained on 9,000 trillion tokens from 20M hours of real-world video spanning driving, industrial, robotics, and human activity data. They come in two architecture families: diffusion-based (operating on continuous latent tokens) and autoregressive transformer-based (next-token prediction on discretized tokens). Both can be fine-tuned for specific domains.

Three model families sit on top of this. Predict generates future video states from text, image, or video inputs — essentially video forecasting that can be post-trained for specific robot or driving scenarios. Transfer handles sim-to-real domain adaptation, which is one of the persistent headaches in physical AI — your model works great in simulation but breaks in the real world due to visual and dynamics gaps. Reason (added at GTC 2025) brings chain-of-thought reasoning over physical scenes — spatiotemporal awareness, causal understanding of interactions, video Q&A.

---

5. Active Inference
Representative: VERSES AI (Karl Friston)

This is the outlier on the list — not from the deep learning tradition at all, but from computational neuroscience. Karl Friston's Free Energy Principle says intelligent systems continuously generate predictions about their environment and act to minimize surprise (technically: variational free energy, an upper bound on surprise). Where standard RL is usually framed around reward maximization, active inference frames behavior as minimizing variational/expected free energy, which blends goal-directed preferences with epistemic value. This leads to natural exploration behavior: the agent is drawn to situations where it's uncertain, because resolving uncertainty reduces free energy. (A toy belief-update sketch follows at the end of this section.)

VERSES built AXIOM (Active eXpanding Inference with Object-centric Models) on this foundation. The architecture is fundamentally different from neural network world models. Instead of learning a monolithic function approximator, AXIOM maintains a structured generative model where each entity in the environment is a discrete object with typed attributes and relations. Inference is Bayesian — beliefs are probability distributions that get updated via message passing, not gradient descent. This makes it interpretable (you can inspect what the agent believes about each object), compositional (add a new object type without retraining), and extremely data-efficient.

In their robotics work, they've shown a hierarchical multi-agent setup where each joint of a robot arm is its own active inference agent. The joint-level agents handle local motor control while higher-level agents handle task planning, all coordinating through shared beliefs in a hierarchy. The whole system adapts in real time to unfamiliar environments without retraining — you move the target object and the agent re-plans immediately, because it's doing online inference, not executing a fixed policy.

They shipped a commercial product (Genius) in April 2025, and the AXIOM benchmarks against RL baselines are competitive on standard control tasks while using orders of magnitude less data.
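For flavor, here is a toy of the core discrete active-inference loop: keep a Bayesian belief over hidden states, update it after each observation, and choose actions by a (simplified) expected free energy that trades preference satisfaction against uncertainty. The matrices and preferences are invented for illustration; this is the textbook structure, not AXIOM's implementation.

```python
import numpy as np

# Toy discrete active inference: 2 hidden states, 2 observations, 2 actions.
# A (likelihood), B (transitions), and the preferences are invented for
# illustration; this is the textbook structure, not VERSES' AXIOM code.
A = np.array([[0.9, 0.2],        # P(obs | state): rows = obs, cols = state
              [0.1, 0.8]])
B = {0: np.array([[0.9, 0.5],    # P(next state | state) under action 0
                  [0.1, 0.5]]),
     1: np.array([[0.2, 0.1],    # P(next state | state) under action 1
                  [0.8, 0.9]])}
log_pref = np.log(np.array([0.05, 0.95]))  # the agent "prefers" observation 1

belief = np.array([0.5, 0.5])    # explicit, inspectable belief over states

def update_belief(prior, obs):
    """Bayesian update of the state belief after seeing an observation."""
    posterior = A[obs] * prior
    return posterior / posterior.sum()

def expected_free_energy(belief, action):
    """Simplified EFE score for one action: divergence from preferred
    observations plus the entropy of the predicted observation distribution
    (a crude stand-in for the epistemic/ambiguity terms). Lower is better."""
    predicted_state = B[action] @ belief
    predicted_obs = A @ predicted_state
    risk = -(predicted_obs @ log_pref)
    ambiguity = -(predicted_obs @ np.log(predicted_obs))
    return risk + ambiguity

rng = np.random.default_rng(0)
for step in range(3):
    efe = {a: expected_free_energy(belief, a) for a in B}
    action = min(efe, key=efe.get)            # act to minimize free energy
    prior = B[action] @ belief                # predicted state after acting
    obs = int(rng.random() < (A @ prior)[1])  # sample an observation
    belief = update_belief(prior, obs)        # online inference, no retraining
    print(f"step {step}: action={action} obs={obs} belief={belief.round(2)}")
```

Even at this scale the two signature properties show up: the belief is an explicit distribution you can inspect, and action selection responds to belief changes immediately, with no retraining step anywhere.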
---

imo, these five categories aren't really competing — they're solving different sub-problems. JEPA compresses physical understanding. Spatial intelligence reconstructs 3D structure. Learned simulation trains agents through generated experience. NVIDIA provides the picks and shovels. Active inference offers a fundamentally different computational theory of intelligence. My guess is the lines between them blur fast.

Will Manning retweeted
Kelsey Piper@KelseyTuoc·
My ancestors buried half their children. All mine are alive. My ancestors' house had a dirt floor. Mine is wood. I have indoor plumbing, I have hot water, I have never in my life hauled a full bucket half a mile and I probably never will.

Do you know how rare it is, in human history, for small children to wear shoes? Mine have multiple pairs. I can speak to my relatives who live thousands of miles away, for free, at any time. Video, if we want video. With machine translation, if we speak different languages.

The original Library of Congress had 740 books in it. I have more than that. If I run out of books in my home my local public library has 350,000. If I want to take a hundred books with me on vacation, they all fit on a device that fits in my purse.

I have heat in the winter and AC in the summer and a washing machine and I have never, ever, ever had to scrub a dress clean by hand in the stream. I can look up recipes from more than a hundred different countries and I've tried dozens of them. I ride a clean and modern train across my city for $4, or take a robot taxi if I'm out too late for the train.

I donate $40,000 every year to the cause of getting healthcare to the world's poorest people and even after the donations I never have to think about whether I can afford a book, or a pair of shoes, or a cup of coffee.

There is a great deal more to fight for, of course. I hope that our descendants will look back on our lives and list a thousand ways they're richer. Maybe we ourselves will do that, if some of the crazier stuff comes true. But the abundance is all around you and to a significant degree you aren't feeling it only because fish don't notice water.
Will Manning retweeted
Packy McCormick@packyM·
Ben Thompson with the best take on DOD v. Anthropic, which is basically: if you don't want the government to treat your technology like nuclear weapons, stop comparing your technology to nuclear weapons. Hype Tax.
Will Manning retweeted
SwiftOnSecurity@SwiftOnSecurity·
Fun fact: Computer security has a famous 2014 paper on how dramatically different assumptions and practices must be when dealing with the most motivated attacker in the world, who is after -YOU-. This is, -literally-, known as "Mossad vs not-Mossad." Note the phone replacement.
Will Manning retweeted
The Free Press@TheFP·
While Iranians danced in the streets, major Western outlets dressed up Ayatollah Khamenei as a statesman instead of what he was: the architect of decades of terror at home and abroad, writes Maya Sulkin. thefp.com/p/the-posthumo…
Will Manning retweeted
SwiftOnSecurity@SwiftOnSecurity·
Just think, we could have built two miles of high speed rail for what this is going to cost us
Will Manning retweeted
Under Secretary of State Sarah B. Rogers
This is a really important point. There are a lot of instances where the government and its AI provider—and US law—concur on what ought to be out-of-bounds. Mass domestic surveillance is one obvious example! But the contractor can’t have procedural carte blanche to cut the cord if there’s a dispute.
Senior Official Jeremy Lewin@UnderSecretaryF

This isn’t about Anthropic or the specific conditions at issue. It’s about the broader premise that technology deeply embedded in our military must be under the exclusive control of our duly elected/appointed leaders. No private company can dictate normative terms of use—which can change and are subject to interpretation—for our most sensitive national security systems. The @DeptofWar obviously can’t trust a system a private company can switch off at any moment.

Will Manning@willmanning·
@paulgb 2027: Grok Shitpost is the fastest growing ARR product of all time 😅
Paul Butler@paulgb·
In the future the only moat for pure software companies will be how good the CEO is at shitposting.
Andrew Lamb@andrewlamb1111·
It came up on the Parquet sync today: does anyone have practical experience comparing FastLanes encoding vs "classic" bit packing (without the transposed/reshuffled layouts)? If you do, I'd love to hear about your experience.
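For anyone who hasn't seen the two layouts side by side: classic bit packing writes value i's bits at offset i*BITS in one contiguous stream, while FastLanes packs a fixed block of values in an interleaved ("transposed") order so that every SIMD lane decodes with the same shift/mask schedule and no cross-lane shuffles. The sketch below only illustrates the layout difference, using a hypothetical 8-lane, 32-value block rather than the real 1024-value FastLanes layout.

```python
# Toy illustration of "classic" sequential bit packing vs a FastLanes-style
# interleaved (transposed) layout. This is NOT the FastLanes implementation:
# the real scheme uses 1024-value blocks and a specific lane ordering; an
# 8-lane, 32-value block is enough to show the difference in layout.
BITS, LANES, BLOCK = 3, 8, 32      # 32 values, 3 bits each, 8 "SIMD lanes"

values = [i % (1 << BITS) for i in range(BLOCK)]

def pack_classic(vals):
    """Sequential packing: value i occupies bits [i*BITS, (i+1)*BITS) of one
    long bitstream, so decoding a value depends on its absolute position."""
    word = 0
    for i, v in enumerate(vals):
        word |= v << (i * BITS)
    return word

def pack_transposed(vals):
    """Interleaved packing: value i goes to lane i % LANES at slot i // LANES,
    so all lanes decode with the same shift/mask schedule in lockstep, which
    is the property that makes FastLanes-style decoding SIMD-friendly."""
    lanes = [0] * LANES
    for i, v in enumerate(vals):
        lanes[i % LANES] |= v << ((i // LANES) * BITS)
    return lanes

def unpack_transposed(lanes, n):
    mask = (1 << BITS) - 1
    return [(lanes[i % LANES] >> ((i // LANES) * BITS)) & mask for i in range(n)]

assert unpack_transposed(pack_transposed(values), BLOCK) == values
print("round-trip OK; lane words:", pack_transposed(values))
```

Same bits, different order; the win is that a vectorized decoder processes all lanes with one instruction per step instead of straddling value boundaries.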
Phil Eaton@eatonphil·
I started a software research company