Pinned Tweet
Nate Gillman
164 posts

Nate Gillman
@GillmanLab
ML researcher, interning @Google, PhD-ing @BrownUniversity. I train deep generative models
Joined August 2021
453 Following · 810 Followers

We've released our code (Wan2.2+ControlNet), synthetic training datasets, and model weights to help build the next generation of these physically-aware interactive world models.
Explore the code and try the interactive demos on our project page!
goal-force.github.io (n/n)

This is a joint project with @zitian_tang @dakshces @mik3fr33man + Yinghua, Evan, Arjan, and Charles, advised by @jesu9 at @BrownUniversity. A collaboration between Brown and @Cornell. (9/n)
Nate Gillman reposted

📢Current world models aren't really modeling the world; they're modeling one agent's view of it. Partial observations ≠ world state.
Future world models will be independent of any one agent's perspective. You will be able to “drop in” any number of agents at any point in time, and a persistent world state will evolve with their interactions. Imagine a neural MMORPG server. 🧵[1/10]
Nate Gillman reposted

We trained a humanoid with 22-DoF dexterous hands to assemble model cars, operate syringes, sort poker cards, and fold/roll shirts, all learned primarily from 20,000+ hours of egocentric human video with no robot in the loop.
Humans are the most scalable embodiment on the planet. We discovered a near-perfect log-linear scaling law (R² = 0.998) between human video volume and action prediction loss, and this loss directly predicts real-robot success rate.
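As a quick illustration of what fitting such a log-linear law looks like, here is a minimal sketch; the hours/loss numbers are made-up placeholders, not the paper's data:

```python
# Minimal sketch of fitting a log-linear scaling law: loss = a * log(hours) + b.
# The hours/loss values below are made-up placeholders, not the paper's data.
import numpy as np

hours = np.array([100, 500, 2_000, 8_000, 20_000])   # human video volume
loss = np.array([0.92, 0.74, 0.58, 0.43, 0.33])      # action prediction loss

a, b = np.polyfit(np.log(hours), loss, deg=1)        # least-squares line in log-x
pred = a * np.log(hours) + b
r2 = 1 - np.sum((loss - pred) ** 2) / np.sum((loss - loss.mean()) ** 2)
print(f"slope={a:.3f}, intercept={b:.3f}, R^2={r2:.3f}")
```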
Humanoid robots will be the end game because they are the practical form factor with minimal embodiment gap from humans. Call it the Bitter Lesson of robot hardware: kinematic similarity lets us simply retarget human finger motion onto dexterous robot hand joints. No learned embeddings, no fancy transfer algorithms needed. Relative wrist motion + retargeted 22-DoF finger actions serve as a unified action space that carries through from pre-training to robot execution.
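A rough sketch of that direct retargeting idea, assuming the human hand pose arrives as 22 joint angles aligned one-to-one with the robot hand (my illustration, not the EgoScale code; names and shapes are hypothetical):

```python
# Rough sketch of direct kinematic retargeting into a unified action space.
# Assumes the human finger pose comes as 22 joint angles aligned one-to-one
# with the robot hand's joints; names and shapes here are illustrative.
import numpy as np

def retarget_step(wrist_prev, wrist_curr, human_fingers, joint_lo, joint_hi):
    """wrist_*: 4x4 SE(3) wrist poses; human_fingers: (22,) joint angles."""
    delta_wrist = np.linalg.inv(wrist_prev) @ wrist_curr  # relative wrist motion
    fingers = np.clip(human_fingers, joint_lo, joint_hi)  # copy angles, clip to limits
    return delta_wrist, fingers                           # the unified action
```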
Our recipe is called "EgoScale":
- Pre-train GR00T N1.5 on 20K hours of human video, then mid-train with only 4 hours (!) of robot play data with Sharpa hands. This yields 54% gains over training from scratch across 5 highly dexterous tasks.
- Most surprising result: a *single* teleop demo is sufficient to learn a never-before-seen task. Our recipe enables extreme data efficiency.
- Although we pre-train in 22-DoF hand joint space, the policy transfers to a Unitree G1 with 7-DoF tri-finger hands. 30%+ gains over training on G1 data alone.
The scalable path to robot dexterity was never more robots. It was always us.
Deep dives in thread:
Nate Gillman reposted

Latent representations are pervasive in modern generative modelling at scale, because iterative refinement in a compact latent space is much more cost-effective, and latents can be decoded to pixels in a single forward pass.
...but what if your generative model itself only needs one step?🤔 Then the trade-off could change!
Pixel Mean Flows apply the recent "improved Mean Flow" formulation for learning flow maps (iMF, arxiv.org/abs/2512.02012) directly to pixels. The authors reparameterise the neural network to make predictions in input space, as opposed to velocity space, in order to deal with the larger number of input dimensions -- much like JiT (arxiv.org/abs/2511.13720) and BOOT (arxiv.org/abs/2306.05544).
This yields a very elegant algorithm for training a one-step generative model in pixel space from scratch👌
Paper: arxiv.org/abs/2601.22158
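A minimal sketch of that input-space reparameterisation, assuming a standard rectified-flow interpolation and ignoring the two-time MeanFlow structure for brevity (my illustration, not the paper's code):

```python
# Minimal sketch of the x-prediction reparameterisation (illustrative only,
# not the paper's code). Assumes rectified-flow interpolation
#   z_t = (1 - t) * x + t * eps,  so the target velocity is v = eps - x.
# Given a clean-image estimate x_hat, the implied velocity is (z_t - x_hat) / t.
import torch

def velocity_from_x_prediction(net, z_t, t):
    """Wrap a network that outputs a clean-image estimate x_hat so it
    exposes a velocity estimate: v_hat = (z_t - x_hat) / t."""
    x_hat = net(z_t, t)                      # prediction lives in pixel space
    t = t.view(-1, 1, 1, 1).clamp_min(1e-3)  # assumes 1-D batch t; avoid t ~ 0
    return (z_t - x_hat) / t
```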


Nate Gillman reposted

🔥 Very excited to share that we’re releasing LingBot-World 🌍 @robbyant_brain — an open-source frontier world model!
We’re pushing the limits of:
🔹 High-Fidelity Simulation & Precise Control
🔹 Long-Horizon Consistency & Memory
🔹 Modeling Physical & Game Worlds
The most surprising part? The emergence of sophisticated behaviors that go beyond simple video generation.
👇I’m obsessed with this dragon demo 🐉. It can roll out for 1 min while maintaining crisp visual dynamics and consistent memory!
Nate Gillman reposted

New #NVIDIA Paper
We introduce Motive, a motion-centric, gradient-based data attribution method that traces which training videos help or hurt video generation.
By isolating temporal dynamics from static appearance, Motive identifies which training videos shape motion in video generation.
🔗 research.nvidia.com/labs/sil/proje…
1/10
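For intuition, gradient-based attribution in this family often scores a training example by how its gradient aligns with a query example's gradient. A generic TracIn-style sketch (illustrative only; Motive's actual method additionally isolates temporal dynamics from static appearance):

```python
# Generic sketch of gradient-based data attribution (TracIn-style dot
# products). Illustrative only: Motive's actual method additionally
# separates temporal dynamics from static appearance.
import torch

def influence(model, loss_fn, train_batch, query_batch):
    """Score how much a training example's gradient aligns with the
    gradient of the query example: positive = helps, negative = hurts."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_train = torch.autograd.grad(loss_fn(model, train_batch), params)
    g_query = torch.autograd.grad(loss_fn(model, query_batch), params)
    return sum((gt * gq).sum() for gt, gq in zip(g_train, g_query))
```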