CortexAI
@cortexairobot

5 posts

Building the world's most diverse real-world, real-workplace, and industry-scale egocentric + robot dataset.

San Francisco, CA · Joined April 2025
7 Following · 175 Followers
CortexAI retweeted
Lucas Ngoo @lucasngoo
We’re now deploying mobile bimanual robots in the wild: collecting real-world data, running model evals, and capturing recovery trajectories for RL.
1 · 8 · 33 · 2K
CortexAI retweeted
Lucas Ngoo @lucasngoo
1/ They pretrain a robot world model on ~44k hours of egocentric human video. Mostly RGB. No detailed action labels. So the question is: how do you learn action-conditioned dynamics from unlabeled video?

2/ Their idea is “latent actions.” They train a VAE that takes two consecutive frames (fₜ, fₜ₊₁) and compresses the transition into a small vector. That vector represents what changed between the frames. It becomes a proxy for the action.

3/ They use these latent actions to condition a video world model: frameₜ + latent_actionₜ → frameₜ₊₁. So instead of passive next-frame prediction, the model learns transitions conditioned on action. They benchmark this and show latent actions perform close to using real hand-pose labels (e.g. EgoDex).

4/ After large-scale human pretraining, they post-train on real robots. They reset the action-conditioning layer and replace latent actions with real robot controls. Since the model already learned general physics from human video, much less robot data is needed to adapt to a new embodiment. (Steps 2/–4/ are sketched in code below.)

5/ They also show that increasing the scale and diversity of human video improves generalization to unseen objects and novel action variations. Now imagine training on 100 million hours of large-scale, diverse, real-world workplace data. This is the future we are excited to help power at @cortexairobot.
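A minimal, self-contained PyTorch sketch of the recipe in steps 2/–4/. It assumes frames are already encoded to fixed-size feature vectors, and every class name and dimension here is a hypothetical stand-in rather than the actual architecture: a small VAE bottlenecks each frame pair into a latent action, a world model predicts the next frame conditioned on it, and post-training re-initializes the conditioning layer for real robot controls.

```python
# Illustrative sketch only; names, dims, and losses are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

FRAME_DIM, LATENT_DIM = 512, 8   # small bottleneck = the "latent action"

class LatentActionVAE(nn.Module):
    """Compress a transition (f_t, f_t1) into a small latent action z_t."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * FRAME_DIM, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT_DIM)
        self.logvar = nn.Linear(256, LATENT_DIM)

    def forward(self, f_t, f_t1):
        h = self.enc(torch.cat([f_t, f_t1], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return z, mu, logvar

class ConditionalWorldModel(nn.Module):
    """Predict f_{t+1} from f_t conditioned on an action embedding."""
    def __init__(self, action_dim):
        super().__init__()
        self.action_proj = nn.Linear(action_dim, 64)  # action-conditioning layer
        self.dynamics = nn.Sequential(
            nn.Linear(FRAME_DIM + 64, 512), nn.ReLU(), nn.Linear(512, FRAME_DIM))

    def forward(self, f_t, action):
        return self.dynamics(torch.cat([f_t, self.action_proj(action)], dim=-1))

vae = LatentActionVAE()
world_model = ConditionalWorldModel(action_dim=LATENT_DIM)

def pretrain_step(f_t, f_t1, beta=1e-3):
    """One step on unlabeled human video: the latent action is the proxy action."""
    z, mu, logvar = vae(f_t, f_t1)
    recon = F.mse_loss(world_model(f_t, z), f_t1)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

loss = pretrain_step(torch.randn(32, FRAME_DIM), torch.randn(32, FRAME_DIM))
loss.backward()

# Post-training on a real robot (step 4/): reset the conditioning layer so
# real motor commands (here a hypothetical 14-DoF action) replace latent actions.
world_model.action_proj = nn.Linear(14, 64)
```

Keeping the action pathway behind one small projection is what makes the embodiment swap cheap: only that layer changes meaning, while the dynamics backbone carries over from human video.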
Jim Fan @DrJimFan

Announcing DreamDojo: our open-source, interactive world model that takes robot motor controls and generates the future in pixels. No engine, no meshes, no hand-authored dynamics. It's Simulation 2.0. Time for robotics to take the bitter-lesson pill.

Real-world robot learning is bottlenecked by time, wear, safety, and resets. If we want Physical AI to move at pretraining speed, we need a simulator that adapts to pretraining scale with as little human engineering as possible. Our key insights: (1) human egocentric videos are a scalable source of first-person physics; (2) latent actions make them "robot-readable" across different hardware; (3) real-time inference unlocks live teleop, policy eval, and test-time planning *inside* a dream.

We pre-train on 44K hours of human videos: cheap, abundant, and collected with zero robot-in-the-loop. Humans have already explored the combinatorics: we grasp, pour, fold, assemble, fail, and retry, across cluttered scenes, shifting viewpoints, changing light, and hour-long task chains, at a scale no robot fleet could match.

The missing piece: these videos have no action labels. So we introduce latent actions: a unified representation inferred directly from video that captures "what changed between world states" without knowing the underlying hardware. This lets us train on any first-person video as if it came with motor commands attached. As a result, DreamDojo generalizes zero-shot to objects and environments never seen in any robot training set, because humans saw them first.

Next, we post-train onto each robot to fit its specific hardware. Think of it as separating "how the world looks and behaves" from "how this particular robot actuates." The base model follows the general physical rules, then "snaps onto" the robot's unique mechanics. It's kind of like loading a new character and scene assets into Unreal Engine, but done through gradient descent, and it generalizes far beyond the post-training dataset.

A world simulator is only useful if it runs fast enough to close the loop. We train a real-time version of DreamDojo that runs at 10 FPS, stable for over a minute of continuous rollout. This unlocks exciting possibilities:

- Live teleoperation *inside* a dream. Connect a VR controller, stream actions into DreamDojo, and teleop a virtual robot in real time. We demo this on a Unitree G1 with a PICO headset and one RTX 5090.
- Policy evaluation. You can benchmark a policy checkpoint in DreamDojo instead of the real world. The simulated success rates strongly correlate with real-world results, accurate enough to rank checkpoints without burning a single motor.
- Model-based planning. Sample multiple action proposals → simulate them all in parallel → pick the best future (sketched below). Gains +17% real-world success out of the box on a fruit-packing task.

We open-source everything!! Weights, code, post-training dataset, eval set, and a whitepaper with tons of details to reproduce. DreamDojo is based on NVIDIA Cosmos, which is open-weight too. 2026 is the year of World Models for physical AI. We want you to build with us. Happy scaling! Links in thread:
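The model-based planning bullet is a classic sample-and-score loop. A rough, self-contained sketch under the same assumptions as the previous block (feature-space frames, a stand-in one-step dynamics network in place of the real world model); nothing here is DreamDojo's actual API:

```python
# Hypothetical best-of-K planning inside a learned world model.
import torch
import torch.nn as nn

FRAME_DIM, ACTION_DIM = 512, 14

# Stand-in one-step dynamics; in practice this is the trained world model.
dynamics = nn.Linear(FRAME_DIM + ACTION_DIM, FRAME_DIM)

def step(frames, actions):
    return dynamics(torch.cat([frames, actions], dim=-1))

def plan(f_t, goal_score, horizon=16, num_proposals=64):
    """Sample K action sequences, roll them all out in parallel inside the
    model, score the imagined futures, and keep the best one."""
    proposals = torch.randn(num_proposals, horizon, ACTION_DIM)
    frames = f_t.expand(num_proposals, -1)       # same start frame for every K
    with torch.no_grad():
        for t in range(horizon):
            frames = step(frames, proposals[:, t])
    best = goal_score(frames).argmax()
    return proposals[best, 0]                    # execute first action only (MPC-style)

# Example: prefer futures whose features land near a goal embedding.
goal = torch.randn(FRAME_DIM)
action = plan(torch.randn(FRAME_DIM), lambda f: -((f - goal) ** 2).sum(-1))
```

Because the rollouts are batched, trying 64 futures costs one forward pass per timestep, which is what makes this viable at the post's claimed 10 FPS inference.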

3 · 11 · 102 · 13.9K
CortexAI retweeted
Lucas Ngoo @lucasngoo
Thanks @Fondocom for the podcast. Shared about:

> World models are the equivalent of language models for the physical world: predicting next visual frames and next robot actions.
> Scaling laws for robotics world models: large-scale, diverse, real-world egocentric data leads to better world models, which in turn lead to better robot action predictions from the model.
> Progress comes from real-world deployment with humans in the loop: human operators initially monitor and correct robot trajectories, and that recovery data feeds back into training to gradually increase autonomy.
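That last point is essentially an intervention-gated data loop (DAgger-style). A toy, runnable sketch of the flywheel, where the policy, the operator's intervention rule, and the scalar "environment" are all deliberately simplified stand-ins:

```python
# Toy sketch of a human-in-the-loop recovery-data loop; every object here
# is a hypothetical stand-in, not a real robot API.
import random

recovery_buffer = []  # (observation, corrective action) pairs for retraining

def policy(obs):
    return random.uniform(-1, 1)          # stand-in robot policy

class Operator:
    def wants_to_intervene(self, obs, action):
        return abs(action) > 0.9          # stand-in "trajectory looks wrong" check
    def correct(self, obs):
        return 0.0                        # stand-in corrective action

def deployment_episode(operator, steps=100):
    obs = 0.0
    for _ in range(steps):
        action = policy(obs)
        if operator.wants_to_intervene(obs, action):
            action = operator.correct(obs)          # human override
            recovery_buffer.append((obs, action))   # becomes training data
        obs = obs + action                           # stand-in environment step

deployment_episode(Operator())
# Fine-tuning on recovery_buffer each round should reduce interventions,
# which is the "gradually increase autonomy" claim above.
```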
1 · 9 · 16 · 1.6K
CortexAI @cortexairobot
Cortex AI in Times Square. Accelerating physical AI with large-scale, real-world robotics data. Thanks to @brexHQ for making it happen.
[photo attached]
0 · 3 · 12 · 718
CortexAI retweeted
Y Combinator @ycombinator
Cortex AI (@cortexairobot) produces the world's most diverse egocentric and robot dataset, captured in real workplace environments. Leading research labs use Cortex AI to collect data and deploy robotics foundation models in real-world settings.
8 · 6 · 102 · 16.1K