Bones Studio

23 posts

@TheBonesStudio

The Source of ground-truth multimodal human motion & behavior data for Physical AI — from world models to humanoid control | 5+ years, trusted by leading labs

Joined October 2025

1 Following · 202 Followers

Pinned Tweet
Bones Studio@TheBonesStudio·
Training humanoid robots? You need motion data. Real, high-fidelity human motion data. And until now, there was no open dataset purpose-built for humanoid robotics. For 5 years, we've been building the largest enterprise-grade human motion and behavior datasets for embodied AI. Our data powered breakthrough SONIC research. Today, at GTC, with @NVIDIARobotics, we're opening a piece of it to the world.

BONES-SEED:
→ 142,200 motion capture animations
→ Up to 6 natural language descriptions per motion
→ Temporal segmentation of every action
→ Curated for humanoid robotics
→ In NVIDIA SOMA and Unitree G1 (MuJoCo) formats

From text to action. Now yours. Go build → bones.studio/datasets/seed #NVIDIAGTC
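As a sense of what consuming a release like this could look like, here is a minimal sketch of iterating one clip. The file layout and field names (`joint_positions`, `descriptions`, `segments`) are assumptions for illustration, not the published schema; check the dataset page for the real format.

```python
import numpy as np

# Hypothetical per-clip layout: joint positions, text labels, and temporal
# segments in one .npz. All field names are assumptions, not the real schema.
clip = np.load("bones_seed/clip_000001.npz", allow_pickle=True)

motion = clip["joint_positions"]   # e.g. (T, J, 3) joint positions per frame
labels = clip["descriptions"]      # up to 6 natural-language descriptions
segments = clip["segments"]        # (start_frame, end_frame, action) triples

print(f"{motion.shape[0]} frames, {motion.shape[1]} joints")
for start, end, action in segments:
    print(f"frames {start}-{end}: {action}")
```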
Bones Studio@TheBonesStudio·
First SONIC, now Kimodo. Great research needs great data. And the @NVIDIA team keeps proving what's possible when you have both. Proud that our motion data is helping push those boundaries. → research.nvidia.com/labs/sil/proje… bones.studio/datasets
Davis Rempe@davrempe

Need high-quality motion for humanoid robots or digital humans? Meet Kimodo: our new diffusion model trained on 700 hours of optical mocap data for easy, controllable, and high-fidelity motion generation. @NVIDIAAI research.nvidia.com/labs/sil/proje…

Zhengyi “Zen” Luo@zhengyiluo·
288 hours of high-quality, text-annotated human motion data are now available! 140k motion sequences! Did you know that a large part of SONIC's training data is now open-sourced? Check out the dataset here 👇🏻 from our friends at Bones Studio! Full human + G1 retargeted motion!
Site 🌐: bones.studio/datasets/seed
Data 💿: huggingface.co/datasets/bones…
SONIC training code coming VERY VERY soon!
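For anyone who wants to poke at the release, a hedged loading sketch; the repo id below is a placeholder, since the URL in the post is truncated.

```python
from datasets import load_dataset

# "bones-studio/bones-seed" is a placeholder repo id -- the post's URL is
# truncated, so take the real id from the dataset card.
ds = load_dataset("bones-studio/bones-seed", split="train")

sample = ds[0]
print(sample.keys())  # inspect the actual schema before relying on fields
```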
Sammy@1samdavid·
@TheBonesStudio I love seeing these videos. Much respect for the hard work.
Bones Studio@TheBonesStudio·
Video shows you what happened. 3D tells you how. World models need both - synced, frame by frame. #WorldModels #AI
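Read as a data requirement, "synced, frame by frame" might look like the sketch below; the structure and the 30/120 fps rates are illustrative assumptions, not a Bones Studio format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SyncedFrame:
    """Pairs what happened (pixels) with how it happened (3D pose)."""
    timestamp_s: float     # shared clock for video and mocap
    rgb_path: str          # the video frame on disk
    joints_3d: np.ndarray  # (J, 3) world-space joint positions

def mocap_index(video_idx: int, video_fps: int = 30, mocap_fps: int = 120) -> int:
    # When mocap runs at an integer multiple of the video rate, every
    # video frame maps to an exact mocap frame -- no interpolation needed.
    return video_idx * (mocap_fps // video_fps)
```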
Bones Studio@TheBonesStudio·
Thanks @ChenTessler for the shoutout and for recognizing the quality! We put a lot of effort into making BONES-SEED clean and artifact-free. Super happy to hear it shows and helps with humanoid training.
Chen Tessler@ChenTessler

BONES-SEED is more than just a larger dataset. This is professional studio mocap. Way higher quality than what was previously available out there. Fewer artifacts -> transfers directly to higher quality motion, both for animation and robotics! 🤖

Bones Studio@TheBonesStudio·
GTC kicks off today! We're at booth 7010. NVIDIA just published SONIC - a foundation model for humanoid control trained on our motion capture data. Working on Physical AI? Humanoid control, dexterous manipulation, sim-to-real pipelines? Let's talk.
Bones Studio@TheBonesStudio·
Video works for pretraining, but fine-tuning models on dexterous tasks requires a different level of precision. 14 optical markers per hand. 120 frames per second. #PhysicalAI #Robotics #EmbodiedAI
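Concretely, those two numbers pin down the shape and time resolution of the raw signal. A back-of-the-envelope sketch (the buffer below is zeros, purely illustrative):

```python
import numpy as np

FPS = 120        # capture rate from the post
N_MARKERS = 14   # optical markers per hand
T = 10 * FPS     # a hypothetical 10-second take

# One hand's raw signal: (frames, markers, xyz) positions in meters.
markers = np.zeros((T, N_MARKERS, 3))

# At 120 Hz each step is ~8.3 ms, enough to resolve fast finger motion;
# finite differences give per-marker velocities in m/s.
dt = 1.0 / FPS
velocities = np.diff(markers, axis=0) / dt   # (T-1, 14, 3)
print(markers.shape, velocities.shape)
```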
Bones Studio@TheBonesStudio·
@Zhikai273 Incredible results! The way you handled retargeting - separating wrist correction from the latent space and letting the policy compensate - is a really elegant solution. Beautiful to watch. Congrats to the whole team! 🤖🎾 @LianYunrui @josh00_lu
Zhikai Zhang@Zhikai273·
🎾 Introducing LATENT: Learning Athletic Humanoid Tennis Skills from Imperfect Human Motion Data. Dynamic movements, agile whole-body coordination, and rapid reactions. A step toward athletic humanoid sports skills.
Project: zzk273.github.io/LATENT/
Code: github.com/GalaxyGeneralR…
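A very rough sketch of the split praised in the reply above: the learned latent stays wrist-agnostic, the wrist is corrected analytically after decoding, and the tracking policy absorbs the residual. Every name, shape, and index here is a hypothetical stand-in, not the LATENT code.

```python
import numpy as np

J = 29          # assumed humanoid joint count (placeholder)
WRIST_IDX = 20  # assumed index of the racket-hand wrist (placeholder)

def decode_motion(z: np.ndarray) -> np.ndarray:
    """Stand-in motion-prior decoder: latent -> joint targets (J, 3)."""
    return np.zeros((J, 3))  # placeholder output

def correct_wrist(pose: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Analytic wrist fix-up applied outside the latent space."""
    corrected = pose.copy()
    corrected[WRIST_IDX] = target  # snap the wrist toward the hit point
    return corrected

z = np.zeros(64)                  # latent sampled from the motion prior
pose = decode_motion(z)           # wrist intentionally left uncorrected
pose = correct_wrist(pose, target=np.array([0.5, 1.1, 0.9]))
# At runtime, the RL tracking policy compensates for any remaining error.
```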
Bones Studio@TheBonesStudio·
@Rewkang Couldn't agree more. 5 hours enabled this - we have 1,000+ and we're capturing more every day. Labeled, multimodal mocap data - locomotion, manipulation, human and object interaction, and a lot in between. Built for robotics. Ready to train. Plenty of room to scale.
Andrew Kang@Rewkang·
Researchers trained a humanoid robot to play tennis using only 5 hours of motion capture data. The robot can now sustain multi-shot rallies with human players, hitting balls traveling >15 m/s with a ~90% success rate. AlphaGo for every sport is coming.
Bones Studio@TheBonesStudio·
Fingers might be the hardest unsolved piece of physical AI. Locomotion and whole-body control have solid research paths and proven approaches. But dexterous manipulation still struggles - partly because good reference motion data is so hard to get.

We took a shot at this: we captured hand motions down to individual finger joints. Check out the video - the 3D overlay matches a live guitar performance finger-for-finger.

Still early, but we hope this is useful for folks working on sim-to-real transfer, imitation learning, or RL for manipulation. If you're building in this space, we'd love to hear what you need. Let's push it forward together! #PhysicalAI #Robotics #EmbodiedAI
Bones Studio@TheBonesStudio·
Foundation models for embodied AI are only as good as the motion data they train on.

1/ What you're seeing: dozens of real human motions captured at scale. What's behind the scenes: directors dedicating hundreds of hours of research to every session and movement category.

2/ Humans are the most scalable motion priors on the planet. But to unlock that, you need high-fidelity, labeled mocap datasets designed specifically for physical AI and world models - not repurposed gaming assets.

3/ This is exactly the data bottleneck the field needs to solve. Scaling laws apply to embodied AI too - and it starts with the training distribution.

4/ Would love to hear what your pipeline actually needs. If you're building foundation models or world models and your model is hungry for motion data, hit reply. Let's build this together! #robotics #embodiedai #physicalai
Bones Studio@TheBonesStudio·
@runwayml Extremely impressive. Shows just how universal and powerful world models can be. Today it's a Franka Panda arm — tomorrow it's humanoids. Not a question of if, just when. And of course… a question of the right eval data 😉
Runway@runwayml·
Testing robot policies on hardware is slow, expensive and hard to scale. World models offer a promising path to accelerating robot policy development. We're sharing new research from the Runway Robotics team, in which we simulated 8 robot policies inside our General World Model and found 0.95 correlation with real-world results. Those early results point to world model simulation as a practical substitute for hardware evaluation, comparing favorably to existing real-to-sim approaches. Learn more at the link below.
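The headline number is a correlation between per-policy outcomes in the world model and on hardware. A hedged sketch of how such a figure is computed (the success rates below are made up, not Runway's data):

```python
import numpy as np

# Made-up success rates for 8 policies, evaluated in a world model and on
# real hardware. Purely illustrative -- these are not Runway's numbers.
sim  = np.array([0.91, 0.74, 0.62, 0.88, 0.45, 0.70, 0.83, 0.57])
real = np.array([0.89, 0.70, 0.65, 0.85, 0.48, 0.66, 0.86, 0.55])

r = np.corrcoef(sim, real)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # high r -> sim ranks policies like reality
```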
Bones Studio@TheBonesStudio·
Thrilled to see SONIC out in the open! We're incredibly proud that our 700h+ mocap dataset powered this scaling breakthrough — high-fidelity human motion straight from Bones Studio. Huge congrats to the @NvidiaAI GEAR team (@DrJimFan @yukez @zhengyiluo) and big thanks to @simonyuen and @cyrushogg for taking us on this journey. Excited for what the community builds on top of this foundation model!
Zhengyi “Zen” Luo@zhengyiluo

With SONIC, we CAN do many things! Now you can too.

Bones Studio@TheBonesStudio·
"The scalable path to robot dexterity was never more robots. It was always us." We couldn't agree more. And as the paper shows, pairing large-scale video with high-precision motion data is what anchors the whole pipeline. Scale matters. Fidelity matters. Both together is what Physical AI needs next.
Jim Fan@DrJimFan

We trained a humanoid with 22-DoF dexterous hands to assemble model cars, operate syringes, sort poker cards, fold/roll shirts, all learned primarily from 20,000+ hours of egocentric human video with no robot in the loop. Humans are the most scalable embodiment on the planet.

We discovered a near-perfect log-linear scaling law (R² = 0.998) between human video volume and action prediction loss, and this loss directly predicts real-robot success rate.

Humanoid robots will be the end game, because they are the practical form factor with minimal embodiment gap from humans. Call it the Bitter Lesson of robot hardware: the kinematic similarity lets us simply retarget human finger motion onto dexterous robot hand joints. No learned embeddings, no fancy transfer algorithms needed. Relative wrist motion + retargeted 22-DoF finger actions serve as a unified action space that carries through from pre-training to robot execution.

Our recipe is called "EgoScale":
- Pre-train GR00T N1.5 on 20K hours of human video, mid-train with only 4 hours (!) of robot play data with Sharpa hands. 54% gains over training from scratch across 5 highly dexterous tasks.
- Most surprising result: a *single* teleop demo is sufficient to learn a never-before-seen task. Our recipe enables extreme data efficiency.
- Although we pre-train in 22-DoF hand joint space, the policy transfers to a Unitree G1 with 7-DoF tri-finger hands. 30%+ gains over training on G1 data alone.

The scalable path to robot dexterity was never more robots. It was always us. Deep dives in thread:
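That R² = 0.998 claim is a straight-line fit of loss against log(video hours). A sketch of the fit with made-up points (not EgoScale's data):

```python
import numpy as np

# Made-up (hours, loss) pairs for illustration only -- not EgoScale's data.
hours = np.array([100, 500, 1_000, 5_000, 10_000, 20_000])
loss  = np.array([0.92, 0.81, 0.76, 0.65, 0.61, 0.56])

# Log-linear model: loss ~ a + b * log(hours); polyfit returns [slope, intercept].
x = np.log(hours)
b, a = np.polyfit(x, loss, deg=1)

pred = a + b * x
r2 = 1 - np.sum((loss - pred) ** 2) / np.sum((loss - loss.mean()) ** 2)
print(f"loss ≈ {a:.3f} + {b:.3f}·log(hours), R² = {r2:.3f}")
```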
