Jinkun Cao
@jinkuncao
170 posts

Research Scientist at Meta MSL, previously FAIR. Ex: PhD @CMU_Robotics, BS @sjtu1896. Working on computer vision and robotics.

San Francisco · Joined May 2014
374 Following · 748 Followers
Chen Bao @chenbao191541
Last year I joined ARI as a founding researcher with one mission: build industry-grade physical AI for humanoids. Today, ARI is joining @Meta Superintelligence Labs. I feel incredibly lucky to have worked alongside some of the world's best roboticists and engineers — this team has taught me more than I could have imagined. Grateful to the founders, investors, and every single person who made this journey possible. Can't wait to see what we build next!
Bloomberg@business

Meta Platforms Inc. has acquired Assured Robot Intelligence, a startup developing artificial intelligence models for robots, as part of a major initiative to build humanoid technology. bloomberg.com/news/articles/…

5 replies · 3 reposts · 51 likes · 6.1K views
Xiaolong Wang @xiaolonw
Excited to share that Assured Robot Intelligence (ARI) has joined @Meta to help build the future of humanoid intelligence! When we started ARI one year ago, our mission was clear: achieve physical AGI. Through deep customer engagements and real-world deployments, it became clear to us that serving the massive opportunity ahead requires training a truly general-purpose physical agent. We believe this agent will be humanoid — and that scaling will come from learning directly from human experience, not teleoperation alone. Meta’s ecosystem brings together the key components needed to make this vision possible. We will be joining Meta Superintelligence Labs (MSL) to help bring personal superintelligence into the physical world. We are incredibly grateful to the brilliant minds, robotics researchers, engineers, partners, and supporters who have worked with us on this journey. Thank you to our investors and angels, led by @aixventureshq , for believing in our mission. This is just the beginning.
Bloomberg@business

Meta Platforms Inc. has acquired Assured Robot Intelligence, a startup developing artificial intelligence models for robots, as part of a major initiative to build humanoid technology. bloomberg.com/news/articles/…

110 replies · 59 reposts · 689 likes · 179.3K views
Lerrel Pinto @LerrelPinto
ARI is joining @Meta! Over the past year, we have been building ARI (Assured Robot Intelligence) with the mission to build industry-grade physical AI for humanoids. The ARI stack is built on human experience, condensed into actionable tokens that can be rapidly adapted to real-world hardware. But the most rewarding part of ARI has been the people. I feel truly blessed to have worked alongside some of the world's best roboticists, a top-notch investor pool led by @aixventureshq, and the many supporters pushing for us behind the scenes. Starting next week, ARI will join the Meta Superintelligence Labs (MSL) to continue advancing frontier robotics models that bring personal superintelligence into the physical world. We have the potential to transform AI that can think and talk into AI that can do, assisting humans safely and reliably in the physical world. To the many people behind the scenes who supported us: thank you! This is just the beginning. More in the Bloomberg article:
Bloomberg@business

Meta Platforms Inc. has acquired Assured Robot Intelligence, a startup developing artificial intelligence models for robots, as part of a major initiative to build humanoid technology. bloomberg.com/news/articles/…

35 replies · 38 reposts · 364 likes · 51.2K views
Jinkun Cao reposted
tingwu.wang @TingwuWang
What is missing to bring real-time motion research into AAA games and real-world robotics? We present MotionBricks, a step toward bridging this gap with two key components:
- a single generative latent motion backbone covering 350,000+ motion skills, running at 15,000 FPS with 2 ms latency and substantially improved quality and reliability.
- a unified smart primitive interface for locomotion and object/scene interaction, with fine-grained control over generated behaviors.
Webpage: nvlabs.github.io/motionbricks/
Code: github.com/NVlabs/GR00T-W…
Paper: arxiv.org/abs/2604.24833 (ACM TOG / SIGGRAPH 2026)
25 replies · 149 reposts · 1.2K likes · 145.4K views
Jinkun Cao @jinkuncao
Learn from @zhengyiluo every day!
RoboPapers@RoboPapers

How can we build a general-purpose “foundation model” for robot motion? @zhengyiluo joins us to talk about SONIC, which uses motion tracking as a foundational task for humanoid robot control, and scales humanoid control training to 9k GPU hours and 100 million frames worth of data. The result: a model with a generally-useful embedding space that can be controlled by a VLA, or from human video, to perform a wide variety of humanoid whole-body-control tasks, including with zero-shot transfer to previously unseen motions. Watch episode 72 of RoboPapers, with @micoolcho and @DJiafei, now!

0 replies · 0 reposts · 4 likes · 453 views
Jinkun Cao reposted
Ropedia @ropedia_ai
Today Ropedia releases Xperience-10M at #GTC day 1: the world's largest real human 4D interaction dataset, at 10M scale. Each trajectory aligns:
• visual observations
• spatial structure
• human motion
• interaction dynamics
• task semantics
A new foundation for physical and spatial AI. Try it out on @huggingface: huggingface.co/datasets/roped…
4 replies · 25 reposts · 95 likes · 52.2K views
Jinkun Cao reposted
JulianSaks @JulianSaks
Introducing Humanoid Atlas, the Bloomberg Terminal for humanoids. Every OEM, every supplier, every dependency: humanoids.fyi
80 replies · 196 reposts · 1.4K likes · 241.9K views
Jinkun Cao @jinkuncao
@carlosedubarret Yes, 3DB uses MHR. GEM uses SOMA, which is a new body model that can take MHR/Anny/SMPL parameters as input. But I believe the default is still SMPL.
1 reply · 0 reposts · 1 like · 37 views
Carlos Barreto @carlosedubarret
Oh, that's right. SAM3DBody uses MHR as a body model, so GEM is probably using MHR. Actually, reading the installation page again, it looks like it uses the SOMA model. I don't remember where I got the idea that GEM might use MHR (which I called SAM3DBody), Anny, SMPL, or something else. But I do remember seeing somewhere that I could choose which model to run. Anyway, thanks for the explanation and sorry for the confusion. Doing things in a rush is never a good thing.
1 reply · 0 reposts · 1 like · 88 views
Carlos Barreto @carlosedubarret
Yeah, I just made a test using GEM and SAM 3D Body, loading the result in Blender. If I got it right, this is probably the first solution that lets us go from monocular video to animation for free and use it commercially (I'm not sure yet). You can see the animation isn't perfect; I'm not sure if it's a problem in the code importing into Blender or if that's what GEM detected. Gotta test more. #b3d
6 replies · 6 reposts · 105 likes · 8.7K views
Jinkun Cao @jinkuncao
@carlosedubarret Got it, so you use GEM only. I was confused because GEM and SAM 3D Body (3DB) are basically for the same task (image/video-based human pose/mesh recovery). 3DB regresses the output directly, while GEM does noisy-to-clean diffusion. I thought you used 3DB's output as the noisy input to GEM.
1 reply · 0 reposts · 1 like · 67 views
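[Editor's note] The regression-vs-diffusion distinction in the exchange above can be sketched in a few lines. This is purely illustrative: every name, shape, and function here is invented for the sketch and is not GEM's or SAM 3D Body's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two inference styles: a regressor predicts
# the pose in one forward pass (3DB-style), while a diffusion-style model
# starts from noise and refines toward a clean pose (GEM-style
# "noisy-to-clean"). All names and shapes are invented.
TRUE_POSE = np.array([0.3, -0.7, 1.2])  # toy 3-parameter "pose"

def regress(features: np.ndarray) -> np.ndarray:
    # One-shot prediction: a single learned mapping, no iteration.
    return features @ np.eye(3)  # identity stands in for trained weights

def denoise_step(pose: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    # One refinement step: nudge the current estimate toward the clean pose.
    # (A real model predicts this direction from the image; here we cheat.)
    return pose + alpha * (TRUE_POSE - pose)

one_shot = regress(TRUE_POSE)   # regression: done in a single call
pose = rng.normal(size=3)       # diffusion: start from pure noise...
for _ in range(10):             # ...and refine step by step
    pose = denoise_step(pose)

print(np.allclose(one_shot, TRUE_POSE))         # True
print(np.allclose(pose, TRUE_POSE, atol=1e-2))  # True
```

The point of the contrast: a regressor's cost is fixed at one pass, while the iterative refiner trades extra steps for the ability to start from (or condition on) a noisy initial estimate, which is why feeding one model's output into the other was a plausible guess.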
Carlos Barreto @carlosedubarret
BTW, I did it in a couple of hours (which also accounted for installing GEM). I got a bit confused about what GEM uses; I'm not completely sure it was SAM 3D Body, it might be something else, because the installation downloaded the checkpoints automatically. I didn't have to get them manually from the Hugging Face repo, and from what I saw the files were not exactly the ones I had to download from the Hugging Face repo (when I tested the code from the official SAM 3D Body GitHub).
1 reply · 0 reposts · 0 likes · 104 views
Jinkun Cao reposted
Umar Iqbal @UmarIqb
#NVIDIA just released a whole ecosystem for human(oid) motion and robot learning from human data. 🚀🦾
Data, as we all know, is the key to scaling AI models. To accelerate the field of Embodied AI, we have open-sourced a full stack of models and tools to capture, generate, retarget, and simulate human(oid) motion data at scale, along with a massive high-quality dataset and a standard human skeletal representation, SOMA, to make them all seamlessly communicate with each other. The entire suite is available under the Apache 2.0 license.
1️⃣ SOMA: A universal interface to unify all parametric human body models (SOMA-shape, SMPL, MHR, etc.) into a standard skeletal representation, eliminating the need for custom adapters or model-specific retargeting. 🔗 lnkd.in/gsxhiJnn
2️⃣ Kimodo: High-fidelity, controllable text-to-motion generation for both humans and humanoid robots. 🔗 lnkd.in/gCc84XnX
3️⃣ GEM: A global human pose estimation method for in-the-wild videos, natively compatible with SOMA. 🔗 lnkd.in/g_QAvRjn
4️⃣ Bones-SEED: A massive dataset of 150k+ motions in SOMA format, including data already retargeted for the Unitree G1, created with our partners at Bones Studio. 🔗 lnkd.in/gfx-QD-w 🔗 lnkd.in/gyNdTwQx
5️⃣ SOMA Retargeter: A dedicated tool for seamless motion retargeting from the SOMA skeleton to the Unitree G1. 🔗 lnkd.in/gqz9Na-H
6️⃣ ProtoMotions: Our high-performance simulation framework for training digital human(oid)s via RL, now with native SOMA support. 🔗 lnkd.in/gmvMikMU
This is just the beginning, and we have much more in the pipeline. Excited to see what the community builds next!
#NVIDIA #GTC #GTC2026 #Robotics #EmbodiedAI #PhysicalAI @NVIDIAAI
5 replies · 79 reposts · 424 likes · 46K views
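[Editor's note] The "universal interface" idea in the SOMA announcement above can be illustrated with a toy adapter that maps each body model's joint layout onto one shared skeletal naming. This is a hypothetical sketch; SOMA's real API is not shown in the post, and all names here are invented.

```python
from typing import Callable, Dict, List
import numpy as np

# Hypothetical shared skeleton: a fixed, ordered list of joint names that
# every body model gets mapped onto (stand-in for a SOMA-like standard).
CANONICAL_JOINTS = ["pelvis", "spine", "head", "l_hand", "r_hand"]

Adapter = Callable[[np.ndarray], Dict[str, np.ndarray]]

def make_adapter(joint_order: List[str]) -> Adapter:
    """Build an adapter from one model's joint order to the shared naming."""
    index = {name: i for i, name in enumerate(joint_order)}
    def adapt(joints: np.ndarray) -> Dict[str, np.ndarray]:
        # Reorder this model's joint positions into the canonical naming,
        # so downstream tools (retargeting, simulation) see one format.
        return {name: joints[index[name]] for name in CANONICAL_JOINTS}
    return adapt

# Two toy "models" that store the same joints in different orders.
smpl_like = make_adapter(["pelvis", "spine", "head", "l_hand", "r_hand"])
mhr_like = make_adapter(["head", "spine", "pelvis", "r_hand", "l_hand"])

joints = np.arange(15, dtype=float).reshape(5, 3)  # 5 joints, xyz each
a = smpl_like(joints)
b = mhr_like(joints[[2, 1, 0, 4, 3]])  # same joints, the other layout

print(all(np.array_equal(a[k], b[k]) for k in CANONICAL_JOINTS))  # True
```

Once both layouts land in the same canonical dictionary, a single retargeter or simulator can consume either source without model-specific branches, which is the interoperability benefit the announcement describes.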
Jinkun Cao reposted
Davis Rempe @davrempe
Need high-quality motion for humanoid robots or digital humans? Meet Kimodo: our new diffusion model trained on 700 hours of optical mocap data for easy, controllable, and high-fidelity motion generation. @NVIDIAAI research.nvidia.com/labs/sil/proje…
5 replies · 53 reposts · 206 likes · 41.3K views
Jinkun Cao reposted
NVIDIA Robotics @NVIDIARobotics
Newton 1.0 is now generally available. 🙌 Take robot learning to the next level with:
🤖 Stable Articulated & Complex Mechanism Simulation – accurate, reliable machine modeling.
🖐️ High-Fidelity Hydroelastic Contact Modeling – realistic soft contact and touch-based interactions.
🧵 Deformable Body Simulation – simulate cables, cloth, rubber, and other elastic materials with VBD.
⚡ Accelerated Robot Learning at Scale – seamless integration with open simulation and learning frameworks, NVIDIA Isaac Sim and Isaac Lab, for scalable workflows.
Learn how to integrate this open-source physics engine into your workflow: nvda.ws/3NGTzUo #NVIDIAGTC
25 replies · 157 reposts · 1.1K likes · 81.1K views
Jinkun Cao reposted
Zhengyi “Zen” Luo @zhengyiluo
288 hours of high-quality, text-annotated human motion data are now available! 140k motion sequences! Did you know that a large part of SONIC's training data is now open-sourced? Check out the dataset here 👇🏻 from our friends at Bones Studio! Full human + G1 retargeted motion!
Site 🌐: bones.studio/datasets/seed
Data 💿: huggingface.co/datasets/bones…
SONIC training code coming VERY VERY soon!
7 replies · 64 reposts · 311 likes · 28.9K views
Sasha Sax @iamsashasax
In a couple weeks I'm joining @AnthropicAI to work on pretraining, after nearly 3 years at FAIR developing post-training flywheels for physical intelligence (like SAM 3D). I'm stoked to build new capabilities for a model I personally love, with such thoughtful people.
35 replies · 9 reposts · 647 likes · 26.5K views
Jinkun Cao reposted
Jon Barron @jon_barron
If I were a grad student today, I would: 1) not write papers, 2) push my (agent-written) code to a public repo ~weekly, 3) maintain (via agents) a writeup.tex (manually verified) and a skill.md in the repo, and 4) work toward establishing skill usage as the new "citation" format.
16 replies · 28 reposts · 573 likes · 94.7K views