Eliot Xing

54 posts

@etaoxing

phd student @cmu_robotics @scsatcmu • prev. @georgiatech • reinforcement learning, differentiable simulation, robotics

Joined July 2018
321 Following · 390 Followers
Eliot Xing retweeted
NVIDIA Robotics @NVIDIARobotics
Newton 1.0 is now generally available. 🙌 Take robot learning to the next level with:
🤖 Stable Articulated & Complex Mechanism Simulation – accurate, reliable machine modeling.
🖐️ High-Fidelity Hydroelastic Contact Modeling – realistic soft contact and touch-based interactions.
🧵 Deformable Body Simulation – simulate cables, cloth, rubber, and other elastic materials with VBD.
⚡ Accelerated Robot Learning at Scale – seamless integration with open simulation and learning frameworks, NVIDIA Isaac Sim and Isaac Lab, for scalable workflows.
Learn how to integrate this open-source physics engine into your workflow: nvda.ws/3NGTzUo #NVIDIAGTC
Replies: 25 · Reposts: 159 · Likes: 1.1K · Views: 80K
Eliot Xing @etaoxing
check out the latest from @TairanHe99 and co. on visual sim2real humanoid loco-manipulation!! really enjoyed the timeline documenting the sim2real journey
Tairan He @TairanHe99

The Journey. This wasn't an overnight success. It took 6 months of building visual sim, distributed training, and infra from scratch. Robotics is hard. We've documented our failures and our path to success here: viral-humanoid.github.io/#sim2real-jour… 7/

Replies: 0 · Reposts: 1 · Likes: 17 · Views: 4.2K
Eliot Xing retweeted
Zhengyi “Zen” Luo @zhengyiluo
How do you give a humanoid general motion capability? Not just single motions, but all motion? Introducing SONIC, our new work on supersizing motion tracking for natural humanoid control.

We argue that motion tracking is the scalable foundation task for humanoids. So we "supersized" it: 9k+ GPU hours and 100M+ motion frames. But tracking alone is not enough; we show how to make a useful control system out of it:
- Universal Kinematic Planner: enables game-like gamepad control and high-level teleoperation, just like controlling a character in a game.
- VR Full-Body Teleop: direct, real-time whole-body control by a human wearing a VR headset.
- VR Keypoint Teleop: control the upper body (hands/head) while our planner handles robust locomotion automatically.
- VLA Integration: we connect this motion tracker to autonomous Vision-Language-Action (VLA) models for autonomous task execution!

We use a Universal Token Space to UNIFY this command space, turning our robust tracker into a general-purpose, programmable humanoid brain. This is the generalist "System 1" for humanoids. 🚀

Project: nvlabs.github.io/SONIC/ #Humanoids #Robotics #AI #FoundationModels #NVIDIAResearch 🧠🔥
Replies: 8 · Reposts: 59 · Likes: 236 · Views: 61K
Eliot Xing retweeted
Wenli Xiao @_wenlixiao
What if robots could improve themselves by learning from their own failures in the real world? Introducing 𝗣𝗟𝗗 (𝗣𝗿𝗼𝗯𝗲, 𝗟𝗲𝗮𝗿𝗻, 𝗗𝗶𝘀𝘁𝗶𝗹𝗹), a recipe that enables Vision-Language-Action (VLA) models to self-improve on high-precision manipulation tasks. PLD couples real-world residual reinforcement learning with standard supervised fine-tuning, letting robots discover, recover, and distill their own data flywheel. Quick 🧵
Replies: 26 · Reposts: 153 · Likes: 743 · Views: 182.7K
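To make the Probe-Learn-Distill recipe above concrete, here is a minimal sketch of the loop as the tweet describes it, not the authors' released code; `vla`, `residual`, `env`, and the MSE fine-tuning loss are all illustrative assumptions.

```python
# Hypothetical sketch of a Probe-Learn-Distill-style loop.
# `vla`, `residual`, and `env` are placeholder objects, not PLD's real API.
import torch
import torch.nn.functional as F

def probe_and_learn(vla, residual, env, obs):
    # Probe: the frozen VLA proposes a base action.
    with torch.no_grad():
        base_action = vla(obs)
    # Learn: a small residual policy, trained with real-world RL,
    # corrects the base action on high-precision tasks.
    action = base_action + residual(obs, base_action)
    next_obs, reward, done, info = env.step(action)
    return action, next_obs, reward, done

def distill(vla, successful_rollouts, optimizer):
    # Distill: supervised fine-tuning of the VLA on the corrected,
    # successful trajectories; MSE stands in for the actual SFT objective.
    for obs, action in successful_rollouts:
        loss = F.mse_loss(vla(obs), action)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The point of the split is the data flywheel: the residual policy discovers recoveries the base VLA lacks, and distillation folds them back into the base model.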
Eliot Xing retweeted
NVIDIA Robotics @NVIDIARobotics
The laws of physics apply everywhere. ⚛️ Co-developed by NVIDIA, @GoogleDeepMind, and Disney Research (Walt Disney Imagineering), the new Newton Beta, now managed by the @linuxfoundation, also runs with Isaac Lab and with MuJoCo Warp, which delivers warp speed for robot learning. Learn how to train a quadruped and simulate multiphysics in our tech blog. 🔗 nvda.ws/4gP2woL #CoRL2025
Replies: 25 · Reposts: 125 · Likes: 809 · Views: 83K
Eliot Xing @etaoxing
cool work from @Jsphamigo! decoupling first-order RL: data from a nondifferentiable simulator + gradients from a learned differentiable dynamics model, for large sample-efficiency gains and quadruped sim2real
Joseph Amigo @Jsphamigo

Introducing our new work DMO: Decoupled Model-based policy Optimization! First-order gradient RL that unrolls trajectories with high-fidelity sims & computes gradients via learned models. Paper & demos: machines-in-motion.github.io/DMO/ #CoRL2025 w/ @Rk4342R

Replies: 0 · Reposts: 0 · Likes: 3 · Views: 867
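The decoupling in DMO is easy to miss, so here is a minimal sketch of the idea under generic PyTorch assumptions; `sim`, `policy`, `model`, and `reward_fn` are placeholder names, not the paper's API. States come from the high-fidelity, nondifferentiable simulator; gradients come from the learned differentiable dynamics model.

```python
# Sketch of a decoupled first-order policy update (DMO-style idea).
import torch

def first_order_update(sim, policy, model, reward_fn, s0, horizon, opt):
    # 1) Unroll the trajectory in the nondifferentiable simulator.
    states = [s0]
    with torch.no_grad():
        for t in range(horizon):
            a = policy(states[-1])
            states.append(sim.step(states[-1], a))
    # 2) Re-evaluate each transition through the learned differentiable
    #    model, so the return is differentiable w.r.t. policy parameters.
    total = 0.0
    for t in range(horizon):
        a = policy(states[t])
        s_next = model(states[t], a)  # differentiable surrogate of sim.step
        total = total + reward_fn(s_next, a)
    # 3) First-order gradient ascent on the surrogate return.
    opt.zero_grad()
    (-total).backward()
    opt.step()
```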
Eliot Xing retweeted
Carolina Parada @parada_car88104
📣MuJoCo announcement 📣 Thrilled to share that @GoogleDeepMind has unveiled MuJoCo-Warp at @nvidia's #GTC25! 🚀 We've expanded our open-source MuJoCo simulator with MuJoCo-Warp, leveraging NVIDIA’s Warp framework for incredible acceleration. This marks a significant step in making high-performance simulation more accessible.
[GIF]
Replies: 16 · Reposts: 80 · Likes: 607 · Views: 81.4K
Eliot Xing @etaoxing
exciting news on (differentiable?) simulation using Warp
Replies: 1 · Reposts: 0 · Likes: 38 · Views: 3.4K
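For a picture of what "(differentiable?) simulation using Warp" can mean, here is a toy ballistic example with Warp's wp.Tape, which records kernel launches and replays their adjoints to produce gradients. This is illustrative, not MuJoCo-Warp itself, and all kernel and array names below are mine.

```python
import warp as wp

wp.init()

@wp.kernel
def integrate(x_in: wp.array(dtype=wp.vec3), v_in: wp.array(dtype=wp.vec3),
              x_out: wp.array(dtype=wp.vec3), v_out: wp.array(dtype=wp.vec3),
              dt: float):
    # Semi-implicit Euler under gravity; double-buffered so the tape
    # can replay adjoints against unmodified forward values.
    tid = wp.tid()
    v_out[tid] = v_in[tid] + wp.vec3(0.0, -9.8, 0.0) * dt
    x_out[tid] = x_in[tid] + v_out[tid] * dt

@wp.kernel
def distance_loss(x: wp.array(dtype=wp.vec3), target: wp.vec3,
                  loss: wp.array(dtype=float)):
    tid = wp.tid()
    d = x[tid] - target
    wp.atomic_add(loss, 0, wp.dot(d, d))

n, steps, dt = 4, 16, 1.0 / 60.0
xs = [wp.array([wp.vec3(0.0, 1.0, 0.0)] * n, dtype=wp.vec3, requires_grad=True)]
vs = [wp.array([wp.vec3(1.0, 0.0, 0.0)] * n, dtype=wp.vec3, requires_grad=True)]
loss = wp.zeros(1, dtype=float, requires_grad=True)

tape = wp.Tape()
with tape:
    for _ in range(steps):
        xs.append(wp.zeros(n, dtype=wp.vec3, requires_grad=True))
        vs.append(wp.zeros(n, dtype=wp.vec3, requires_grad=True))
        wp.launch(integrate, dim=n, inputs=[xs[-2], vs[-2], xs[-1], vs[-1], dt])
    wp.launch(distance_loss, dim=n, inputs=[xs[-1], wp.vec3(2.0, 0.0, 0.0), loss])

tape.backward(loss)
print(tape.gradients[vs[0]])  # d(loss)/d(initial velocity)
```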
Eliot Xing @etaoxing
Safety is hard to ensure in human-robot interaction with rigid manipulators. We present a soft robot system that provides safer interaction for hair care. @UksangYoo has also used his soft manipulator for more dexterous tasks like pen spinning; check out his full thread!
Uksang Yoo @UksangYoo

🎉Excited to share that our paper was a finalist for best paper at #HRI2025! We introduce MOE-Hair, a soft robot system for hair care 💇🏻💆🏼 that uses mechanical compliance and visual force sensing for safe, comfortable interaction. Check it out: moehair.github.io 🧵1/7

Replies: 0 · Reposts: 2 · Likes: 8 · Views: 1.5K
Eliot Xing @etaoxing
@aaronwetzler no, total fps for forward sim is 1.4k with 32 envs; see Appendix F.5 of the paper!
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 30
Eliot Xing @etaoxing
RL is notoriously sample inefficient. How can we scale RL on tasks much slower to simulate than rigid body physics, such as soft bodies? In our #ICLR2025 spotlight, we introduce both a new first-order RL algorithm, SAPO, and a differentiable simulation platform, Rewarped. 1/n
Replies: 8 · Reposts: 59 · Likes: 347 · Views: 32.6K
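For context, SAPO sits in the family of first-order methods that backpropagate the return through a differentiable simulator; below is a generic sketch of that family, not SAPO itself. `diff_sim` stands in for a platform like Rewarped, and its `step` is assumed to return a differentiable (state, reward) pair.

```python
# Generic first-order (analytic-policy-gradient) update through a
# differentiable simulator. Illustrative only; not SAPO.
import torch

def analytic_policy_gradient_update(policy, diff_sim, s0, horizon, opt,
                                    gamma=0.99):
    s, ret, discount = s0, 0.0, 1.0
    for t in range(horizon):
        a = policy(s)
        s, r = diff_sim.step(s, a)  # differentiable w.r.t. s and a
        ret = ret + discount * r
        discount *= gamma
    # Gradients flow reward -> state -> action -> policy parameters
    # through the whole rollout, instead of being estimated from samples.
    opt.zero_grad()
    (-ret).backward()
    opt.step()
```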
Eliot Xing @etaoxing
@Rawlala1 Thanks! We use physics solvers in Warp, with a custom MPM solver for some of the soft bodies.
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 116
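Since a custom MPM solver comes up here: as a hedged illustration (not the paper's solver), this is the particle-to-grid scatter at the heart of MPM, written as a Warp kernel. A real MPM transfer uses quadratic B-spline weights over a neighborhood of grid nodes; the nearest-node version below only shows the atomic-scatter pattern.

```python
import warp as wp

wp.init()

@wp.kernel
def p2g_mass(positions: wp.array(dtype=wp.vec3),
             masses: wp.array(dtype=float),
             grid_mass: wp.array3d(dtype=float),
             inv_dx: float):
    tid = wp.tid()
    g = positions[tid] * inv_dx  # particle position in grid units
    i = int(g[0])
    j = int(g[1])
    k = int(g[2])
    # Many particles can land in the same cell, so scatter atomically.
    wp.atomic_add(grid_mass, i, j, k, masses[tid])

n, res = 1024, 64
pos = wp.array([wp.vec3(0.5, 0.5, 0.5)] * n, dtype=wp.vec3)
mass = wp.full(n, 1.0, dtype=float)
grid = wp.zeros((res, res, res), dtype=float)
wp.launch(p2g_mass, dim=n, inputs=[pos, mass, grid, float(res)])
```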
Rawlala @Rawlala1
@etaoxing is this PhysX under the hood? very nice work
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 115
Eliot Xing @etaoxing
@aaronwetzler thanks! fps varies depending on the task; we use a few thousand particles each × 32 envs for MPM simulation. 1 env for handflip runs at 210 fps.
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 225
Aaron Wetzler @aaronwetzler
@etaoxing Looks great! What fps did this run at when you recorded? And how many facets/particles/triangles for each sim?
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 271