Zixuan Chen

115 posts

Zixuan Chen
@C___eric417

PhD student at @UCSanDiego; Bachelor's Degree at @FudanUni

San Diego, CA · Joined August 2016
306 Following · 469 Followers

Pinned Tweet
Zixuan Chen @C___eric417 ·
🚀Introducing GMT — a general motion tracking framework that enables high-fidelity motion tracking on humanoid robots by training a single policy from large, unstructured human motion datasets. 🤖A step toward general humanoid controllers.
Project Website: gmt-humanoid.github.io
Paper PDF: gmt-humanoid.github.io/resources/gmt.…
Code & Examples: github.com/zixuan417/huma…
This work is co-led by me and @jimazeyu, in collaboration with @xuxin_cheng, @Xuanbin_Peng, @xbpeng4, and @xiaolonw
Zixuan Chen retweeted
Yue Wang @yuewang314 ·
Introducing Ψ₀ (psi-lab.ai/Psi0) — an open foundation model for universal humanoid loco-manipulation.
🏆 Outperforms GR00T N1.6 by 40%+ overall success rate
📉 Uses only ~10% of the pre-training data
📦 Fully open-source: model, data, code, and deployment pipeline
1/10
Zixuan Chen retweeted
Davis Rempe @davrempe ·
Need high-quality motion for humanoid robots or digital humans? Meet Kimodo: our new diffusion model trained on 700 hours of optical mocap data for easy, controllable, and high-fidelity motion generation. @NVIDIAAI research.nvidia.com/labs/sil/proje…
Zixuan Chen retweeted
Zhikai Zhang @Zhikai273 ·
🎾Introducing LATENT: Learning Athletic Humanoid Tennis Skills from Imperfect Human Motion Data
Dynamic movements, agile whole-body coordination, and rapid reactions. A step toward athletic humanoid sports skills.
Project: zzk273.github.io/LATENT/
Code: github.com/GalaxyGeneralR…
Zixuan Chen retweeted
Jason Peng @xbpeng4 ·
A nice little quality-of-life update: MimicKit now supports video logging. You can monitor the agent's behavior during training on WandB and TensorBoard: github.com/xbpeng/MimicKi… We also added an implementation of Lipschitz-Constrained Policies for training smooth controllers.
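For readers new to the smoothness idea mentioned here: for a network with 1-Lipschitz activations such as ReLU, the product of the layers' spectral norms upper-bounds the network's Lipschitz constant, so constraining those norms limits how sharply the action can change for a small change in observation. A minimal numpy sketch of that bound (not the MimicKit implementation; the toy policy and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer policy: 4-D observation -> 1-D action, ReLU in between.
W1 = 0.5 * rng.standard_normal((16, 4))
W2 = 0.5 * rng.standard_normal((1, 16))

def policy(obs):
    return W2 @ np.maximum(W1 @ obs, 0.0)

# ReLU is 1-Lipschitz, so the product of the layers' spectral norms
# bounds the policy's Lipschitz constant: ||f(x)-f(y)|| <= L * ||x-y||.
L = np.linalg.norm(W1, ord=2) * np.linalg.norm(W2, ord=2)

x, y = rng.standard_normal(4), rng.standard_normal(4)
lhs = np.linalg.norm(policy(x) - policy(y))
rhs = L * np.linalg.norm(x - y)
assert lhs <= rhs + 1e-9  # the bound holds for any pair of inputs
```

Keeping L small (e.g. by penalizing or normalizing the spectral norms during training) is one standard way to obtain the smooth controllers the tweet refers to.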
Zixuan Chen retweeted
Toru @ToruO_O ·
Peeling a potato is trivial -- until you try to make a robot do it with a knife. This is actually one of the hardest problems in manipulation: contact-rich, force-sensitive, and success is subjective. We taught a robot arm to peel with >90% success :D toruowo.github.io/peel
Zixuan Chen retweeted
Xialin He @Xialin_He ·
Real-world loco-manipulation demands more than replaying fixed reference motions. We argue that true autonomy requires two capabilities:
1️⃣ flexibly leveraging whatever signals are available — dense references, partial cues, state estimates, or egocentric perception
2️⃣ remaining capable when any of these signals are missing or unreliable
We introduce ULTRA — an all-in-one controller for unified humanoid loco-manipulation 🤖
It supports:
• general reference tracking
• sparse goal following
• execution with motion capture
• execution with egocentric perception
🔗 Project page: ultra-humanoid.github.io
C Zhang @ChongZitaZhang ·
@ChenTessler for me, all normalization seems to disturb stability in RL...
Chen Tessler @ChenTessler ·
The interplay between various components in RL can sometimes be very frustrating 😅 An experiment that wouldn't work is now fine after turning off obs and value normalization. Every other experiment works better with both turned on.
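For context on the knobs being toggled in this thread: observation normalization typically keeps a running mean and variance per observation dimension and standardizes inputs with them (value normalization does the same for return targets). A minimal Welford-style sketch, not tied to any particular RL library:

```python
import numpy as np

class RunningNorm:
    """Standardize observations with a running (Welford-style) mean/variance."""
    def __init__(self, dim, eps=1e-8):
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)   # placeholder until the first update
        self.count = 0
        self.eps = eps

    def update(self, x):
        # Online mean/variance update; equivalent to Welford's algorithm.
        self.count += 1
        delta = x - self.mean
        self.mean = self.mean + delta / self.count
        self.var = self.var + (delta * (x - self.mean) - self.var) / self.count

    def __call__(self, x):
        return (x - self.mean) / np.sqrt(self.var + self.eps)

rng = np.random.default_rng(0)
norm = RunningNorm(3)
obs = rng.normal(loc=5.0, scale=2.0, size=(5000, 3))  # raw observations
for x in obs:
    norm.update(x)
z = np.array([norm(x) for x in obs])
# After many updates the normalized stream is roughly zero-mean, unit-std.
```

One plausible source of the interplay described above: because the statistics keep moving during training, the policy's effective input (and the critic's target scale) is non-stationary, which can either stabilize or destabilize a run depending on the rest of the stack.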
Zixuan Chen retweeted
Unitree @UnitreeRobotics ·
Unitree Spring Festival Gala Robots — a Full Release of Additional Details 🥳
Dozens of G1 robots achieved the world’s first fully autonomous humanoid robot cluster Kung Fu performance (with quick movement), pushing motion limits and setting multiple world firsts! H2 made striking appearances at both the Beijing main venue and the Yiwu sub-venue, clad in the Monkey King’s heavy armor and riding a “somersault cloud” played by B2W quadruped robot dogs, delivering New Year blessings from the clouds.
Zixuan Chen retweeted
Zi-ang Cao @ziang_cao ·
🚀 Introducing CHIP: Adaptive Compliance for Humanoid Control through Hindsight Perturbation!
Current humanoids face a trade-off: they are either Agile & Stiff OR Slow & Soft. CHIP breaks this barrier.
We enable on-the-fly switching between Compliant (wiping 🧼, collaborative holding 📦) and Stiff (lifting dumbbells 🏋️, opening doors 🚪💪) behaviors—all while maintaining agile skills like running! 🏃💨
Website: nvlabs.github.io/CHIP/
Join me for a deep dive on how CHIP enables adaptive control for complex tasks. 🧵↓
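The compliant-vs-stiff trade-off is easiest to see in a joint-space impedance law, where stiffness is just the proportional gain: a low gain yields to external pushes, a high gain tracks the reference rigidly. A hypothetical sketch of that idea (not CHIP's method; the gains and unit-inertia damping rule are illustrative):

```python
import numpy as np

def impedance_torque(q, qd, q_des, kp, zeta=0.7):
    """PD impedance law: tau = Kp*(q_des - q) - Kd*qd.
    Kd is derived from Kp for a fixed damping ratio (unit inertia assumed),
    so switching Kp on the fly moves the joint between compliant and stiff."""
    kd = 2.0 * zeta * np.sqrt(kp)
    return kp * (q_des - q) - kd * qd

# Same 0.1 rad tracking error, two stiffness modes:
tau_soft  = impedance_torque(q=0.0, qd=0.0, q_des=0.1, kp=20.0)   # compliant
tau_stiff = impedance_torque(q=0.0, qd=0.0, q_des=0.1, kp=500.0)  # stiff
```

With identical tracking error, the stiff mode commands far more corrective torque — which is exactly why it resists perturbations but cannot safely hold an object together with a human.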
Zixuan Chen retweeted
Changwei Jing @cwj99770123 ·
Can we bridge the Sim-to-Real gap in complex manipulation without explicit system ID? 🤖
Presenting Contact-Aware Neural Dynamics — a diffusion-based framework that grounds simulation with real-world touch.
Implicit Alignment: No tedious parameter tuning.
Tactile-Driven: Captures non-smooth contact events.
Consistent: Stable predictions in contact-rich tasks.
Zixuan Chen retweeted
Brett Adcock @adcock_brett ·
Today we're introducing Helix 02.
Dancing robots are trivial; the hard part is intelligent control.
This is our most powerful model to date - able to work across complex tasks & long time horizons x.com/Figure_robot/s…
Zixuan Chen retweeted
Yutong Liang @YutongLiang_ ·
How far can we push the limit of in-hand manipulation dexterity? Introducing our work on motion capture: DexterCap & DexterHand!
DexterCap: A high-fidelity motion capture system for intricate in-hand manipulation motion.
DexterHand: A dataset featuring true in-hand dexterity, reorientation, finger gaiting, and even manipulating a Rubik's Cube like a speedcuber! 🧩
- Project Page: pku-mocca.github.io/Dextercap-Page/
- Experience it via our online interactive visualization: lyt0112.com/projects/Dexte…
#Animation #CharacterAnimation #MotionCapture #Graphics #EmbodiedAI #DexterousManipulation
Zixuan Chen retweeted
Chen Tessler @ChenTessler ·
At @nvidia, we built ProtoMotions to help us, and researchers worldwide, innovate quickly without compromising on applicability. We're proud to announce ProtoMotions3 -- our biggest release yet! 🧵👇
Zixuan Chen retweeted
Haoru Xue @HaoruXue ·
Reality of robotics: humanoid kung fu is solved before they can open doors with RGB. Here we are. Introducing the frontier of sim2real at NVIDIA GEAR. 100% sim data. RGB input only. Code name: 𝗗𝗼𝗼𝗿𝗠𝗮𝗻. We are opening the sim-to-real door. doorman-humanoid.github.io 🧵
Zixuan Chen retweeted
Xueyan Zou @xyz2maureen ·
I will join Tsinghua University, College of AI, as an Assistant Professor in the coming month. I am actively looking for 2026 spring interns and future PhDs (ping me if you are at #NeurIPS).
It has been an incredible journey of 10 years since I attended an activity organized by Tsinghua University and decided to change my undergraduate major from Economics to Computer Science, inspired by one of my teammates. Over those 10 years, I have met many wonderful researchers and professors whom I deeply appreciate and who led me to continued growth. 🐿️
My research focus will continue to be AI & Robotics, with a specific emphasis on Interactive Embodied Intelligence. You can check my homepage to learn more: maureenzou.github.io/lab.html.
I am currently local to San Diego and will be attending #NeurIPS. Please ping me over WeChat or email if any old or new friends are interested in having a coffee chat! (Really looking forward to meeting as many friends as possible at #NeurIPS)
[The photo is one of the places that I will miss a lot in the US]
Zixuan Chen retweeted
Carlo Sferrazza @carlo_sferrazza ·
Sim-to-real learning for humanoid robots is a full-stack problem. Today, Amazon FAR is releasing a full-stack solution: Holosoma. To accelerate research, we are open-sourcing a complete codebase covering multiple simulation backends, training, retargeting, and real-world inference.
Zixuan Chen retweeted
Rui Yan @Hi_Im_RuiYan ·
Meet ACE-F — a novel, foldable teleoperation platform for collecting high-quality robot demonstration data across robot embodiments.
Using a specialized soft-controller pipeline, we interpret end-effector positional deviations as virtual force signals to provide the user with force feedback, without requiring expensive sensors!
ACE-F simplifies control for a diverse array of robot platforms, making complex tasks that require dexterous manipulation highly intuitive.
Check out the project website here! acefoldable.github.io
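The deviation-to-force idea described above can be sketched as a virtual spring: when the robot cannot follow the commanded pose (e.g. it is pressing into an object), the tracking error grows and is rendered back to the operator as resistance. A minimal sketch under that assumption — the gain `k_virtual` and clamp `f_max` are illustrative, not values from the ACE-F system:

```python
import numpy as np

def virtual_force(cmd_pos, actual_pos, k_virtual=80.0, f_max=20.0):
    """Render end-effector positional deviation as a virtual spring force.
    cmd_pos/actual_pos are 3-D positions in meters; returns a force in N."""
    f = k_virtual * (np.asarray(cmd_pos, float) - np.asarray(actual_pos, float))
    norm = np.linalg.norm(f)
    if norm > f_max:          # clamp so the rendered feedback stays bounded
        f *= f_max / norm
    return f

free    = virtual_force([0.3, 0.0, 0.5], [0.3, 0.0, 0.5])  # tracking well
contact = virtual_force([0.3, 0.0, 0.5], [0.3, 0.0, 0.1])  # blocked in z
```

In free motion the deviation (and hence the feedback) is near zero; in contact the operator feels a clamped resistive force along the blocked axis, all without a force/torque sensor.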
Zixuan Chen retweeted
Tairan He @TairanHe99 ·
Zero teleoperation. Zero real-world data. ➔ Autonomous humanoid loco-manipulation in reality.
Introducing VIRAL: Visual Sim-to-Real at Scale.
We achieved 54 autonomous cycles (walk, stand, place, pick, turn) using a simple recipe:
1. RL
2. Simulation
3. GPUs
Website: viral-humanoid.github.io
Arxiv: arxiv.org/abs/2511.15200
Deep dive with me: 🧵