

Gu Zhang
@Gu__Zhang
CS PhD @Tsinghua_IIIS | Prev. Student Researcher @MIT_CSAIL @SJTU1896 | Research interests focus on dexterous manipulation


🤖Low-data post-training can teach a VLA policy a new robot skill. But it also makes it too attached to the training demos. We call this lock-in🔒: the policy can execute the post-training task, yet fails to respond to seemingly obvious prompt changes. DeLock preserves steerability using only the policy’s own pretrained knowledge. No extra supervision needed!🚀🚀🚀 #Robotics #AI #EmbodiedAI #VLA


What's different between these two BC policies? Same architecture, training budget, and data collection setup — the only difference is the controller gains!

Controller gains are an understudied design parameter in robot learning. In our new work (w/ @BronarsToni*, @pulkitology), we show how they act as an inductive bias across BC, RL, and Sim2Real transfer, with real consequences for performance.

Here's what we found 🧵
* Equal Contribution
📄 arxiv: arxiv.org/abs/2604.02523
🔗 website: younghyopark.me/tune-to-learn/

Hear more about GEN-1 and improvisational intelligence from our own researchers @coolboi95 and @felixwyw


#CVPR2026 Introducing UniDex — a foundation suite for universal dexterous hand control from egocentric human videos.

We build a complete pipeline: human videos → 50K+ dexterous trajectories across 8 robot hands → one unified vision-language-action policy that controls all of them.

Key highlights:
📊 UniDex-Dataset: 50K+ trajectories, 8 hands (6-24 DoFs)
🤖 FAAS: a unified dexterous action space enabling cross-embodiment transfer
🧠 UniDex-VLA: one 3D VLA policy for all different hands
🎯 81% task progress on 5 challenging tool-use tasks
🔄 Zero-shot cross-hand transfer

UniDex is accepted to CVPR 2026. Fully open-sourced.
🌐 Project: unidex-ai.github.io
📄 Paper: arxiv.org/abs/2603.22264
💻 Code: github.com/unidex-ai/UniD…
🤗 Model: huggingface.co/UniDex-ai/UniD…
🧵 ↓

