OpenDriveLab

213 posts


@OpenDriveLab

Official account for OpenDriveLab @hkuniversity and Beyond. We do cutting-edge research in Robotics and Autonomous Driving. Email: [email protected]

Hong Kong · Joined June 2022
57 Following · 1.6K Followers
Pinned Tweet
OpenDriveLab
OpenDriveLab@OpenDriveLab·
📣 #Recognition #ResearchAward As we usher in the Year of the Horse, our team recognizes members who made exceptional contributions in their areas over the past year. Congratulations!!! Let’s rock ‘n’ roll in 2026 🕶️🍾🎆🥂 #opendrivelab
[4 images]
2 replies · 2 reposts · 8 likes · 798 views
OpenDriveLab
OpenDriveLab@OpenDriveLab·
Proud to announce strategic partnerships with three leading Embodied AI companies. Together with Unitree, Noitom Robotics, and BrainCo, HKU’s Embodied Intelligence Joint Lab is live at our Zhangjiang base. We’re in this for the long game: turning embodied intelligence into a durable, shared foundation for what comes next. Builders: let’s collaborate. Press Release: hku.hk/press/news_det… @HKUniversity @hkudatascience @HKU_CDS @YiMaTweets @francislee2020 @UnitreeRobotics @noitomrobotics @BrainCo_Tech #EmbodiedAI #Robotics
[2 images]
0 replies · 4 reposts · 18 likes · 3.2K views
OpenDriveLab reposted
Yixuan Pan
Yixuan Pan@yixuanpan99·
Just saw Sonic — it’s incredibly solid. By far the best whole-body control model I’ve seen so far. It opens up an invaluable path to acquiring full-body coordination data for humanoid robots. Huge congrats to the team and big thanks for open-sourcing it! 🙌🔓
Zhengyi “Zen” Luo@zhengyiluo

SONIC is now open-source! Generalist whole-body teleoperation for EVERYONE! Our team has long been building comprehensive pipelines for whole-body control, kinematic planning, and teleoperation, and they will all be shared. This will be a continuously updated release: inference code + model are already there; training code and GR00T integration are coming soon! Code: github.com/NVlabs/GR00T-W… Docs: nvlabs.github.io/GR00T-WholeBod… Site: nvlabs.github.io/GEAR-SONIC/

2 replies · 10 reposts · 56 likes · 5.5K views
OpenDriveLab reposted
Jitendra MALIK
Jitendra MALIK@JitendraMalikCV·
At the RI seminar at CMU yesterday, I presented a 3-level analysis of robot skills & discussed the pros and cons of teleoperation, simulation, and learning from videos, before presenting our research. Enjoy! youtube.com/watch?v=ry8iti…
[YouTube video]
9 replies · 38 reposts · 357 likes · 100.7K views
OpenDriveLab reposted
Jiazhi Yang
Jiazhi Yang@jiazhi_yang2024·
🧐 Applying world models to improve real-world policies on challenging manipulation tasks used to be considered out of reach.
😌 After sustained effort, we’re now seeing encouraging progress.
🚀 Thrilled to introduce RISE: Self-Improving Robot Policy with Compositional World Model opendrivelab.com/kai0-rl/ arxiv.org/abs/2602.11075
RISE is, to our knowledge, the first work to use a world model as an effective learning environment for challenging real-world manipulation, enabling policy improvement on tasks that demand high dynamics, dexterity, and precision. Incredible teamwork with @lin_kunyang111 @francislee2020 @YueXiangyu @HaoZhao_AIRSUN @smch_1127
9 replies · 55 reposts · 315 likes · 46.8K views
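The RISE announcement above describes using a learned world model as the learning environment for a real-world manipulation policy. As a rough, hypothetical illustration of that idea only (this is not the RISE method; every class, number, and the hill-climbing update here are toy stand-ins), a policy can be improved entirely inside an imagined dynamics model, without touching the real robot:

```python
# Toy sketch: improve a policy inside a learned ("imagined") world model.
# All components are illustrative stand-ins, not the RISE implementation.
import random

class ToyWorldModel:
    """Stands in for a learned dynamics model: a 1-D state the agent
    should drive to zero; reward is negative distance from zero."""
    def reset(self):
        self.state = 5.0
        return self.state

    def step(self, action):
        self.state += action              # imagined transition
        reward = -abs(self.state)         # closer to 0 is better
        return self.state, reward

def improve_policy(gain, wm, episodes=50):
    """Hill-climb the gain of a proportional controller u = -gain * s,
    scoring candidates purely by rollouts inside the world model."""
    def rollout(g):
        s, total = wm.reset(), 0.0
        for _ in range(10):
            s, r = wm.step(-g * s)
            total += r
        return total

    for _ in range(episodes):
        candidate = gain + random.uniform(-0.1, 0.1)
        if rollout(candidate) > rollout(gain):
            gain = candidate              # keep the improved policy
    return gain

random.seed(0)
print(improve_policy(0.1, ToyWorldModel()))
```

The point of the sketch is only the loop structure: all policy evaluation happens against the model's imagined trajectories, so policy improvement costs no real-world rollouts.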
OpenDriveLab reposted
Chonghao Sima
Chonghao Sima@smch_1127·
🧥 Live-streamed robotic teamwork that folds clothes: 6 garments in 3 minutes straight. χ₀ = 20 hrs data + 8 A100s + 3 key insights:
- Mode Consistency: align your distributions
- Model Arithmetic: merge, don't retrain
- Stage Advantage: pivot wisely
🔗 mmlab.hk/research/kai0
1 reply · 3 reposts · 12 likes · 1.8K views
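The χ₀ thread above names "Model Arithmetic: merge, don't retrain" as a key insight. The most common form of model arithmetic is linear interpolation of checkpoint weights; whether χ₀ uses exactly this scheme is an assumption here, so treat the sketch as a generic illustration with hypothetical parameter layouts:

```python
# Generic "model arithmetic" sketch: merge two checkpoints by linearly
# interpolating their weights (alpha*A + (1-alpha)*B). Checkpoints are
# represented as dicts of flat parameter lists for simplicity.
def merge_models(params_a, params_b, alpha=0.5):
    """Return a new checkpoint interpolated between two existing ones."""
    assert params_a.keys() == params_b.keys(), "checkpoints must match"
    return {
        name: [alpha * x + (1 - alpha) * y
               for x, y in zip(params_a[name], params_b[name])]
        for name in params_a
    }

# Toy usage with two tiny "checkpoints"
ckpt_a = {"w": [1.0, 1.0], "b": [0.0]}
ckpt_b = {"w": [0.0, 0.0], "b": [1.0]}
merged = merge_models(ckpt_a, ckpt_b, alpha=0.5)
print(merged["w"], merged["b"])  # → [0.5, 0.5] [0.5]
```

The appeal of merging over retraining is that it costs one pass over the weights rather than more gradient steps, at the price of only working well when the checkpoints are compatible (e.g., fine-tuned from a shared initialization).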
OpenDriveLab reposted
Modi Shi
Modi Shi@idomihs·
Humanoid robots have been prisoners of the lab. We set them free — with human data. We present EgoHumanoid: The first demonstration of human-to-humanoid transfer for whole-body loco-manipulation. 🔗 Home: opendrivelab.com/EgoHumanoid 📑 Arxiv: arxiv.org/abs/2602.10106 🧵👇
5 replies · 15 reposts · 61 likes · 8.2K views
OpenDriveLab
OpenDriveLab@OpenDriveLab·
⁉️ Humans perform loco-manipulation everywhere, every day.
What if robots could learn from that? 📣 Our latest work EgoHumanoid introduces the first human–humanoid co-training framework, transferring in-the-wild #Egocentric data to real humanoids. 🎯 The future of robotics is human-driven data at scale.
Let’s build the #DataInfra together.
Modi Shi@idomihs

Humanoid robots have been prisoners of the lab. We set them free — with human data. We present EgoHumanoid: The first demonstration of human-to-humanoid transfer for whole-body loco-manipulation. 🔗 Home: opendrivelab.com/EgoHumanoid 📑 Arxiv: arxiv.org/abs/2602.10106 🧵👇

0 replies · 1 repost · 8 likes · 930 views
OpenDriveLab
OpenDriveLab@OpenDriveLab·
【5/5】Big shout-out to our amazing collaborators Siqi Liang, @ilnehc , Yuxian Li, Yukuan Xu, Yichao Zhong, Fu Zhang, @francislee2020 🎆🎆
0 replies · 0 reposts · 3 likes · 133 views
OpenDriveLab
OpenDriveLab@OpenDriveLab·
【4/5】The ultimate test? Tai Ping Shan. 🏔️ Steep slopes, long horizons, pitch-black night - all in one. SparseVideoNav handles it like a champ. 🔥
1 reply · 0 reposts · 2 likes · 137 views
OpenDriveLab reposted
Chonghao Sima
Chonghao Sima@smch_1127·
Glad to share my and my colleagues' work from the last research cycle; good luck and have fun. An upvote on HF is appreciated!
χ0: huggingface.co/papers/2602.09… A resource-efficient robotic manipulation framework that addresses distributional shifts through model arithmetic, stage-aware advantage estimation, and train-deploy alignment to achieve long-horizon task reliability.
RISE: huggingface.co/papers/2602.11… The first work to use a world model as an effective learning environment for challenging real-world manipulation, enabling policy improvement on tasks that demand high dynamics, dexterity, and precision.
EgoHumanoid: huggingface.co/papers/2602.10… The first demonstration of human-to-humanoid transfer for whole-body loco-manipulation.
SparseVideoNav: huggingface.co/papers/2602.05… Vision-language navigation systems traditionally require detailed instructions but can be improved by incorporating video generation models with sparse future planning for faster, more efficient real-world deployment.
1 reply · 15 reposts · 27 likes · 2.5K views
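The χ0 summary above mentions "stage-aware advantage estimation" for long-horizon tasks. Purely as an illustration of what stage-awareness can mean (the function name, the per-stage mean baseline, and the toy numbers are all assumptions, not the paper's method), one simple form is to baseline each return against the mean of its own task stage rather than a single global baseline:

```python
# Illustrative per-stage advantage estimation: each return is compared
# against the average return of its own stage (e.g., "grasp" vs "fold"),
# so a hard stage's low returns are not penalized against an easy stage's.
from collections import defaultdict

def stage_advantages(returns, stages):
    """Return advantages = return - mean return of that sample's stage."""
    sums, counts = defaultdict(float), defaultdict(int)
    for r, s in zip(returns, stages):
        sums[s] += r
        counts[s] += 1
    baseline = {s: sums[s] / counts[s] for s in sums}
    return [r - baseline[s] for r, s in zip(returns, stages)]

# Toy usage: two stages with very different return scales
print(stage_advantages([1.0, 3.0, 10.0, 14.0],
                       ["grasp", "grasp", "fold", "fold"]))
# → [-1.0, 1.0, -2.0, 2.0]
```

With a single global baseline (mean 7.0), every "grasp" sample would look bad and every "fold" sample good; stage-wise baselining instead compares each sample to its peers within the same stage.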
OpenDriveLab
OpenDriveLab@OpenDriveLab·
Key Technical Highlights:
- 21 fully actuated degrees of freedom
- 350 g hand weight (excluding remote motors)
- No overheating, thanks to remote actuation
- 15 tactile sensors with 1 mm spatial and 0.1 N force resolution
- 2 palm cameras with 140° FoV and controllable LED
- 21 finger-joint angle sensors
- 21 tension-sensing & self-tightening devices
- Tendon connectors for rapid finger / motor replacement
- SPI/I2C interfaces for in-hand sensors
- TTL/CAN interfaces for standard motors
- 3D-printed structure for easy mechanical modifications
- Arduino programming for easy software modifications
- Fully open source, including CAD, assembly guide, electronics, and software
- ~USD 1,400 material cost
0 replies · 0 reposts · 4 likes · 290 views
OpenDriveLab
OpenDriveLab@OpenDriveLab·
We're inviting a small group of hands-on researchers and makers to co-iterate on MM-Hand 1.0, an open-source, low-cost dexterous hand. Selected applicants will receive beta-version hardware in early 2026 at cost price (~USD 1,400, tax not included) and direct engineering help.
2 replies · 0 reposts · 4 likes · 322 views