
Yide Shentu
35 posts

[Accepted to ICRA 2026!] 🚀 Introducing EgoMI: an egocentric manipulation interface that captures synchronized 6-DoF head and hand trajectories from egocentric human demonstrations! The data transfers to IL policies zero-shot, without visual augmentation or on-embodiment data. 1/n
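Capturing "synchronized" head and hand trajectories implies resampling two pose streams recorded at different rates onto common timestamps. Below is a minimal sketch of that idea, with assumptions: mock data, made-up 30/60 Hz rates, an xyz+rpy pose layout, and per-dimension linear interpolation (a real pipeline would slerp quaternions for the rotational part rather than lerp Euler angles). None of this is EgoMI's actual implementation.

```python
import numpy as np

def resample(t_src, poses, t_query):
    """Per-dimension linear interpolation of a (T, 6) pose stream
    onto the query timestamps."""
    return np.stack(
        [np.interp(t_query, t_src, poses[:, d]) for d in range(6)], axis=1
    )

# Mock streams: head at a nominal 30 Hz, hand at 60 Hz (assumed rates).
t_head = np.linspace(0.0, 1.0, 30)
t_hand = np.linspace(0.0, 1.0, 60)
head = np.random.default_rng(0).standard_normal((30, 6))
hand = np.random.default_rng(1).standard_normal((60, 6))

# Resample both onto a shared 50-sample clock.
t_common = np.linspace(0.0, 1.0, 50)
head_sync = resample(t_head, head, t_common)
hand_sync = resample(t_hand, hand, t_common)
print(head_sync.shape, hand_sync.shape)  # (50, 6) (50, 6)
```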



World's first demonstration of a robot able to tie shoelaces & hang t-shirts... 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀𝗹𝘆.

ALOHA Unleashed 🌋: A Simple Recipe for Robot Dexterity

They trained a diffusion policy at scale: 26,000 demonstrations over 5 tasks on the ALOHA 2 robot! Recent research shows how robots can learn new skills by copying humans, especially in difficult tasks using both hands.

✅ Large-scale data collection on ALOHA 2 helps robots learn complex tasks.
✅ Robots can be taught using a Diffusion Policy model, which makes learning easier and faster.
✅ This system works for tasks like hanging a shirt, tying shoelaces, and stacking kitchen items.

Robots are getting better at learning tough tasks, just by watching and copying humans!

Paper: lnkd.in/dQbD5Kvn
Project: lnkd.in/dEJsj_cD

—
Weekly robotics and AI insights. Subscribe free: scalingdeep.tech
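For intuition on what "trained a diffusion policy" means at inference time: actions are sampled by starting from Gaussian noise and iteratively denoising a short action chunk conditioned on the observation. The sketch below shows a DDPM-style reverse loop with a toy stand-in for the learned denoiser; the horizon, action dimension, and noise schedule are illustrative assumptions, not the ALOHA Unleashed configuration.

```python
import numpy as np

HORIZON, ACTION_DIM, STEPS = 16, 7, 50
betas = np.linspace(1e-4, 0.02, STEPS)   # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(actions, obs, t):
    """Stand-in for the trained noise-prediction network eps_theta.
    A real Diffusion Policy uses a learned U-Net or transformer here."""
    return 0.1 * actions + 0.01 * obs.sum()  # NOT a trained model

def sample_actions(obs, rng):
    """Reverse diffusion: start from pure noise, denoise step by step."""
    a = rng.standard_normal((HORIZON, ACTION_DIM))
    for t in reversed(range(STEPS)):
        eps = toy_denoiser(a, obs, t)
        # DDPM reverse-step posterior mean.
        a = (a - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # inject noise on all but the final step
            a += np.sqrt(betas[t]) * rng.standard_normal(a.shape)
    return a

rng = np.random.default_rng(0)
actions = sample_actions(np.ones(10), rng)
print(actions.shape)  # (16, 7)
```

The output is a chunk of future joint-space actions; the robot executes a few of them, observes again, and re-samples.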








🚀 Franka Research 3 now integrates with GELLO – a ROS 2-based framework for real-time teleoperation! Gripper control (Franka Hand & Robotiq) for HRI, prototyping & more. Get started: franka-community.de/t/new-release-… GitHub: github.com/wuphilipp/gell… #FrankaFR3 #ROS2 #Teleoperation
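GELLO's core idea is joint-space mirroring: a low-cost, kinematically equivalent leader arm is moved by hand, and its joint angles are streamed to the follower robot. A minimal sketch of such a loop is below, with hardware I/O mocked, and joint limits, filter gain, and control rate as illustrative assumptions rather than Franka specifications or GELLO's actual code.

```python
import math

JOINT_LIMITS = [(-2.8, 2.8)] * 7  # assumed symmetric limits, rad

def read_leader_joints(t):
    """Mock encoder read from the hand-held leader arm."""
    return [0.5 * math.sin(t + i) for i in range(7)]

def clamp_to_limits(q):
    """Clip each commanded angle into the follower's joint limits."""
    return [min(max(qi, lo), hi) for qi, (lo, hi) in zip(q, JOINT_LIMITS)]

def teleop_step(t, q_prev=None, alpha=0.2):
    """One control tick: read leader, clamp, low-pass filter the command
    so sensor jitter does not reach the follower directly."""
    q_leader = clamp_to_limits(read_leader_joints(t))
    if q_prev is None:
        return q_leader
    return [(1 - alpha) * p + alpha * q for p, q in zip(q_prev, q_leader)]

# 100 ticks of a nominal 100 Hz loop; a real system would publish q as a
# joint command (e.g. over ROS 2) each tick instead of just keeping it.
q = None
for step in range(100):
    q = teleop_step(step * 0.01, q_prev=q)
print(len(q))  # 7
```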




When will robots help us with our household chores? TidyBot++ brings us closer to that future. Our new open-source mobile manipulator makes it more accessible and practical to do robot learning research outside the lab, in real homes!



Vision-language models perform diverse tasks via in-context learning. Time for robots to do the same! Introducing In-Context Robot Transformer (ICRT): a robot policy that learns new tasks by prompting with robot trajectories, without any fine-tuning. icrt.dev [1/N]
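Mechanically, "prompting with robot trajectories" means flattening a few demonstration (observation, action) pairs into one token sequence, appending the current observation, and letting the model predict the next action with frozen weights. The sketch below shows that prompt assembly; the observation/action dimensions, the token layout, and the averaging "policy" are stand-in assumptions, not ICRT's architecture.

```python
import numpy as np

OBS_DIM, ACT_DIM = 8, 4  # assumed dimensions

def flatten_trajectory(traj):
    """Interleave (obs, action) pairs into one token per timestep."""
    return [np.concatenate([obs, act]) for obs, act in traj]

def build_prompt(demos, current_obs):
    """Prompt = all demo tokens ++ current obs (action slot zeroed,
    to be filled in by the model)."""
    tokens = []
    for traj in demos:
        tokens.extend(flatten_trajectory(traj))
    tokens.append(np.concatenate([current_obs, np.zeros(ACT_DIM)]))
    return np.stack(tokens)

def toy_policy(prompt):
    """Stand-in for the frozen transformer: just averages the demo
    actions in the prompt (NOT the real model)."""
    demo_actions = prompt[:-1, OBS_DIM:]
    return demo_actions.mean(axis=0)

rng = np.random.default_rng(0)
demo = [(rng.standard_normal(OBS_DIM), rng.standard_normal(ACT_DIM))
        for _ in range(5)]
prompt = build_prompt([demo], rng.standard_normal(OBS_DIM))
action = toy_policy(prompt)
print(action.shape)  # (4,)
```

Swapping the demonstrations in the prompt changes the task, with no gradient updates, which is the "without any fine-tuning" claim in the post.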
