

Ryan Hoque
@ryan_hoque
Robotics Research @Meta. Ex-Apple, PhD @berkeley_ai


A year ago, I took a big bet and shifted my research to world models. We started with navigation, but the vision was broader: simulate any interaction with the environment, including fine-grained manipulation. Today we introduce DexWM, a world model for dexterous manipulation. Trained on 900+ hours of human and robot video, DexWM lets us imagine, plan, and execute dexterous actions on a real robot.

@_amirbar Year-long research bets are where breakthroughs hide, but also where most teams run out of runway. What was the signal that told you manipulation was ready to move from theory to implementation?

Very happy that EgoDex received the Best Paper Award at the 1st EgoAct workshop at #RSS2025! Huge thanks to the organizing committee @SnehalJauhri @GeorgiaChal @GalassoFab10 @danfei_xu @YuXiang_IRVL for putting on this forward-looking workshop. Also kudos to my colleagues @ryan_hoque, David Yoon, Mouli Sivapurapu, and @jian_zhang_!

Continuing work on the EgoDex dataset: I ported the entire test set to @rerundotio and created a @Gradio app to view it! Links below VVV This makes it straightforward to explore each episode of the (test) dataset and better understand how the hand-tracking and SLAM systems performed. Sadly, I had to re-encode the videos to AV1, which took a ton of time (nearly 2 hours of wall time for just the test dataset). Next up is taking this representative dataset and making it amenable to training. I'll start with something easy, such as pose estimation, since it's what I'm most familiar with, but the goal is an RRD <-> WebDataset standard.








Imitation learning has a data scarcity problem. Introducing EgoDex from Apple, the largest and most diverse dataset of dexterous human manipulation to date — 829 hours of egocentric video + paired 3D hand poses across 194 tasks. Now on arXiv: arxiv.org/abs/2505.11709 (1/4)

Congratulations Dr. Allen Ren @allenzren! What an incredible honor it's been to have you in our lab over the past 5.5 years, and to learn from you. Very excited to follow your next steps in bringing general-purpose AI into the physical world!


ARC-AGI scores for past five years of OpenAI models (updated w/ release dates)


🚨 New research from my team at Apple - real-time augmented reality robot feedback with just your hands + Vision Pro! Paper: arxiv.org/abs/2412.10631 Short thread below VVV



