Michael Cho - Rbt/Acc

2.6K posts

@micoolcho

I ❤️ robots, cheap hardware, steam engines, XGBoost, Liverpool FC & SG 🇸🇬 | Plane crash survivor | Building @BitRobotNetwork @frodobots

Singapore · Joined May 2010
2.5K Following · 5.6K Followers
Shenyuan Gao (@ShenyuanGao)
😊 Finally become Dr. Gao! I’m so grateful to everyone who has helped, guided, and encouraged me in this wonderful journey, and I hope we can keep bringing something new to the community :)
15 replies · 4 reposts · 105 likes · 10.2K views
Ilia (@IliaLarchenko)
I just won the online round of the LeHome Challenge 2026 🥇 by @LightwheelAI The task was to train a bimanual SO101 robot to fold four different types of garments in simulation. It was a great opportunity to experiment with RL + VLA combinations and learn a lot along the way. I implemented the whole distributed RL pipeline from scratch and experimented with multiple existing approaches and new ideas across model architecture, reward design, RL logic, data collection, and more. The offline round is coming soon during ICRA 2026 in Vienna, so I will keep most of the technical details private for now.
15 replies · 13 reposts · 118 likes · 8.2K views
Tairan He (@TairanHe99)
some news: I’ve joined OpenAI. After wrapping up my PhD in Robotics, I’m excited to keep working toward AGI in the physical world. exciting journey ahead :)
229 replies · 55 reposts · 2.3K likes · 202.4K views
Keerthana Gopalakrishnan (@keerthanpg)
Rode a Waymo in SF today and a human dialed in suddenly over rider support to confirm that I am over 18. TIL you could get carded in an AV.
4 replies · 0 reposts · 81 likes · 9.8K views
Jim Fan (@DrJimFan)
I promise this will be the best 20 min you spend today! Robotics: Endgame, the sequel to last year's Sequoia AI Ascent talk, "Physical Turing Test". I laid out the roadmap for solving Physical AGI as a simple parallel to the LLM success story. Be a good scientist, copy homework ;) And stay till the end, more easter eggs and predictions for your polymarket!

00:30 DGX-1 origin story at OpenAI, I was there in 2016 signing with Jensen and Elon. Heading to the Computer History Museum!
01:42 The Great Parallel
03:31 Robotics, the Endgame
03:39 Why VLAs fall short
04:32 Video world models as the 2nd pretraining paradigm
06:09 World Action Models (WAM)
07:46 Strategies for robot data collection and the FSD equivalent of a physical data flywheel for robot manipulation
11:06 EgoScale and the Dexterity Scaling Law we discovered recently
14:00 Physical RL: bridging the last mile
15:39 DreamDojo: an end-to-end neural physics engine for scaling RL in silico
17:00 Civilizational Technology Tree and my predictions for the near future. Spoiler: it's closer than you think.

Thanks to my friends at Sequoia for inviting me back to AI Ascent this year! I had a blast! Last year's talk is attached in the thread if you missed it.
134 replies · 516 reposts · 3.2K likes · 464.6K views
Michael Cho - Rbt/Acc (@micoolcho)
Expecting great things to come from this lab in Singapore!
Jiafei Duan (@DJiafei)

Launching my research group, MAGIC (Manipulation and General Intelligence Control) Lab @NUSComputing, Singapore! We focus on building the next generation of human-centric models for robotic manipulation — deployable safely, reliably, and easily in the real world. Our research spans MLLM reasoning, 3D vision, robot learning, simulation, dexterous manipulation, and cross-embodiment learning. Interested in joining? Sign up here and I'll send a reminder email: forms.gle/oJPLR2pLTt8kLC…

0 replies · 2 reposts · 16 likes · 3.9K views
Michael Cho - Rbt/Acc (@micoolcho)
Great survey on the use of world models in robotics!
Jianfei Yang (@Jianfei_AI)

One of the most exciting shifts in robotics right now is that robots are learning not only from data, but also from "imagined" futures. ✨ World models are making this possible, and the field is moving incredibly fast.

World models, predictive representations of how environments evolve under actions, are quickly becoming one of the central building blocks of modern robotics. They allow robots not only to act, but also to imagine, predict, plan, simulate, and evaluate future outcomes before taking actions in the real world.

What makes this field especially exciting is how rapidly it is evolving. In just a short time, we have seen the rise of foundation-scale robotic video generation, controllable simulation, learned physics, and world-guided robot policies. But at the same time, the literature has become highly fragmented across architectures, paradigms, and embodied applications.

To help the community keep up, our MARS lab organized and led a comprehensive survey together with an amazing group of researchers, including @HaoranGeng2, @ZeYanjie, @pabbeel, @JitendraMalikCV, @jiajunwu_cs, @du_yilun, @liuzhuang1234, @mapo1, @philiptorr, @oier_mees, and Tatsuya Harada, across @UCBerkeley, @Stanford, @Harvard, @Princeton, @ETH, @UniofOxford, @UTokyo_News, and @MSFTResearch.

The survey reviews how world models are used for robot policy learning, planning, reinforcement learning, simulation, navigation, autonomous driving, and large-scale embodied video generation, while also summarizing datasets, benchmarks, evaluation protocols, and future research directions.

📖 "World Model for Robot Learning: A Comprehensive Survey"
Paper: arxiv.org/abs/2605.00080
Project: ntumars.github.io/wm-robot-surve…
Updated Github: github.com/NTUMARS/Awesom…

We will also continuously maintain the repository to keep track of newly emerging papers, benchmarks, and resources for the community.

#EmbodiedAI #RobotLearning #WorldModel #PhysicalAI #Robotics #FoundationModels

1 reply · 0 reposts · 12 likes · 3.4K views
Tongzhou Mu 🤖🦾🦿 (@tongzhou_mu)
Loved chatting with @micoolcho, @chris_j_paxton, and @DJiafei! Dive in if you’d like more details on Rhoda’s DVA ✨
RoboPapers (@RoboPapers)

Training robot foundation models faces two key hurdles: how to get enough data to train an effective model, and how to make sure that new skills can be acquired quickly. The team at @RhodaAI believes that the answer is training Direct Video Action models from web data. Web data is plentiful, to the point where Rhoda can train their base model on hundreds of years of video data. And then, with the addition of robot data, they can quickly adapt it to new tasks with as little as 20 hours of in-domain data, performing complex, multi-step manipulation tasks with their purpose-built video foundation model. @tongzhou_mu @ericryanchan and @changanvr joined us to talk more about their approach. Watch Episode #79 of RoboPapers, with @micoolcho, @chris_j_paxton, and @DJiafei, to learn more!

2 replies · 3 reposts · 31 likes · 4.8K views
Michael Cho - Rbt/Acc (@micoolcho)
First time I've come across a scale of "300+ years" of pre-training data in robotics. The @RhodaAI team is definitely onto something. Am re-watching this episode myself! Thanks @tongzhou_mu @ericryanchan @changanvr for sharing!
RoboPapers (@RoboPapers)

Training robot foundation models faces two key hurdles: how to get enough data to train an effective model, and how to make sure that new skills can be acquired quickly. The team at @RhodaAI believes that the answer is training Direct Video Action models from web data. Web data is plentiful, to the point where Rhoda can train their base model on hundreds of years of video data. And then, with the addition of robot data, they can quickly adapt it to new tasks with as little as 20 hours of in-domain data, performing complex, multi-step manipulation tasks with their purpose-built video foundation model. @tongzhou_mu @ericryanchan and @changanvr joined us to talk more about their approach. Watch Episode #79 of RoboPapers, with @micoolcho, @chris_j_paxton, and @DJiafei, to learn more!

0 replies · 2 reposts · 18 likes · 3.4K views
Michael Cho - Rbt/Acc (@micoolcho)
@zhou_xian_ Really incredible results! Would love to have you share more on @RoboPapers if you're open to it; my DMs are open. Big congrats again!
0 replies · 0 reposts · 7 likes · 526 views
Genesis AI (@gs_ai_)
We are back. After one year of quiet building. Introducing GENE-26.5, our first robotic brain that takes a major step toward human-level capability.

For years, robotics has struggled to learn from the world's largest and most valuable data source: humans. Solving this means rethinking the whole stack from the ground up:
- A robotics-native foundation model.
- A 1:1 human-like robotic hand.
- A noninvasive data collection glove for motion, force, and touch.
- A simulator that turns weeks of experiments into minutes.

GENE-26.5 is trained across language, vision, proprioception, tactile, and action. We designed a set of tasks to test how far we can go with this new paradigm. Fully autonomous, 1x speed, one model, same weights. (Enjoy with sound on.)

We are approaching the endgame for robotics. And this is just the beginning.
280 replies · 1.1K reposts · 5.8K likes · 2.6M views
Michael Cho - Rbt/Acc (@micoolcho)
@chris_j_paxton @xiao_ted Exactly. It's not done until it's done. And the cost of not getting things done in the physical world carries a different meaning than in digital worlds (i.e., you can't just respawn).
0 replies · 0 reposts · 3 likes · 295 views
Chris Paxton (@chris_j_paxton)
I really wish I had been there for this one with @xiao_ted -- full of insights on the insane trajectory robotics has been on over the last ~decade. What really stood out to me: the only reason something succeeds is if everything goes right. Behavior cloning, for example, seemed stuck at a 60-70% success rate on key tasks until Ted's team rewrote their learning stack, at which point it hit 95-99%+ success rates. For most of those eight years, something was wrong: the stack wasn't quite right, the learning algorithms were wrong, the data didn't exist, and the hardware and operations weren't mature enough. But they kept working on these problems, over and over, until they finally arrived at an amazing breakthrough. There are probably lots of these little things that still need to be solved.
RoboPapers (@RoboPapers)

Robotics has changed dramatically over the last eight years. @xiao_ted has been involved in the cutting edge of robot learning through this period, spending those eight years at Google Brain/Google DeepMind. And he's identified three eras of robot learning:
- The Era of Existence Proofs - trying different methods like QT-Opt, on-robot RL
- The Era of Foundation Models - transitioning to data collection and clean objectives (i.e. supervised learning)
- The Era of Scaling - orders of magnitude more data and larger models, enabling reasoning, long-horizon actions, and cross-embodiment transfer

Watch Episode 78 of RoboPapers, with @micoolcho and @DJiafei, to learn more!

4 replies · 19 reposts · 159 likes · 33K views
Michael Cho - Rbt/Acc (@micoolcho)
Been looking forward to having @xiao_ted on the pod for a long time; awesome to hear his takes from his 8+ years at Google/DeepMind. Thanks for making the time for us, Ted 🙏
RoboPapers (@RoboPapers)

Robotics has changed dramatically over the last eight years. @xiao_ted has been involved in the cutting edge of robot learning through this period, spending those eight years at Google Brain/Google DeepMind. And he's identified three eras of robot learning:
- The Era of Existence Proofs - trying different methods like QT-Opt, on-robot RL
- The Era of Foundation Models - transitioning to data collection and clean objectives (i.e. supervised learning)
- The Era of Scaling - orders of magnitude more data and larger models, enabling reasoning, long-horizon actions, and cross-embodiment transfer

Watch Episode 78 of RoboPapers, with @micoolcho and @DJiafei, to learn more!

3 replies · 4 reposts · 19 likes · 2.3K views