Yishu Li
@LisaYishu

MSR @CMU_Robotics, Prev CS Undergrad @Tsinghua_Uni
Pittsburgh, PA · Joined October 2015
248 Following · 158 Followers
43 posts

Pinned Tweet
Yishu Li @LisaYishu
A closed door looks the same whether it pushes or pulls. Two identical-looking boxes might have different centers of mass. How should robots act when a single visual observation isn't enough? Introducing HAVE 🤖, our method that reasons about past interactions online! #CoRL2025
[Image]
1 reply · 19 reposts · 42 likes · 7.5K views
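A minimal sketch of the general recipe the tweet hints at: keep a belief over a hidden physical property (does the door push or pull?) and update it online from interaction outcomes. The prior and success model below are illustrative assumptions, not HAVE's actual method.

```python
# Toy belief update over a hidden property, illustrating acting-from-history.
hypotheses = {"push": 0.5, "pull": 0.5}  # assumed prior over the hidden property

def update(belief, action, succeeded, p_success_if_correct=0.9):
    """Bayes update after trying an action and observing success/failure."""
    posterior = {}
    for h, p in belief.items():
        likely = p_success_if_correct if h == action else 1 - p_success_if_correct
        posterior[h] = p * (likely if succeeded else 1 - likely)
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

hypotheses = update(hypotheses, action="push", succeeded=False)
best_next_action = max(hypotheses, key=hypotheses.get)  # now favors "pull"
```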
Yishu Li retweeted
Yixuan Huang @YixuanHuang13
Meet KinDER: a stress test for robot physical reasoning. All 13 methods failed 😈
🌎 25 environments
♾️ Infinite tasks
🏋️ Gymnasium API
⚒️ Over 20 parameterized skills
🪧 Human demonstrations
📊 13 baselines (planning and learning)
From @Princeton @CMU_Robotics @ICatGT @CambridgeMLG @nvidia @MIT_CSAIL 🧵 1/n
1 reply · 25 reposts · 130 likes · 31.2K views
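Since KinDER exposes a Gymnasium API, the interaction loop would follow the standard pattern sketched below. The environment id is a hypothetical placeholder; the real registry names come from the benchmark itself.

```python
# Standard Gymnasium loop; any env registered by the benchmark is driven this way.
import gymnasium as gym

env = gym.make("KinDER/Push-v0")  # hypothetical id, for illustration only
obs, info = env.reset(seed=0)
for _ in range(100):
    action = env.action_space.sample()  # placeholder for a planner or policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```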
Haoquan Fang @hq_fang
I’m excited to share that I’ve decided to join @Stanford @StanfordSVL as a CS PhD student, advised by @drfeifei! I feel very fortunate for all the opportunities I’ve had so far, and I’m genuinely thrilled for this next chapter. I’m eager to dive deeper into robot learning in such an inspiring environment, and to continue developing as a researcher alongside people I deeply admire.

I want to sincerely thank @RanjayKrishna, Ali Farhadi, @JenqH, @DJiafei, and everyone who has guided, encouraged, and believed in me along the way. I’m also especially grateful to @uwcse and @allen_ai for providing such a wonderful community and so many meaningful opportunities. I also truly appreciate the time and support from @drfeifei, @jiajunwu_cs, @RuohanZhang76, @ManlingLi_, @wenlong_huang, @YunfanJiang, @wensi_ai, and many others throughout both my application and decision process.

I’m really looking forward to learning from and working with you all at Stanford! Stay tuned for more exciting updates!
31 replies · 10 reposts · 400 likes · 31.7K views
Yishu Li @LisaYishu
Great work, Yuanchen!
Yuanchen Ju @ju_yuanchen

🤖 For embodied agents in household environments, we tackle two fundamental questions:
1️⃣ What is the optimal scene representation?
2️⃣ Can a VLM leveraging this representation actually improve spatial understanding and task planning?

Introducing MomaGraph: State-Aware Unified Scene Graphs with Vision-Language Models for Embodied Task Planning.
👉 hybridrobotics.github.io/MomaGraph/
🔗 arxiv.org/abs/2512.16909

Key ideas: MomaGraph jointly models spatial AND functional relationships with part-level interactive nodes. MomaGraph is designed to be:
✅ Task-Relevant: filters visual noise to keep only what matters for the instruction.
✅ Dynamic & State-Aware: MomaGraph adapts 🔄, explicitly modeling object states and dynamic changes in the environment.

We built MomaGraph to bridge the gap between the spatial VLM and robotics communities. 🌉 Our hope is that this work serves as a foundation for the next generation of intelligent, adaptive embodied agents. 🦾✨ Questions and feedback welcome. 🚀
#Robotics #EmbodiedAI #CV #LLM #SceneGraph

1 reply · 2 reposts · 5 likes · 1.4K views
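As a rough illustration of what a state-aware, part-level scene graph could look like in code, here is a minimal sketch. All class and field names below are assumptions for illustration, not MomaGraph's actual schema.

```python
# Hypothetical state-aware scene graph with part-level interactive nodes.
from dataclasses import dataclass, field

@dataclass
class PartNode:
    name: str                     # e.g. "fridge_door_handle"
    state: str                    # e.g. "closed", "open"
    affordances: list[str] = field(default_factory=list)  # e.g. ["pull", "grasp"]

@dataclass
class ObjectNode:
    name: str
    parts: list[PartNode] = field(default_factory=list)

@dataclass
class SceneGraph:
    objects: dict[str, ObjectNode] = field(default_factory=dict)
    # (subject, relation, object) triples mixing spatial and functional relations
    relations: list[tuple[str, str, str]] = field(default_factory=list)

    def update_state(self, obj: str, part: str, new_state: str) -> None:
        """Record a dynamic state change, e.g. after the robot opens a door."""
        for p in self.objects[obj].parts:
            if p.name == part:
                p.state = new_state
```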
Yishu Li retweeted
Li Yi @ericyi0124
Thrilled to share one of my favorite works this year: DexNDM! We bridge the Sim2Real gap for dexterous in-hand rotation, achieving a true "0-to-1" advancement. The key? DexNDM learns from biased, real-world data without needing any successful demonstrations.

Now a general-purpose dexterous hand can stably rotate large books, long rods, & complex objects around any axis, from any wrist pose. This powerful primitive enables complex, long-horizon tasks like teleoperated screwing and furniture assembly.

📄 Paper: arxiv.org/abs/2510.08556
🌐 Project Page: meowuu7.github.io/DexNDM/
5 replies · 50 reposts · 247 likes · 38.3K views
Yishu Li retweeted
Alexis Hao @hao_alexis
Introducing FMVP: a method that adapts to natural arm motions during robot-assisted dressing. Pre-trained on vision in sim, fine-tuned with limited real-world vision+force data, and tested in a 12-user, 264-trial study, FMVP is robust across garments and motions. #CoRL2025
1 reply · 15 reposts · 37 likes · 5.2K views
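The two-stage recipe FMVP describes (sim vision pretraining, then fine-tuning on limited real vision+force data) might be sketched as follows. Shapes, modules, and the synthetic data below are placeholder assumptions, not the paper's architecture.

```python
# Hedged sketch of sim-pretrain then real vision+force fine-tune.
import torch
import torch.nn as nn

vision_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
force_enc = nn.Sequential(nn.Linear(6, 32), nn.ReLU())  # 6-axis force/torque input
head = nn.Linear(256 + 32, 7)                           # placeholder action dim

def act(img, force):
    return head(torch.cat([vision_enc(img), force_enc(force)], dim=-1))

mse = nn.MSELoss()

# Stage 1: pretrain on plentiful simulated vision; force is unavailable, so zero it.
sim_img, sim_act = torch.randn(64, 3, 64, 64), torch.randn(64, 7)
opt = torch.optim.Adam([*vision_enc.parameters(), *head.parameters()], lr=1e-3)
opt.zero_grad(); mse(act(sim_img, torch.zeros(64, 6)), sim_act).backward(); opt.step()

# Stage 2: fine-tune everything on a small real vision+force set, at a lower LR
# so the sim-learned visual features are preserved.
real_img, real_force, real_act = torch.randn(8, 3, 64, 64), torch.randn(8, 6), torch.randn(8, 7)
opt = torch.optim.Adam([*vision_enc.parameters(), *force_enc.parameters(), *head.parameters()], lr=1e-4)
opt.zero_grad(); mse(act(real_img, real_force), real_act).backward(); opt.step()
```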
Yishu Li retweeted
Divyam Goel @divyamgo10
How do we discover a robot's failure modes before deploying it in the real world? Standard benchmarks often don't capture the full picture, leaving policies vulnerable to plausible variations in object shape. Thrilled that our work, "Geometric Red-Teaming for Robotic Manipulation," has been accepted as an oral presentation at #CoRL2025! We introduce a framework to automatically find these geometric blindspots. georedteam.github.io 🧵
3 replies · 14 reposts · 45 likes · 7.8K views
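The red-teaming loop the thread describes could be sketched as a search over plausible shape variants that break the policy. The vertex jitter and random rollout below are toy stand-ins for the paper's deformation model and simulated policy evaluation.

```python
# Toy geometric red-teaming loop: perturb shapes, keep the failure-inducing ones.
import numpy as np

rng = np.random.default_rng(0)

def perturb(vertices: np.ndarray, magnitude: float) -> np.ndarray:
    """Plausible shape variant: small random offsets of mesh vertices."""
    return vertices + rng.normal(scale=magnitude, size=vertices.shape)

def rollout_success(vertices: np.ndarray) -> bool:
    """Placeholder for running the policy on this object in simulation."""
    return rng.random() > 0.3

base = rng.random((100, 3))  # toy mesh: 100 vertices
blindspots = [v for v in (perturb(base, 0.05) for _ in range(50))
              if not rollout_success(v)]
print(f"found {len(blindspots)} failure-inducing shapes")
```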
Yishu Li @LisaYishu
We analyzed how the number of action proposals provided to the verifier affects performance. The method is sample-efficient: performance improves significantly when the verifier is given just 5 samples. This demonstrates the efficiency of using a verifier for action selection.
[Image]
1 reply · 0 reposts · 6 likes · 201 views
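A minimal sketch of the propose-then-verify selection this analysis refers to: sample a handful of candidate actions, score each with a learned verifier, and execute the argmax. Both functions below are hypothetical stand-ins; the point is that even N=5 proposals can already help substantially.

```python
# Propose-then-verify action selection with a small candidate budget.
import numpy as np

rng = np.random.default_rng(0)

def propose(n: int) -> np.ndarray:
    return rng.normal(size=(n, 7))          # n candidate 7-DoF actions (placeholder)

def verifier_score(action: np.ndarray) -> float:
    return float(-np.linalg.norm(action))   # placeholder learned scoring function

candidates = propose(5)                      # small N is already effective
best = max(candidates, key=verifier_score)   # verifier picks the action to run
```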
Yishu Li retweeted
Carlota Parés-Morlans @carlotapares
🔍 How can we build AI agents that reason about the physical world the way humans do (or better)? Excited to share Causal-PIK: Causality-based Physical Reasoning with a Physics-Informed Kernel, which will be presented next Thursday, July 17th, at ICML in Vancouver! 👇 (1/6)
16 replies · 33 reposts · 159 likes · 24.5K views
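As a loose illustration of what a physics-informed kernel can mean: a similarity measure computed over features derived from a physics model rather than over raw inputs. The feature map below is a made-up placeholder, not Causal-PIK's actual kernel.

```python
# RBF kernel over physics-derived features (toy illustration only).
import numpy as np

def physics_features(action: np.ndarray) -> np.ndarray:
    # e.g. quantities a simple forward model would predict, such as
    # contact location and imparted impulse (placeholders here)
    return np.array([action[0] * action[1], action[1] ** 2])

def k(a1: np.ndarray, a2: np.ndarray, lengthscale: float = 1.0) -> float:
    d = physics_features(a1) - physics_features(a2)
    return float(np.exp(-np.dot(d, d) / (2 * lengthscale ** 2)))
```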
Yishu Li retweeted
Yuanliang Ju @AveryJuuu0213
Thrilled to share our new paper SAFE🤖! A robust failure detector for Vision-Language-Action models! This was my first time working on real-world robot experiments, and many moments over the past few months have made me excited for my robotics journey😻😻
Qiao Gu @qiaogu1997

🚀 Excited to introduce SAFE, our work on multitask failure detection for Vision-Language-Action (VLA) models! 🔍 SAFE is a simple yet powerful detector that learns from VLAs’ semantic-rich internal feature space and outputs a scalar score indicating the likelihood of task failure.

Toronto, Ontario 🇨🇦
1 reply · 8 reposts · 16 likes · 3.4K views
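The detector described here (a scalar failure score read off a VLA's internal features) can be pictured as a lightweight probe on top of the frozen model. The feature dimension and probe architecture below are assumptions for illustration, not SAFE's actual detector.

```python
# Tiny probe mapping VLA internal features to a failure probability.
import torch
import torch.nn as nn

probe = nn.Sequential(nn.Linear(1024, 1), nn.Sigmoid())  # assumed feature dim

def failure_score(vla_hidden: torch.Tensor) -> float:
    """vla_hidden: internal features from the frozen VLA, shape (1024,)."""
    return probe(vla_hidden).item()  # near 1.0 means likely task failure

if failure_score(torch.randn(1024)) > 0.8:
    print("halt and ask for help")  # trigger a safety fallback
```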
Yishu Li retweeted
Yufei Wang @YufeiWang25
Introducing ArticuBot 🤖 at #RSS2025, in which we learn a single policy for manipulating diverse articulated objects across 3 robot embodiments in different labs, kitchens & lounges, achieved via large-scale simulation and hierarchical imitation learning. articubot.github.io 🧵
3 replies · 31 reposts · 89 likes · 6.9K views
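The hierarchical structure mentioned above (a high level proposing subgoals, a low level reaching them) can be sketched as two nested networks. Both modules below are toy placeholders, not ArticuBot's actual architecture.

```python
# Toy two-level policy: high level picks a subgoal, low level acts toward it.
import torch
import torch.nn as nn

high_level = nn.Linear(128, 7)      # obs features -> subgoal, e.g. a gripper pose
low_level = nn.Linear(128 + 7, 7)   # obs features + subgoal -> action

def policy(obs_feat: torch.Tensor) -> torch.Tensor:
    subgoal = high_level(obs_feat)                        # "where to go"
    return low_level(torch.cat([obs_feat, subgoal], -1))  # "how to get there"

action = policy(torch.randn(128))
```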