

Kallol Saha
@_ksaha
MSR Student @ CMU RI. I work on hybrid learning-and-planning methods for long-horizon tasks in robotics and beyond. Previously RA @ RRC, IIITH


Meet KinDER — a stress test for robot physical reasoning. All 13 methods failed 😈
🌎 25 environments
♾️ Infinite tasks
🏋️ Gymnasium API
⚒️ Over 20 parameterized skills
🪧 Human demonstrations
📊 13 baselines (planning and learning)
From @Princeton @CMU_Robotics @ICatGT @CambridgeMLG @nvidia @MIT_CSAIL 🧵 1/n
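Since the post advertises a Gymnasium API, here is a minimal sketch of the standard Gymnasium interaction loop using a toy stand-in environment. The environment, its observation layout, and the "skill" action here are all invented for illustration; KinDER's real environment ids, spaces, and parameterized skills are not shown in the post.

```python
# Toy stand-in for a Gymnasium-style environment: push a block along a line
# toward a goal position. Invented for illustration only; not KinDER's API.

class ToyTabletopEnv:
    """Minimal env following the Gymnasium reset/step conventions."""

    def __init__(self, goal=5):
        self.goal = goal
        self.pos = 0
        self.steps = 0

    def reset(self, seed=None):
        self.pos = 0
        self.steps = 0
        return self.pos, {}  # Gymnasium returns (observation, info)

    def step(self, action):
        # action in {-1, +1}: a stand-in for a parameterized skill call
        self.pos += action
        self.steps += 1
        terminated = self.pos == self.goal   # task solved
        truncated = self.steps >= 50         # time limit hit
        reward = 1.0 if terminated else 0.0
        return self.pos, reward, terminated, truncated, {}


env = ToyTabletopEnv()
obs, info = env.reset()
total_reward = 0.0
done = False
while not done:
    action = 1 if obs < env.goal else -1  # trivial scripted "policy"
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
```

The five-tuple returned by `step` (observation, reward, terminated, truncated, info) is the Gymnasium convention, which lets benchmark users plug in any policy or planner that speaks that interface.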


Long full shot. 2x speed, teleoperated, en route to autonomy. It was fun putting Eggie in a messy, cluttered scene. Clean rooms be boring.

Thought I heard you say I am cute?


Researchers at CMU’s Robotics Institute have developed a new system that helps robots operate effectively in cluttered, unpredictable environments like kitchens, classrooms and offices — a huge step toward making robots more capable in everyday settings. ri.cmu.edu/new-system-bri…

Incredible news. Neural MP has won the Best Student Paper award at IROS 2025!! Congratulations to @mihdalal & @Jiahui_Yang6709 for leading the project along with @mendonca_rl, youssef, @rsalakhu. Neural MP is a major step in making motion planning end-to-end, fast & reactive.

🚨Introducing SPOT: Search over Point Cloud Object Transformations. SPOT is a combined learning-and-planning approach that searches in the space of object transformations.
Website: planning-from-point-clouds.github.io
Paper: arxiv.org/abs/2509.04645
Code: github.com/kallol-saha/SP…
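To make "searching in the space of object transformations" concrete, here is a toy best-first search where states are object positions on a line and each candidate transformation translates one object by a unit step. The heuristic, discretization, and goal specification are invented; SPOT itself operates on point clouds with learned models, which this sketch does not attempt to reproduce.

```python
import heapq

# Toy illustration: plan a sequence of object transformations that brings
# each object to its goal position. Everything below is a simplification
# invented for illustration, not SPOT's actual algorithm.

GOAL = {"mug": 4, "plate": -2}   # hypothetical goal positions
STEP_CHOICES = [-1, 1]           # candidate unit translations


def cost_to_go(state):
    # Heuristic: total remaining displacement across all objects.
    return sum(abs(state[obj] - GOAL[obj]) for obj in state)


def successors(state):
    # Apply each candidate transformation to each object in turn.
    for obj in state:
        for delta in STEP_CHOICES:
            nxt = dict(state)
            nxt[obj] += delta
            yield nxt


def search(start, max_expansions=1000):
    # Greedy best-first search ordered by the heuristic.
    frontier = [(cost_to_go(start), 0, start)]
    seen = set()
    counter = 0
    while frontier and counter < max_expansions:
        h, _, state = heapq.heappop(frontier)
        key = tuple(sorted(state.items()))
        if key in seen:
            continue
        seen.add(key)
        if h == 0:
            return state  # all objects at their goals
        for nxt in successors(state):
            counter += 1
            heapq.heappush(frontier, (cost_to_go(nxt), counter, nxt))
    return None


result = search({"mug": 0, "plate": 0})
```

The key idea carried over from the post is the search space itself: candidate actions are transformations applied to objects, scored by how much closer they bring the scene to the goal.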

Introducing FMVP: a method that adapts to natural arm motions during robot-assisted dressing. Pre-trained on vision in sim, fine-tuned with limited real-world vision+force data, and tested in a 12-user, 264-trial study, FMVP is robust across garments and motions. #CoRL2025


How do we discover a robot's failure modes before deploying it in the real world? Standard benchmarks often don't capture the full picture, leaving policies vulnerable to plausible variations in object shape. Thrilled that our work, "Geometric Red-Teaming for Robotic Manipulation," has been accepted as an oral presentation at #CoRL2025! We introduce a framework to automatically find these geometric blindspots. georedteam.github.io 🧵

A closed door looks the same whether it pushes or pulls. Two identical-looking boxes might have different centers of mass. How should robots act when a single visual observation isn't enough? Introducing HAVE 🤖, our method that reasons about past interactions online! #CoRL2025

