

Leena Mathur

@lmathur_
PhD student @SCSatCMU. I study multimodal social intelligence in AI systems across embodiments. prev research @GoogleDeepMind, @RobustAI, @USC, @Caltech, @EPFL




🤖 What would LMArena for robotics look like? Introducing RobotArena ∞. We turn real videos into simulated environments and evaluate robot policies at scale using VLM scoring + human preferences. A scalable benchmark for robot generalists. 🔗 robotarenainf.github.io Details 🧵👇




Origami Robotics is building high-DOF robotic hands with in-joint motors and a co-designed data-collection glove to eliminate the embodiment gap by collecting high-quality, real-world data at scale. Congrats on the launch, @DanielXieee and @QuanliangX! ycombinator.com/launches/Pcl-o…


Midtraining is a new part of many training pipelines, but when does it help, and can it backfire? 🤔 In our new preprint, we use controlled experiments to pin this down. TL;DR: midtraining helps the most when it "bridges" pretraining and posttraining, and it mitigates forgetting after posttraining. Timing is also very important. 🧵




Why does manipulation lag so far behind locomotion? New post on one piece we don't talk about enough: the gearbox.

The Gap: You've probably seen those dancing humanoid robots from Chinese New Year. Locomotion isn't entirely solved, but it's clearly on a trajectory. We haven't seen anything close for manipulation. 𝗪𝗵𝘆?

When sim-to-real transfer fails, the instinct is to blame the algorithm: train bigger networks, crank up domain randomization. Those approaches have made real progress; we don't deny that. But we started wondering: are we treating the symptom or the disease?

The Hardware Bottleneck: Fingers are too small for powerful motors, so most hands use massive gearboxes (200:1, 288:1) to get enough torque. But those gearboxes break everything manipulation needs:
• Stiction and backlash are complex to simulate. Policies trained on smooth physics hallucinate when they hit that reality.
• Reflected inertia scales as N². At large gear ratios, the finger hits with sledgehammer momentum.
• Friction blocks force information. The hand becomes blind.
And gearboxes are the first thing to break.

At Origami, we cut the gear ratio from 288:1 to 15:1 using axial flux motors and thermal optimization. The transmission becomes more transparent: backdrivable, low friction, and forces propagate to motor current. Early signs are encouraging; we're still running quantitative benchmarks.

Why Interactive? I love how science centers use interactive devices to explain complex ideas. I want to borrow this concept and help people understand the hard problems in robotics better, visually. The post has demos where you can toggle friction, slide gear ratios, and watch the sim-to-real gap widen in real time.

What's inside:
• Interactive demos (friction curves, N² scaling, contact patterns)
• Comparison table: 14 robot hands by sim-to-real gap and force transparency
• The math behind why low-ratio matters

Read it here: origami-robotics.com/blog/dexterity… We're not claiming we've solved dexterity. The deadlock has many pieces. But we think this one's foundational. Curious what you think.
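The N² claim in the post is easy to verify numerically: the motor's rotor inertia seen at the output joint is multiplied by the square of the gear ratio, so dropping from 288:1 to 15:1 shrinks reflected inertia by (288/15)² ≈ 369×. A minimal sketch, with a hypothetical rotor inertia value (not Origami's actual motor specs):

```python
def reflected_inertia(j_motor_kgm2: float, gear_ratio: float) -> float:
    """Rotor inertia reflected to the output joint: J_out = J_motor * N^2."""
    return j_motor_kgm2 * gear_ratio ** 2

J_MOTOR = 1e-6  # kg*m^2 — hypothetical small-motor rotor inertia

high = reflected_inertia(J_MOTOR, 288)  # conventional high-ratio finger
low = reflected_inertia(J_MOTOR, 15)    # low-ratio, quasi-direct-drive finger

print(f"288:1 reflected inertia: {high:.2e} kg*m^2")
print(f" 15:1 reflected inertia: {low:.2e} kg*m^2")
print(f"reduction: {high / low:.0f}x")  # (288/15)^2 ≈ 369x
```

The same quadratic factor applies to impact momentum at a given fingertip speed, which is why high-ratio fingers "hit with sledgehammer momentum" while low-ratio ones can make soft contact.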













🎉 Announcing the first Interactive Physical AI Workshop at #CVPR2026. Join us for a half-day workshop exploring AI systems that see, communicate, and act safely in our shared physical world — including robots, environment-aware avatars (e.g., AR telepresence), and on-device multimodal agents. ✅ Cross-disciplinary topics spanning vision, robotics, and multimodal AI ✅ Featuring invited speakers (incl. Yaser Sheikh @subail), poster sessions, and spotlight talks 📅 Paper deadline is Feb 28: openreview.net/group?id=thecv… More info: research.nvidia.com/labs/amri/proj… 🙌 Organized by: @swookpark, @amritamaz, @mct1224, @lmathur_, @luminohope and @shalinidemello 💡 Sponsored by NVIDIA. We look forward to seeing you at CVPR.