Leena Mathur

1K posts


@lmathur_

PhD student @SCSatCMU. I study multimodal social intelligence in AI systems across embodiments. prev research @GoogleDeepMind, @RobustAI, @USC, @Caltech, @EPFL

Bay Area and Pittsburgh 🇺🇸 · Joined August 2022
1.5K Following · 1.5K Followers
Leena Mathur retweeted
Jesse Thomason @_jessethomason_
For prospective PhD students, I plan to hire in this coming application cycle (Fall 2026) with a focus on robotics, speech, and signed languages.
Leena Mathur retweeted
Shuyan Zhou @shuyanzh36
In 2023, WebArena took 7 grad students more than 6 months to build just 5 environments with 812 variable browser-use tasks. Now, it takes under 10 hours and less than $100 per environment, with easy support for parallel generation. Excited to introduce WebArena-Infinity: a scalable approach for automatically generating high-authenticity, high-complexity browser environments with verifiable tasks suitable for RL training and benchmarking. Even strong open-source models that already achieve 60%+ success rates on WebArena and OSWorld complete fewer than 50% of tasks here. Project page: webarena.dev/webarena-infin… Repo: github.com/web-arena-x/we… 🧵 (1/n)
Leena Mathur retweeted
Yash Jangir @off_jangir
🤖 What would LMArena for robotics look like? Introducing RobotArena ∞ We turn real videos into simulated environments and evaluate robot policies at scale using VLM scoring + human preferences A scalable benchmark for robot generalists 🔗 robotarenainf.github.io Details 🧵👇
Leena Mathur retweeted
Seungwook Han @seungwookh
Can language models learn useful priors without ever seeing language? We pre-pre-train transformers on neural cellular automata — fully synthetic, zero language. This improves language modeling by up to 6%, speeds up convergence by 40%, and strengthens downstream reasoning. Surprisingly, it even beats pre-pre-training on natural text! Blog: hanseungwook.github.io/blog/nca-pre-p… (1/n)
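To make "pre-pre-training on neural cellular automata" a bit more concrete, here is a minimal, hypothetical sketch of how fully synthetic NCA rollouts could be turned into token streams for next-token pretraining. This is not the authors' pipeline; the update rule, sizes, and tokenization below are illustrative assumptions.

```python
# Hypothetical sketch: synthetic "pre-pre-training" data from a tiny 1-D cellular
# automaton with a random neural update rule. Not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

NUM_CELLS, STATE_DIM, STEPS = 64, 8, 32
W = rng.normal(0, 0.3, size=(3 * STATE_DIM, STATE_DIM))  # untrained random "rule"

def nca_rollout(state):
    """Roll the 1-D NCA forward, recording every intermediate grid."""
    history = [state]
    for _ in range(STEPS):
        left = np.roll(state, 1, axis=0)
        right = np.roll(state, -1, axis=0)
        neigh = np.concatenate([left, state, right], axis=1)  # (cells, 3*dim)
        state = np.tanh(neigh @ W)                            # (cells, dim)
        history.append(state)
    return np.stack(history)                                  # (steps+1, cells, dim)

def to_tokens(history, vocab=256):
    """Discretize NCA activations into a token stream a transformer could model."""
    flat = history.reshape(-1, STATE_DIM).mean(axis=1)        # one scalar per cell/step
    bins = np.linspace(flat.min(), flat.max(), vocab - 1)
    return np.digitize(flat, bins)                            # ints in [0, vocab)

init = rng.normal(size=(NUM_CELLS, STATE_DIM))
tokens = to_tokens(nca_rollout(init))
print(tokens.shape)  # (2112,): a zero-language sequence usable for next-token training
```

Sequences like these would be consumed by an ordinary next-token objective before language pretraining begins; the interesting claim in the post is that this synthetic stage transfers to language modeling and reasoning.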
Leena Mathur retweeted
Quanting Xie @DanielXieee
The full video of hand and glove co-design in action. For the hand 🖐️ we try to make sure it can repeat what a human hand can do; hyperextension in the index finger is a great example. Most robotic hands lack that, and it is very important for grasping objects securely. Let's try to close the embodiment gap. For the glove 🧤 we try to make sure it's comfortable to wear, while also having the same kinematics and contacts as the robot hand.
Y Combinator @ycombinator
(quoted tweet; retweeted in full below)
Leena Mathur retweeted
Y Combinator @ycombinator
Origami Robotics is building high-DOF robotic hands with in-joint motors and a co-designed data-collection glove to eliminate the embodiment gap by collecting high-quality, real-world data at scale. Congrats on the launch, @DanielXieee and @QuanliangX! ycombinator.com/launches/Pcl-o…
Leena Mathur retweeted
Yun (Catherine) Cheng @chengyun01
Humans anchor on the first piece of information they receive. Do reasoning models escape this bias? We uncover Contextual Drag: errors in context bias subsequent reasoning toward similar mistakes. It persists even when the error has already been recognized during reasoning.
Leena Mathur retweeted
Sarah Catanzaro @sarahcat21
I strongly believe that mid-training will become even more popular/common as companies leverage their proprietary data to advance strong-enough OSS base models. I also suspect that training mixtures will matter a lot.
Emmy Liu @_emliu

Midtraining is a new part of many training pipelines, but when does it help and can it backfire? 🤔 In our new preprint, we use controlled experiments to pin this down. TL;DR: midtraining helps the most when it “bridges” pretraining and posttraining, and mitigates forgetting after posttraining. Timing is also very important. 🧵

Leena Mathur retweeted
Karl Pertsch @KarlPertsch
This one has been a long time coming: today we’re introducing MEM, an approach for giving VLAs short-term and long-term memory. Memory is such an obvious capability, but adding it isn’t easy (most VLAs today are memory-less). A short thread on challenges, solutions, and the new capabilities MEM unlocks for us.
Leena Mathur retweeted
Hal Daumé III @haldaume3
Come join @trails_ai as a postdoc at @UofMaryland (and work w folks at GW, MSU & Cornell) to conduct research and scholarship focused on approaches to AI that advance trust and trustworthiness with a great group of colleagues! 🌐 go.umd.edu/trails-postdoc… 🗓️ Summer/Fall 2026 start
Leena Mathur retweeted
Bernt Bornich @BerntBornich
These guys get it (equally true for the rest of the robot, for safety and the ability to learn through failure, not just the sim2real gap). Adding NEO's hands:
DOF: 22 (44 active tendons per hand, fully actuated)
Ratio: 8:1 (w/ tendons, 1X custom high-torque motors)
Sim2Real Gap: Low (10-15% friction, high stiffness)
Force Transparency: High (motor currents)
Reliability: High (3.5M cycles at nominal load)
Quanting Xie @DanielXieee
(quoted tweet; retweeted in full below)
Leena Mathur retweeted
Carnegie Mellon University @CarnegieMellon
CMU on Friday marked the official opening of the Robotics Innovation Center, a one-of-a-kind facility anchoring the university’s next chapter in developing a world-leading collaborative ecosystem for robotics, automation and #AI breakthroughs. Read more: cmu.is/RIC-opening
Leena Mathur retweeted
Carnegie Mellon University @CarnegieMellon
Building on a legacy of industry and spurring a new wave of innovation, today marks the official opening of the CMU Robotics Innovation Center. The center will empower CMU’s world-leading robotics ecosystem. 📸 Perkins Eastman. Photography by Andrew Rugge
Leena Mathur retweeted
Tinker @tinkerapi
To support open and collaborative science, we offer Tinker grants for researchers advancing the field. This week we’re featuring publications by some of our early research grant recipients! thinkingmachines.ai/blog/tinker-re…
Leena Mathur retweeted
Quanting Xie @DanielXieee
Why does manipulation lag so far behind locomotion? New post on one piece we don't talk about enough: the gearbox.

The Gap: You've probably seen those dancing humanoid robots from Chinese New Year. Locomotion isn't entirely solved, but clearly it's on a trajectory. We haven't seen anything close for manipulation. Why? When sim-to-real transfer fails, the instinct is to blame the algorithm: train bigger networks, crank up domain randomization. Those approaches have made real progress; we don't deny that. But we started wondering: are we treating the symptom or the disease?

The Hardware Bottleneck: Fingers are too small for powerful motors, so most hands use massive gearboxes (200:1, 288:1) to get enough torque. But those gearboxes break everything manipulation needs:
• Stiction and backlash are complex to simulate. Policies trained on smooth physics hallucinate when they hit that reality.
• Reflected inertia scales as N². At large gear ratios, the finger hits with sledgehammer momentum.
• Friction blocks force information. The hand becomes blind.
And they're the first thing to break.

What we are trying to build at Origami: we cut the gear ratio from 288:1 to 15:1 using axial flux motors and thermal optimization. The transmission becomes more transparent: backdrivable, low friction, and forces propagate to motor current. Early signs are encouraging; we're still running quantitative benchmarks.

Why interactive? I love how science centers use interactive devices to explain complex ideas. I want to borrow this concept to help people understand the hard problems in robotics more visually. The post has demos where you can toggle friction, slide gear ratios, and watch the sim-to-real gap widen in real time.

What's inside:
• Interactive demos (friction curves, N² scaling, contact patterns)
• A comparison table: 14 robot hands by sim-to-real gap and force transparency
• The math behind why low ratio matters

Read it here: origami-robotics.com/blog/dexterity… We're not claiming we've solved dexterity; the deadlock has many pieces. But we think this one is foundational. Curious what you think.
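The N² claim above is standard gear mechanics: rotor inertia seen at the joint output grows with the square of the reduction ratio. Here is a minimal back-of-the-envelope sketch; the rotor and link inertia values are illustrative assumptions, not Origami's or anyone's published specs, and only the N² relationship comes from the post.

```python
# Illustrative numbers only. Reflected inertia at the joint output of an N:1
# reduction is J_reflected = N^2 * J_motor, which adds to the link's own inertia.

def reflected_inertia(gear_ratio: float, rotor_inertia: float) -> float:
    """Rotor inertia as seen at the output side of an N:1 reduction."""
    return gear_ratio ** 2 * rotor_inertia

ROTOR_INERTIA = 1e-7   # kg*m^2, finger-sized rotor (assumed)
LINK_INERTIA = 2e-5    # kg*m^2, distal finger link about its joint (assumed)

for ratio in (288, 15):
    j_ref = reflected_inertia(ratio, ROTOR_INERTIA)
    total = j_ref + LINK_INERTIA
    print(f"N = {ratio:>3}: reflected = {j_ref:.2e} kg*m^2, "
          f"motor share of total = {100 * j_ref / total:.1f}%")

# N = 288: reflected = 8.29e-03 kg*m^2, motor share ~99.8%
# N =  15: reflected = 2.25e-05 kg*m^2, motor share ~52.9%
```

With these (assumed) numbers the motor dominates the fingertip's apparent inertia at 288:1, which is the "sledgehammer momentum" effect; cutting the ratio to 15:1 brings the reflected term back to the same order as the link itself.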
Leena Mathur retweeted
Quanting Xie @DanielXieee
Yesterday I gave a talk and a live demo at the Physical AI meetup to represent Origami Robotics. It was incredible to meet about 200 people who are enthusiastic about physical AI, and I’m happy to say that nobody fell asleep! It was a great honor to share the stage with founders from the hottest robotics startups right now: @tonyzzhao from @sundayrobotics, @kaandogrusoz from @weaverobotics , and @JackMonas from @1X What do you all want to see a live demo of next time? Leave your comments below!
Leena Mathur @lmathur_
Join us at the Interactive Physical AI workshop at #CVPR2026! 📢 The paper deadline is February 28.
NVIDIA AI Developer @NVIDIAAIDev

🎉 Announcing the first Interactive Physical AI Workshop at #CVPR2026. Join us for a half-day workshop exploring AI systems that see, communicate, and act safely in our shared physical world, including robots, environment-aware avatars (e.g., AR telepresence), and on-device multimodal agents.
✅ Cross-disciplinary topics spanning vision, robotics, and multimodal AI
✅ Featuring invited speakers (incl. Yaser Sheikh @subail), poster sessions, and spotlight talks
📅 Paper deadline is Feb 28: openreview.net/group?id=thecv…
More info: research.nvidia.com/labs/amri/proj…
🙌 Organized by: @swookpark, @amritamaz, @mct1224, @lmathur_, @luminohope and @shalinidemello
💡 Sponsored by NVIDIA. We look forward to seeing you at CVPR.
