Sudharshan Suresh
@Suddhus

288 posts
Research scientist & tech lead @BostonDynamics Atlas // Prev: @AIatMeta and PhD at @CMU_Robotics.

Cambridge, MA · Joined April 2009
1.1K Following · 1.1K Followers

Pinned Tweet
Sudharshan Suresh @Suddhus
I'm a featured interview in our latest behind-the-scenes release! We break down the ML and perception that drives the whole-body manipulation behaviors from last year. It starts with a neat demo of Atlas's range-of-motion and our vision foundation models. youtu.be/oe1dke3Cf7I?si…
6 replies · 7 reposts · 54 likes · 6.1K views
Sudharshan Suresh reposted
Carolina Higuera @carohiguerarias
Most world models for robot manipulation learn physics from pixels. But pixels don’t see it all. Can we ground these models in the "feeling" of contact to disambiguate visually identical states? Visuo-Tactile World Models (VT-WM): robot imagination in a shared space👇
3 replies · 16 reposts · 98 likes · 9.2K views
Sudharshan Suresh reposted
Chuer Pan @chuer_pan
Check out UMI-FT, the newest member of the UMI family! 🐣 We integrated a coin-sized 🪙, low-cost, 6-axis force-torque sensor behind each UMI Fin Ray finger. The sensor is physically compact, mechanically robust, and extremely easy to calibrate against ATI -- it directly measures physically meaningful force/torque outputs (N/Nm).

UMI-FT enables in-the-wild robot learning with force-aware manipulation data, without needing in-the-wild robots or relying on traditional, bulky, thousand-dollar F/T sensors!

A cross-paradigm bonus: because the CoinFT sensor outputs physically meaningful force/torque values (N/Nm), you can directly run standard admittance control to get F/T-aware compliant robot fingers -- no learning required, immediately useful!

Check out @Hojung_Choi_'s post for details!
Hojung Choi@Hojung_Choi_

Robots excel at learning motions from humans, but can they also learn to apply force safely? 💪 Introducing UMI-FT: the UMI gripper equipped with force/torque sensors (CoinFT) on each finger. Multimodal data from UMI-FT, combined with diffusion policy and compliance control, enables robots to apply sufficient yet safe force for task completion. UMI-FT Project website: umi-ft.github.io CoinFT Project website: coin-ft.github.io

1 reply · 3 reposts · 40 likes · 3.1K views
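The admittance-control idea mentioned in the UMI-FT thread can be sketched concretely. Below is a minimal, hypothetical 1-DoF admittance loop (the function name, gains, and units are illustrative assumptions, not taken from UMI-FT or CoinFT): the force error drives a virtual mass-spring-damper, and the virtual position is what you would feed to the finger's position controller.

```python
def admittance_step(x, v, f_meas, f_des, dt, m=1.0, d=20.0, k=100.0):
    """One Euler step of the admittance law M*x'' + D*x' + K*x = f_meas - f_des.

    x, v   : virtual position (m) and velocity (m/s) of the compliance frame
    f_meas : measured contact force (N), e.g. from an F/T sensor
    f_des  : desired contact force (N)
    Returns the updated (x, v); x is the compliant offset commanded to the finger.
    """
    a = (f_meas - f_des - d * v - k * x) / m  # virtual acceleration
    v = v + a * dt
    x = x + v * dt
    return x, v
```

Under a constant excess force, the virtual spring settles at x = (f_meas - f_des) / K, so the finger yields in proportion to how hard it is pressing; a stiffer K yields less, a softer K more.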
Sudharshan Suresh @Suddhus
It's been amazing to work with @HaozhiQ and I've learnt a lot from him, excited for his new lab!
Haozhi Qi@HaozhiQ

I will join UChicago CS @UChicagoCS as an Assistant Professor in late 2026, and I'm recruiting PhD students in this cycle (2025 - 2026). My research focuses on AI & Robotics, including dexterous manipulation, humanoids, tactile sensing, learning from human videos, robot systems, and anything needed to make robots truly work and improve everyday life. I also place a strong emphasis on open source. Check my homepage to learn more: haozhi.io. Please reach out if you are interested! The deadline is Dec 11th. Link: tinyurl.com/uchiapp.

0 replies · 1 repost · 4 likes · 2.1K views
Sudharshan Suresh @Suddhus
@luisenp Really sorry to hear that, Luis; working with you was so meaningful for my PhD work. Hope you are doing alright!
1 reply · 0 reposts · 14 likes · 5.4K views
Luis Pineda @luisenp
After 7 years at FAIR, I've been affected by the recent AI layoffs. If you are interested in robotics learning, let's chat :)
61 replies · 53 reposts · 1K likes · 135.7K views
Gabe Margolis @gabe_mrgl
Excited to share SoftMimic -- a new approach for learning compliant humanoid policies that interact gently with the world.
14 replies · 108 reposts · 626 likes · 64.1K views
Homanga Bharadhwaj @mangahomanga
I'll be joining the faculty @JohnsHopkins late next year as a tenure-track assistant professor in @JHUCompSci. Looking for PhD students to join me in tackling fun problems in robot manipulation, learning from human data, understanding + predicting physical interactions, and beyond!
87 replies · 113 reposts · 854 likes · 130.7K views
Sudharshan Suresh reposted
Ted Xiao @xiao_ted
Same for ego data, UMI data, etc. An open secret is that “we used Ego4D” actually means filtering out the 1% of videos that are vaguely useful for learning. Occlusions, suboptimality, sensor noise, and so many pitfalls! Modeling + collection co-design is the only gold standard currently.
1 reply · 4 reposts · 50 likes · 8.6K views
Sudharshan Suresh reposted
Kevin Zakka @kevin_zakka
I'm super excited to announce mjlab today! mjlab = Isaac Lab's APIs + best-in-class MuJoCo physics + massively parallel GPU acceleration. Built directly on MuJoCo Warp, with the abstractions you love.
32 replies · 138 reposts · 866 likes · 91K views
Jason Ma @JasonMa2020
We have raised $120M to accelerate our mission of building and delivering high-performance general-purpose robots to the physical world.

Within one year, we have made research breakthroughs, showing that it is possible to achieve real-world reliability with large VLAs, and demonstrated commercial and deployment traction, with our DYNA-1 models running live at sites in SF, LA, and Sacramento. This is just the beginning, and I have never felt more optimistic about a future where AI-powered robots can positively impact human productivity.

From starting my PhD, when robot policies barely worked even in highly controlled settings, to now deploying DYNA robots with confidence in out-of-the-box model performance, it's astonishing how quickly we've gone from "it works in the lab" to "it works in the world." The next frontier isn't about proving robots can move; it's about proving they can reliably help in real-world environments, at scale, across industries. That's what we're building at @DynaRobotics.

The impact of this shift will be massive:
- Unlocking productivity across logistics, manufacturing, and beyond.
- Expanding what small teams and businesses can achieve.
- Freeing humans to focus on higher-level creativity, problem-solving, and connection.

The mission is bigger than any single deployment. It's about ushering in an era where general-purpose robots are as ubiquitous and trusted as computers or smartphones. We're just getting started, and I couldn't be more excited for what's ahead. Join us! 🚀🤖
Dyna Robotics@DynaRobotics

Excited to announce that we have raised $120M in our Series A to advance the frontier of general-purpose high-performance robots. 🤖 The new funding will accelerate progress towards our mission of bringing foundation-model powered robots to everyone, everywhere. Read more 👇

42 replies · 31 reposts · 427 likes · 68.1K views
Sudharshan Suresh reposted
Lawrence Yunzhou Zhu @LawrenceZhu22
Can we scale up mobile manipulation with egocentric human data? Meet EMMA: Egocentric Mobile MAnipulation EMMA learns from human mobile manipulation + static robot data — no mobile teleop needed! EMMA generalizes to new scenes and scales strongly with added human data. 1/9
10 replies · 63 reposts · 416 likes · 79.1K views
Sudharshan Suresh @Suddhus
Lucas and co. wrote a great blogpost on the careful science and engineering behind language-conditioned policies for whole-body manipulation! There's a lot more work on the horizon; our team is hiring researchers to scale egocentric human data and VLMs for robotics. Reach out!
Lucas Manuelli@lucas_manuelli

Today I’m proud to share what I’ve been working on recently with my team at @BostonDynamics along with our collaborators at @ToyotaResearch . bostondynamics.com/blog/large-beh…

0 replies · 0 reposts · 11 likes · 852 views
Sudharshan Suresh reposted
Russ Tedrake @RussTedrake
TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website: toyotaresearchinstitute.github.io/lbm1/ One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the technology, and to share a lot of details for how we're achieving it. youtube.com/watch?v=BEXFnr…
8 replies · 102 reposts · 486 likes · 86.8K views
Sudharshan Suresh reposted
Siddhant Haldar @haldar_siddhant
Current robot policies often face a tradeoff: they're either precise (but brittle) or generalizable (but imprecise). We present ViTaL, a framework that lets robots generalize precise, contact-rich manipulation skills across unseen environments with millimeter-level precision. 🧵
9 replies · 78 reposts · 592 likes · 145.8K views
Sudharshan Suresh reposted
Generalist @GeneralistAI
Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early results in autonomous general-purpose dexterous capabilities – fast, reactive, smooth, precise, bi-manual coordinated sensorimotor control.
33 replies · 143 reposts · 861 likes · 276.5K views
Sudharshan Suresh reposted
Mandi Zhao @ZhaoMandi
How to learn dexterous manipulation for any robot hand from a single human demonstration? Check out DexMachina, our new RL algorithm that learns long-horizon, bimanual dexterous policies for a variety of dexterous hands, articulated objects, and complex motions.
21 replies · 94 reposts · 622 likes · 119.1K views