Stanford IPRL Lab
@StanfordIPRL

87 posts

Stanford Interactive Perception and Robot Learning Lab. Directed by Jeannette Bohg @leto__jean

Stanford · Joined May 2019
52 Following · 1.7K Followers
Stanford IPRL Lab @StanfordIPRL ·
In our latest work, we present a sim2real policy that can manipulate a wide range of tools in particularly difficult ways! Led by @kushalk_ and @tylerlum23. Check out the paper & a detailed thread below:
Kushal@kushalk_

🤖 Can a single robot policy manipulate diverse tools without ever seeing them before? Introducing SimToolReal 🔨 : a generalist dexterous manipulation policy that transfers zero-shot sim→real to unseen tools + unseen tasks All videos are 1x speed (60 Hz control) 🧵👇

Stanford IPRL Lab @StanfordIPRL ·
Congratulations to our amazing advisor @leto__jean for receiving tenure! 🎉🎊 We’re all so grateful to be able to work with her! We celebrated her incredible achievement with lab alumni at a surprise party this weekend :)
Stanford IPRL Lab retweeted
Jeannette Bohg @leto__jean ·
Mobile manipulators encounter very interesting and diverse scenarios. But this comes with challenges 🧐 📚 More diversity needs more data. 🦾 More degrees of freedom make teleop harder. HoMeR combines a whole-body controller with a hybrid policy to easily train mobile manipulators.
Priya Sundaresan@priyasun_

How can we move beyond static-arm lab setups and learn robot policies in our messy homes? We introduce HoMeR, an imitation learning agent for in-the-wild mobile manipulation. 🧵1/8

Stanford IPRL Lab retweeted
Jingyun Yang @yjy0625 ·
Introducing Mobi-π: Mobilizing Your Robot Learning Policy. Our method: ✈️ enables flexible mobile skill chaining 🪶 without requiring additional policy training data 🏠 while scaling to unseen scenes 🧵↓
Stanford IPRL Lab retweeted
Jeannette Bohg @leto__jean ·
Enabling robots to learn from humans would be a game changer! But humans may manipulate objects in a way that is impossible for robots. To cross this embodiment gap, Human2Sim2Robot follows this key insight: Don't imitate a demonstration but instead use it to guide RL.
Tyler Lum@tylerlum23

🧑🤖 Introducing Human2Sim2Robot!  💪🦾 Learn robust dexterous manipulation policies from just one human RGB-D video. Our Real→Sim→Real framework crosses the human-robot embodiment gap using RL in simulation. #Robotics #DexterousManipulation #Sim2Real 🧵1/7

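The core idea in the thread above, using a demonstration to guide RL rather than imitating it, can be sketched as a shaped reward: the robot is rewarded for making the *object* follow the trajectory seen in the human video, while the task reward is left free to discover robot-feasible motions. This is a minimal toy sketch under assumed names (`guided_reward`, `demo_object_traj`, the weight `w` are all illustrative, not the paper's actual formulation):

```python
import numpy as np

def guided_reward(state, object_pose, demo_object_traj, t, task_reward_fn, w=0.5):
    """Toy shaped reward: task-success term plus a demo-guidance term.

    Instead of imitating the human's (possibly robot-infeasible) hand
    motion, reward the robot for driving the object along the object
    trajectory observed in the human demo.
    """
    # Guidance: negative distance to the demo's object pose at this step
    # (positions only, for brevity).
    target = demo_object_traj[min(t, len(demo_object_traj) - 1)]
    guidance = -np.linalg.norm(object_pose - target)
    # Task term, e.g. a sparse bonus once the object reaches its goal.
    return task_reward_fn(state) + w * guidance
```

The guidance term shrinks to zero whenever the object tracks the demo, so the policy is free to reach that outcome with its own embodiment.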
Stanford IPRL Lab retweeted
Olivia Lee @olivia_y_lee ·
Excited to share our recent work! Our framework crosses the human-robot embodiment gap for dexterous manipulation. With just one human video demo to guide RL in sim, the robot learns optimal strategies for its own embodiment instead of imitating human motion. Details in the 🧵
Tyler Lum@tylerlum23

🧑🤖 Introducing Human2Sim2Robot!  💪🦾 Learn robust dexterous manipulation policies from just one human RGB-D video. Our Real→Sim→Real framework crosses the human-robot embodiment gap using RL in simulation. #Robotics #DexterousManipulation #Sim2Real 🧵1/7

Stanford IPRL Lab retweeted
Jeannette Bohg @leto__jean ·
We are scaling up robot data collection WITHOUT robots, wearables or motion capture. You just need your own human hand🖐️, arm 💪and an RGB-D camera 📷 Essentially, we exchange the human with a robot in our video training data. 👇Check @marionlepert thread for the details.
Marion Lepert@marionlepert

Introducing Phantom 👻: a method to train robot policies without collecting any robot data — using only human video demonstrations. Phantom turns human videos into "robot" demonstrations, making it significantly easier to scale up and diversify robotics data. 🧵1/9

Stanford IPRL Lab retweeted
Jeannette Bohg @leto__jean ·
Contact-rich, dexterous manipulation benefits from tactile sensing in two ways: 🥇 by providing force-informed actions in human demonstrations, and 🥈 as input to learned manipulation policies. In DexForce 🦾, @clairelchen shows that Point 🥇 in particular is crucial.
Claire Chen@clairelchen

Contact-rich dexterous manipulation, like opening an AirPods case or unscrewing a nut, requires a robot to apply the right forces at the right moments. Our new work DexForce leverages force sensing to get actions that enable robot hands to perform these contact-rich tasks. (1/6)

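One common way to turn measured contact forces into "force-informed actions", in the spirit of the DexForce tweets above, is to offset a commanded fingertip target along the measured force so that a stiffness controller reproduces that force at contact. This is a hypothetical sketch of that general idea, not the paper's actual formulation; the function name and the stiffness value are assumptions:

```python
import numpy as np

def force_informed_action(fingertip_pos, contact_force, stiffness=200.0):
    """Offset a fingertip target along the applied force.

    With a proportional controller  f = stiffness * (target - actual),
    commanding  target = actual + f_desired / stiffness  makes the
    controller press with roughly f_desired at contact, instead of
    merely touching the surface with zero force.
    """
    return np.asarray(fingertip_pos) + np.asarray(contact_force) / stiffness
```

A purely kinematic demonstration would set `contact_force = 0` and the action reduces to the measured position, which is exactly what fails on tasks that need deliberate pressing.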
Stanford IPRL Lab retweeted
Jeannette Bohg @leto__jean ·
We are releasing a big dexterous grasping dataset today: 3.5M grasps annotated with success labels and perceptual data. With this dataset we demonstrate the power of grasp evaluators that take a grasp as input and return its quality. #CoRL2024 For details see Albert's 🧵
Albert Li@albert_h_li

There have been many recent big grasping datasets, but few demos of real-world grasping using generative models. How do we achieve this? Introducing: Get a Grip (#CoRL2024)! We show that instead of generative models, discriminative models can attain sim2real transfer! 👀🧵👇

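The discriminative recipe described above, a learned evaluator that takes a grasp and returns its quality, is typically used by scoring sampled candidates and executing the best one. A minimal sketch, with the evaluator left as a function argument standing in for the learned model (all names illustrative):

```python
import numpy as np

def select_best_grasp(candidates, evaluator):
    """Rank sampled grasp candidates with an evaluator and return the
    highest-scoring one together with its score.

    `evaluator(grasp) -> quality` stands in for a learned discriminative
    model trained on (grasp, success-label) pairs.
    """
    scores = np.array([evaluator(g) for g in candidates])
    best = int(np.argmax(scores))
    return candidates[best], float(scores[best])
```

The appeal of this setup is that the sampler can be cheap and noisy; the evaluator filters out the bad proposals.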
Stanford IPRL Lab retweeted
Jeannette Bohg @leto__jean ·
This was a really fun collaboration with the @PortalCornell lab! APRICOT can perform tasks according to user preferences while adhering to physical constraints. It achieves this by combining LLMs with Bayesian active learning. Check out @sanjibac's 🧵
Sanjiban Choudhury@sanjibac

How can we enable LLMs to actively clarify ambiguous task specifications by gathering information from humans? Check out APRICOT at #CoRL2024! APRICOT combines LLMs, which propose diverse questions, with Bayesian Active Learning, which selects the most informative one to ask.

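The selection step described above, Bayesian active learning picking the most informative of the LLM-proposed questions, is classically done by maximizing expected information gain over a belief about the user's intended task. A toy sketch under assumed names (the belief, the answer model, and the function names are all illustrative, not APRICOT's actual implementation):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def most_informative_question(belief, answer_model):
    """Pick the question with maximal expected information gain.

    belief: prior over K candidate task specifications, shape (K,).
    answer_model: array (Q, A, K) giving P(answer a | question q, spec k).
    Returns (question index, expected entropy reduction).
    """
    belief = np.asarray(belief, dtype=float)
    h0 = entropy(belief)
    best_q, best_gain = 0, -1.0
    for q in range(answer_model.shape[0]):
        gain = h0
        for a in range(answer_model.shape[1]):
            p_a = float(answer_model[q, a] @ belief)     # P(answer a)
            if p_a <= 0:
                continue
            post = answer_model[q, a] * belief / p_a     # Bayes update
            gain -= p_a * entropy(post)                  # expected posterior entropy
        if gain > best_gain:
            best_q, best_gain = q, gain
    return best_q, best_gain
```

A question whose answer perfectly splits the candidate specifications gains the full prior entropy; a question every specification answers the same way gains nothing.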
Stanford IPRL Lab retweeted
Jeannette Bohg @leto__jean ·
Cross-embodiment transfer is integral to scaling robot learning. Shadow 🌘 zero-shot transfers policies to unseen robots. We show transfer to new robot arms, but also to new robot grippers, which is much harder. Check @marionlepert's thread for details! Talk to us at #CoRL2024
Marion Lepert@marionlepert

Introducing Shadow: a cross-embodiment policy transfer method for robotics. Shadow enables training a policy on one robot and successfully deploying it on a different, unseen robot, with no extra data required! 🦾🤖 To be presented at #Corl2024 (1/6)

Stanford IPRL Lab retweeted
Jeannette Bohg @leto__jean ·
We dramatically sped up Diffusion policies through consistency distillation. With the resulting single step policy, we can run fast inference on laptop GPUs and robot on-board compute. 👇
Aaditya Prasad 🇺🇸@_Aaditya_Prasad

Diffusion Policies are powerful and widely used. We made them much faster. Consistency Policy bridges consistency distillation techniques to the robotics domain and enables 10-100x faster policy inference with comparable performance. Accepted at #RSS2024

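The speedup claimed above comes from replacing an iterative denoising loop (one network call per step) with a distilled consistency model that maps noise to a clean action in a single call. This toy sketch uses placeholder functions instead of real networks, purely to show where the 10-100x in network evaluations comes from (all names are illustrative):

```python
class CallCounter:
    """Wrap a 'denoiser' function to count network evaluations."""
    def __init__(self, fn):
        self.fn, self.calls = fn, 0

    def __call__(self, *args):
        self.calls += 1
        return self.fn(*args)

def diffusion_sample(denoiser, x, n_steps=100):
    # Iterative diffusion sampler: one network call per denoising step.
    for t in reversed(range(n_steps)):
        x = denoiser(x, t)
    return x

def consistency_sample(consistency_fn, x):
    # Distilled consistency model: a single call maps noise directly to
    # the clean action, so inference cost drops by the step count.
    return consistency_fn(x, 0)
```

With `n_steps=100`, the diffusion sampler makes 100 network calls where the consistency model makes one, which is what makes on-robot and laptop-GPU inference practical.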
Stanford IPRL Lab retweeted
Jeannette Bohg @leto__jean ·
It is difficult for robots to retrieve objects in densely cluttered environments. We propose tactile-informed action primitives to mitigate jamming in dense clutter. Today (Wed) at #ICRA2024. Session: Force and Tactile Sensing IV, Room: AX - F204, Time: 10:30-12:00 🧵