Simon LC

88 posts


@simonlc_

Research scientist @ The AI Institute · Robotics & optimization PhD @ Stanford

New York · Joined September 2019
229 Following · 585 Followers
Pinned Tweet
Simon LC @simonlc_
Excited to share what I've been working on over the last few months! We taught Spot to lift, roll, and stack tires completely autonomously. It uses its arms, body, and legs to manipulate heavy objects at speed.
1 reply · 0 reposts · 3 likes · 204 views
Simon LC @simonlc_
We also show RL-based tire uprighting, where the tire literally flies into the air. This illustrates just how dynamic whole-body manipulation can be.
1 reply · 0 reposts · 0 likes · 78 views
Simon LC retweeted
Kuan Fang @KuanFang
Our new paper shows how task representations learned via temporal alignment enable compositional generalization for conditional policies. This allows robots to solve compound tasks by implicitly decomposing them into subtasks.
Vivek Myers @vivek_myers

Current robot learning methods are good at imitating tasks seen during training, but struggle to compose behaviors in new ways. When training imitation policies, we found something surprising—using temporally-aligned task representations enabled compositional generalization. 1/

1 reply · 3 reposts · 17 likes · 3.4K views
Simon LC retweeted
Xuanlin Li (Simon) @XuanlinLi2
Learning bimanual, contact-rich robot manipulation policies that generalize over diverse objects has long been a challenge. Excited to share our work: Planning-Guided Diffusion Policy Learning for Generalizable Contact-Rich Bimanual Manipulation! glide-manip.github.io 🧵1/n
1 reply · 19 reposts · 74 likes · 20.9K views
Simon LC retweeted
Preston Culbertson @pdculbert
ICYMI: For #CoRL2024 we released a dataset of 3.5M (!) dexterous grasps, with multi-trial labels and perceptual data for 4.3k objects. Our takeaways: scale matters, and refining grasps > better sampling. Hoping our data can enable more vision-based grasps in hardware!
Albert Li @albert_h_li

There have been many recent big grasping datasets, but few demos of real-world grasping using generative models. How do we achieve this? Introducing: Get a Grip (#corl2024)! We show that instead of generative models, discriminative models can attain sim2real transfer! 👀🧵👇

0 replies · 1 repost · 15 likes · 1.1K views
Simon LC @simonlc_
This is joint work with an amazing team! Jan Brüdigam, Ali Abbas, @initmaks, @KuanFang, Brandon Hung, Maya Guru, Stefan Sosnowski, Jiuguang Wang, and Sandra Hirche. Jan did an awesome job leading the project during his internship at @the_ai_inst. So proud of what we accomplished!
0 replies · 0 reposts · 0 likes · 192 views
Simon LC @simonlc_
We used reinforcement learning bootstrapped with expert planner demonstrations to learn robust policies. We deployed them in several hardware scenarios using Boston Dynamics' Spot.
1 reply · 0 reposts · 0 likes · 136 views
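The "RL bootstrapped with expert planner demonstrations" idea can be illustrated in miniature. Below is a toy sketch (not the paper's code): a tabular Q-table is warm-started from scripted "planner" demonstrations, then refined with ordinary epsilon-greedy Q-learning. The corridor environment, reward, and hyperparameters are all illustrative assumptions.

```python
# Toy sketch: bootstrapping RL with expert demonstrations by seeding a
# tabular Q-table from planner rollouts, then refining with Q-learning.
import random

N = 6            # 1-D corridor of states 0..N-1; goal at state N-1
ACTIONS = [-1, +1]

def step(s, a):
    s2 = max(0, min(N - 1, s + a))
    reward = 1.0 if s2 == N - 1 else 0.0
    return s2, reward, s2 == N - 1

# 1) "Expert planner" demonstrations: always move right toward the goal.
demos = []
for _ in range(5):
    s = 0
    while True:
        s2, r, done = step(s, +1)
        demos.append((s, +1, r, s2))
        s = s2
        if done:
            break

# 2) Seed the Q-table from demos (repeated Bellman backups on transitions).
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
gamma = 0.9
for _ in range(50):
    for s, a, r, s2 in demos:
        Q[(s, a)] = r + gamma * max(Q[(s2, b)] for b in ACTIONS)

# 3) Refine with epsilon-greedy Q-learning from the warm start.
random.seed(0)
alpha, eps = 0.5, 0.1
for _ in range(200):
    s = random.randrange(N - 1)
    for _ in range(2 * N):
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
        if done:
            break

policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N - 1)}
print(policy)  # every non-goal state should prefer moving right (+1)
```

The warm start means exploration begins near the demonstrated behavior instead of from scratch; the real system applies the same idea with continuous whole-body control rather than a toy grid.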
Simon LC @simonlc_
I'm excited to share Jacta: A Versatile Planner for Learning Dexterous and Whole-body Manipulation. We use sampling-based planning to bootstrap policy learning methods for manipulation tasks. My friend, Jan Brüdigam is presenting the work today at CoRL! jacta-manipulation.github.io
3 replies · 13 reposts · 71 likes · 5.5K views
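"Sampling-based planning" in its simplest form is random shooting: sample many candidate action sequences, roll each through the dynamics, and keep the lowest-cost one. Here is a toy sketch on a 1-D point mass (not Jacta itself; the dynamics, cost, horizon, and sample budget are illustrative assumptions):

```python
# Toy random-shooting planner: sample action sequences, simulate each,
# keep the best. Plans like these can seed policy learning.
import random

DT, HORIZON, N_SAMPLES = 0.1, 20, 500
GOAL = 1.0

def rollout(x, v, actions):
    """Simulate the point mass and return the terminal cost."""
    for a in actions:
        v += a * DT
        x += v * DT
    return (x - GOAL) ** 2 + 0.1 * v ** 2  # reach the goal, then stop

def plan(x0, v0, rng):
    best_cost, best_plan = float("inf"), None
    for _ in range(N_SAMPLES):
        actions = [rng.uniform(-1.0, 1.0) for _ in range(HORIZON)]
        cost = rollout(x0, v0, actions)
        if cost < best_cost:
            best_cost, best_plan = cost, actions
    return best_plan, best_cost

rng = random.Random(0)
plan_actions, cost = plan(0.0, 0.0, rng)
print(f"best sampled plan cost: {cost:.4f}")
```

The appeal for manipulation is that the sampler only needs a simulator, not gradients or a reward shaped for RL; the sampled plans can then serve as the bootstrap demonstrations for a learned policy.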
Simon LC retweeted
Albert Li @albert_h_li
Excited to share our new📰, DROP: Dexterous Reorientation via Online Planning! Overview: 🔹We tackle cube rotation🧊♻️on hardware 🔹DROP is the first 🧊♻️sampling-based MPC demo. No reinforcement learning! 🔹Median 30.5 rotations w/o dropping, max of 81👑🦾 See 🧵below👇
1 reply · 25 reposts · 101 likes · 16K views
Simon LC retweeted
Naoki Yokoyama @naokiyokoyama0
Excited to share our latest work, Vision-Language Frontier Maps – a SOTA approach for semantic navigation in robotics. VLFM enables robots to navigate and find objects in novel environments using vision-language foundation models, zero-shot! Accepted to #ICRA2024! 🧵
1 reply · 37 reposts · 203 likes · 37.3K views
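The frontier-map idea behind VLFM can be sketched in a few lines: frontiers are free cells adjacent to unknown space, and a semantic value map picks which frontier to explore next. This toy sketch is not VLFM's code; the grid and the hand-set per-column "values" (standing in for a learned vision-language score) are illustrative assumptions.

```python
# Toy frontier-based exploration: find free cells bordering unknown
# space, then pick the frontier with the highest semantic value.
FREE, UNKNOWN, WALL = ".", "?", "#"

grid = [
    "....?",
    ".##.?",
    "..#??",
]
# Hypothetical "value" per column (stand-in for a learned value map
# scoring how promising each direction looks for the target object).
value = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.6, 4: 0.9}

def frontiers(grid):
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == UNKNOWN:
                    out.append((r, c))
                    break
    return out

best = max(frontiers(grid), key=lambda rc: value[rc[1]])
print(best)  # frontier cell in the highest-value column
```

Scoring frontiers with a vision-language value map, rather than picking the nearest one, is what makes the exploration "semantic": the robot heads toward regions that look like they contain the queried object.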