Hao Su Lab

144 posts

@HaoSuLabUCSD

Researching at the frontier of AI on topics of Computer Vision, Computer Graphics, Robotics, Embodied AI, and Reinforcement Learning @UCSanDiego @haosu_twitr

San Diego, CA · Joined May 2021
212 Following · 3K Followers
Hao Su Lab retweeted
Shresth Grover@shroglc·
VLA models often forget their pretrained knowledge during action training, hurting generalization. 🤖Our framework unifies action & VLM training to preserve strong pretrained representations & maintain versatility, boosting generalization & robustness. gen-vla.github.io
Hao Su Lab retweeted
Bo Ai@BoAi0110·
We presented our work on studying multi-embodiment scaling at #CoRL2025 and were thrilled by the excitement around generalist cross-embodiment policies. A common question was: Will you move beyond locomotion? The answer is YES! Locomotion provides a clean starting point, but our long-term goal is to extend these ideas to more challenging domains such as manipulation, paving the way toward general cross-embodiment intelligence. Huge thanks to the amazing team for making this happen across three time zones: project co-leads @LiuDai_DL, @NicoBohlinger, Dichen Li, together with @tongzhou_mu, @ZhanxinWu0725, K. Fay, and advisors @hiskov, @Jan_R_Peters, @haosu_twitr.
Bo Ai@BoAi0110

🧠 Can a single robot policy control many, even unseen, robot bodies? We scaled training to 1000+ embodiments and found: More training bodies → better generalization to unseen ones. We call it: Embodiment Scaling Laws. A new axis for scaling. 🔗 embodiment-scaling-laws.github.io 🧵👇

Hao Su Lab retweeted
Bo Ai@BoAi0110·
Tongxuan’s work explores using generative models (diffusion) for state estimation and world model learning in cloth manipulation, a domain with significant visual occlusion and complex dynamics. He will give a 5-minute talk at 2:00 pm in the Simulating Robot Worlds workshop today (simulatingrobotworlds.github.io/schedule.html), and a 1-minute spotlight presentation from 3:00–3:30 pm on Sep 28 at #CoRL.
Tongxuan Tian@txtian_259

How to capture complex environment dynamics accurately from partial observations for world modeling? 🧐 Thrilled to share our recent work on world models for robotic manipulation - UniClothDiff: Diffusion Dynamics Models with Generative State Estimation for Cloth Manipulation, accepted to #CoRL2025 🎉. We target the challenging task of cloth manipulation, which involves partial observability due to severe self-occlusion, a high-dimensional state space, and highly non-linear dynamics. We enable robots, like humans, to imagine the state of the cloth through a mental model 🧠 and foresee its future state during folding! 🔗 uniclothdiff.github.io 📜 arxiv.org/abs/2503.11999

Hao Su Lab retweeted
Arth Shukla@arth_shukla·
📢 Introducing ManiSkill-HAB: A benchmark for low-level manipulation in home rearrangement tasks! - GPU-accelerated simulation - Extensive RL/IL baselines - Vision-based, whole-body control robot dataset All open-sourced: arth-shukla.github.io/mshab 🧵(1/5)
Hao Su Lab retweeted
Xuanlin Li (Simon)@XuanlinLi2·
Learning bimanual, contact-rich robot manipulation policies that generalize over diverse objects has long been a challenge. Excited to share our work: Planning-Guided Diffusion Policy Learning for Generalizable Contact-Rich Bimanual Manipulation! glide-manip.github.io 🧵1/n
Hao Su Lab retweeted
Xinyue Wei@SarahWeii·
🚀 Thrilled to announce the release of the reproduced MeshLRM demo! 🎉 Generate textured 3D meshes from one or more unposed images in seconds. Check it out: huggingface.co/spaces/sudo-ai…
Hao Su Lab retweeted
Yuchen Zhou@yuchen010807·
While the Segment Anything Model (SAM) greatly improves 2D segmentation annotation efficiency, is there a foundation model that works for 3D point clouds and meshes like SAM? Introducing Point-SAM, a 3D prompt segmentation foundation model! 👇 point-sam.github.io
Hao Su Lab retweeted
Hao Su@haosu_twitr·
Join us at our first workshop on 3D Foundation Models @CVPR2024, June 18 in Summit 434, starting at 8:50AM! We have fantastic speakers to discuss the progress and prospects in 3D foundation models. Check out more details at 3dfm.github.io
Hao Su Lab retweeted
Nicklas Hansen@ncklashansen·
🥳Excited to share: Hierarchical World Models as Visual Whole-Body Humanoid Controllers Joint work with @jyothir_s_v @vlad_is_ai @ylecun @xiaolonw @haosu_twitr Our method, Puppeteer, learns high-dim humanoid policies that look natural, in an entirely data-driven way! 🧵👇(1/n)
Hao Su Lab retweeted
Hao Su@haosu_twitr·
#ICRA2024 @LinghaoChen97 will present our differentiable rendering-based hand-eye calibration method, EasyHeC! May 16 13:30@CC-313 (oral); May 16 16:30-18:00@ThBT03.01 (poster) It produces accurate calibration results in a fully automatic manner! ootts.github.io/easyhec/
Hao Su Lab@HaoSuLabUCSD

Hand-eye calibration is critical for sim2real in robotics. We propose EasyHeC, a differentiable-rendering-based hand-eye calibration system that is highly accurate, automatic, & convenient, thus significantly reducing sim2real gap in object manipulation! ootts.github.io/easyhec/

Hao Su Lab retweeted
Stone Tao@Stone_Tao·
Don’t have a real robot/setup but want to evaluate policies trained on real-world datasets? Check out SIMPLER: fast, safe, and reliable evaluation of real robot policies in sim via ManiSkill 2. The ManiSkill 3 beta will port SIMPLER over soon, so stay tuned!
Xuanlin Li (Simon)@XuanlinLi2

Scalable, reproducible, and reliable robotic evaluation remains an open challenge, especially in the age of generalist robot foundation models. Can *simulation* effectively predict *real-world* robot policy performance & behavior? Presenting SIMPLER!👇 simpler-env.github.io

Hao Su Lab retweeted
Xuanlin Li (Simon)@XuanlinLi2·
Scalable, reproducible, and reliable robotic evaluation remains an open challenge, especially in the age of generalist robot foundation models. Can *simulation* effectively predict *real-world* robot policy performance & behavior? Presenting SIMPLER!👇 simpler-env.github.io
Hao Su Lab retweeted
Stone Tao@Stone_Tao·
📢 ManiSkill 3 beta is out! Simulate everything everywhere all at once 🥯 - 18K RGBD FPS on 1 GPU, 3K on Colab! - Diverse parallel GPU sim - Tons of new robots/tasks All open-sourced: github.com/haosulab/ManiS… Photo: MS3 Tasks w/ scenes from AI2THOR and ReplicaCAD 🧵(1/6)
Hao Su Lab retweeted
Stone Tao@Stone_Tao·
ManiSkill sneak peek 3: lots of new robots to use! Whether it's mobile manipulation, humanoids, quadrupeds, or even tactile dexterous hands (see the shadow hand at the bottom with red tactile sensors), we have a ton of new domains being added to try out on GPU state/visual sim
Hao Su Lab retweeted
Hao Su@haosu_twitr·
Check out DG-Mesh from @Isabella__Liu, which reconstructs time-consistent, high-quality dynamic meshes with flexible topology changes from monocular videos. liuisabella.com/DG-Mesh/
Isabella Liu@Isabella__Liu

Want to obtain time-consistent dynamic meshes from monocular videos? Introducing: Dynamic Gaussians Mesh: Consistent Mesh Reconstruction from Monocular Videos liuisabella.com/DG-Mesh/ We reconstruct meshes with flexible topology changes and build the correspondence across meshes. 🧵(1/n)
