Helping Hands Lab @ Northeastern
@HelpingHandsLab

91 posts

🤖 Robotic Manipulation | Reinforcement Learning | PI Rob Platt @RobotPlatt @KhouryCollege @Northeastern

Boston, Massachusetts · Joined March 2022
89 Following · 315 Followers

Pinned Tweet
Helping Hands Lab @ Northeastern @HelpingHandsLab:
SO(3)-equivariant policy learning in the RGB space needs much more exploration. Really like how this work streamlines the pipeline while preserving full symmetry. Amazing work from Boce. Highly recommend checking it out. See you at #NeurIPS2025!
Quoting Boce Hu @boce_hu:
Closed-loop visuomotor control with wrist cameras is widely adopted and powerful, but leveraging full 3D symmetry from only RGB is still hard. Introducing our #NeurIPS2025 Spotlight paper, Image-to-Sphere Policy (ISP), for equivariant policy learning from eye-in-hand RGB images.

Retweeted by Helping Hands Lab @ Northeastern
Boce Hu @boce_hu:
Closed-loop visuomotor control with wrist cameras is widely adopted and powerful, but leveraging full 3D symmetry from only RGB is still hard. Introducing our #NeurIPS2025 Spotlight paper, Image-to-Sphere Policy (ISP), for equivariant policy learning from eye-in-hand RGB images.

Retweeted by Helping Hands Lab @ Northeastern
Dian Wang @Dian_Wang_:
Equivariant policies typically require depth input for a symmetric representation; what if we only have a wrist-mounted RGB camera? ISP projects the RGB image onto a sphere for 3D equivariant reasoning. @boce_hu and I will be at #NeurIPS2025 to present this paper, Fri. poster session 6 #2315!
Quoting Boce Hu @boce_hu's ISP announcement above.
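The core geometric idea in these ISP tweets, lifting an eye-in-hand RGB image onto a sphere, can be sketched with the standard pinhole camera model: each pixel maps to a unit viewing ray on S², so a rotation of the camera acts on the spherical features by rotating sample directions. This is an illustrative sketch only; the image size and intrinsics below are made-up values, not the paper's.

```python
import numpy as np

# Hedged sketch of the image-to-sphere step: lift each pixel of an
# eye-in-hand image to its unit viewing ray on S^2 via pinhole geometry.
H, W = 64, 64            # image size (assumed for illustration)
fx = fy = 50.0           # focal lengths (assumed)
cx, cy = W / 2, H / 2    # principal point (assumed)

u, v = np.meshgrid(np.arange(W), np.arange(H))
rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(u, dtype=float)], axis=-1)
rays /= np.linalg.norm(rays, axis=-1, keepdims=True)  # unit directions on the sphere

print(rays.shape)                                        # (64, 64, 3)
print(np.allclose(np.linalg.norm(rays, axis=-1), 1.0))   # True
```

Features attached to these directions transform equivariantly under camera rotations, which is what makes SO(3)-equivariant reasoning from RGB alone possible.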

Helping Hands Lab @ Northeastern @HelpingHandsLab:
Our new #ICML2025 paper, led by @ZhaoHaibo47588, presents a hierarchical equivariance architecture that enables multi-level sample efficiency in visuomotor policy learning. Check it out for more details!
Quoting Haibo Zhao @ZhaoHaibo47588:
Excited to share our #ICML2025 paper, Hierarchical Equivariant Policy via Frame Transfer. Our Frame Transfer interface imposes the high-level decision as a coordinate frame change on the low-level policy, boosting sim performance by 20%+ and enabling complex manipulation with only 30 demos.
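The "frame transfer" interface described above can be illustrated with a minimal sketch: the high-level policy outputs a pose (R, t), and the low-level policy then observes and acts in the coordinate frame that pose defines. All function names here are illustrative, not from the paper's code.

```python
import numpy as np

# Hedged sketch, assuming the high-level decision is a rigid pose (R, t):
# expressing low-level observations in that frame is a coordinate change.

def pose_to_frame(R, t):
    """Homogeneous world->frame transform for a high-level pose (R, t)."""
    T = np.eye(4)
    T[:3, :3] = R.T
    T[:3, 3] = -R.T @ t
    return T

def to_local(points_world, R, t):
    """Express world-frame points in the high-level frame: R^T (p - t)."""
    T = pose_to_frame(R, t)
    homog = np.concatenate([points_world, np.ones((len(points_world), 1))], axis=1)
    return (homog @ T.T)[:, :3]

# An identity high-level frame leaves points unchanged.
pts = np.array([[1.0, 2.0, 3.0]])
print(to_local(pts, np.eye(3), np.zeros(3)))  # [[1. 2. 3.]]
```

Because the low-level policy only ever sees frame-relative coordinates, a symmetry transform applied to the scene can be absorbed into the high-level frame, which is one way the hierarchy preserves equivariance.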

Retweeted by Helping Hands Lab @ Northeastern
Helping Hands Lab @ Northeastern @HelpingHandsLab:
#CoRL2024 IMAGINATION POLICY: Using Generative Point Cloud Models for Learning Manipulation Policies, led by @HaojieHuang13. A keyframe multi-task policy that generates ("imagines") key poses and performs precise, sample-efficient manipulation. Presenting at Poster Session 4.

Helping Hands Lab @ Northeastern @HelpingHandsLab:
#CoRL2024 ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter, by @RubyFreax. A plug-and-play vision-language grasping system that uses GPT-4o's contextual reasoning to plan grasping strategies in heavily cluttered environments. Presenting at Poster Session 3.

Helping Hands Lab @ Northeastern @HelpingHandsLab:
#CoRL2024 Equivariant Diffusion Policy, led by @Dian_Wang_. A sample-efficient behavior cloning algorithm based on equivariant diffusion. It leverages symmetry to learn with 5x less training data and master complex tasks with <60 demos. Presenting at Oral Session 1 and Poster Session 2.

Helping Hands Lab @ Northeastern @HelpingHandsLab:
#CoRL2024 Leveraging Mutual Information for Asymmetric Learning under Partial Observability, led by @HaiNguy69482974. It addresses asymmetric learning under partial observability (state available at training time) by rewarding actions that lead to histories carrying information about the state.
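The idea of rewarding histories that are informative about the hidden state can be illustrated with the simplest possible proxy: an empirical mutual information estimate between a discrete history feature and the state over a batch. This is a generic sketch of the concept, not the paper's estimator or reward shaping scheme.

```python
import math
from collections import Counter

# Hedged sketch: empirical mutual information I(feature; state) from paired
# discrete samples, the kind of quantity one could use as an information bonus.

def empirical_mi(xs, ys):
    """Plug-in MI estimate (in nats) from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        pj = c / n
        mi += pj * math.log(pj / ((px[x] / n) * (py[y] / n)))
    return mi

# A feature that perfectly identifies the state achieves MI = H(state) = log 2.
print(empirical_mi([0, 0, 1, 1], [0, 0, 1, 1]))  # 0.693... (log 2)
# An independent feature carries no information about the state.
print(empirical_mi([0, 1, 0, 1], [0, 0, 1, 1]))  # 0.0
```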

Helping Hands Lab @ Northeastern @HelpingHandsLab:
#CoRL2024 OrbitGrasp: SE(3)-Equivariant Grasp Learning, led by @boce_hu. OrbitGrasp maps each point in the cloud to a continuous grasp quality function using spherical harmonics. Our method outperforms all baselines across all settings and tasks. Presenting at Poster Session 1.
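A continuous quality function over directions on S², as the OrbitGrasp tweet describes, is naturally written as a truncated spherical-harmonic expansion. The sketch below evaluates such an expansion up to degree l=1 with hand-picked coefficients; in practice a network would predict the coefficients per point, so everything here is illustrative.

```python
import math

# Hedged sketch: a grasp-quality function on the sphere as a truncated
# real spherical-harmonic expansion, evaluated at a unit direction (x, y, z).

def real_sh_basis(x, y, z):
    """Real spherical harmonics up to degree l=1 at unit direction (x, y, z)."""
    c0 = 0.5 * math.sqrt(1.0 / math.pi)      # Y_0^0 (constant)
    c1 = math.sqrt(3.0 / (4.0 * math.pi))    # prefactor for Y_1^{-1,0,1}
    return [c0, c1 * y, c1 * z, c1 * x]

def quality(coeffs, x, y, z):
    """Grasp quality at a direction: dot product of coefficients with the basis."""
    return sum(c * b for c, b in zip(coeffs, real_sh_basis(x, y, z)))

# Made-up coefficients: a constant term plus a Y_1^0 lobe peaked toward +z.
coeffs = [1.0, 0.0, 2.0, 0.0]
print(quality(coeffs, 0.0, 0.0, 1.0) > quality(coeffs, 0.0, 0.0, -1.0))  # True
```

Because rotating the input direction is equivalent to mixing coefficients within each degree l, this representation is what lets the grasp function transform equivariantly under SO(3).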

Retweeted by Helping Hands Lab @ Northeastern
Evangelos Chatzipantazis @EChatzipantazis:
Join us on Monday, October 14th at 2pm (UTC+4) at the #IROS2024 Workshop on Equivariant Robotics. A great lineup of keynote speakers will discuss how symmetry penetrates each and every subfield of robotics. Website and Zoom link: equirob2024.github.io

Retweeted by Helping Hands Lab @ Northeastern
Linfeng Zhao @LinfengZhaoZLF:
We have released the YouTube recording of our #RSS2024 workshop on "Geometric and Algebraic Structure in Robot Learning"! 🎥Youtube: youtube.com/playlist?list=… 🏠Webpage: sites.google.com/view/gas-rl-rs…
Quoting Linfeng Zhao @LinfengZhaoZLF:
🤖 Excited for #RSS2024? Don't miss our workshop on "Geometric and Algebraic Structure in Robot Learning"! Submit workshop papers (by 6/10 AOE) and dive into discussions on leveraging these structures for enhanced #robotics. 🚀🔍 Join us on 07/19 in Delft, Netherlands!

Retweeted by Helping Hands Lab @ Northeastern
Robert Platt @RobotPlatt:
Instead of inferring a desired object pose directly, this method "imagines" a reconstruction of the entire scene in the target pose. Surprisingly, we find that this improves sample efficiency, even though we are inferring more information.
Quoting Dian Wang @Dian_Wang_:
Check out our new work, Imagination Policy. We leverage a point cloud diffusion model to "imagine" a target scene, then use SVD to calculate rigid transformations that bring objects to the imagined scene as robot actions. More importantly, Imagination Policy is bi-equivariant!
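The SVD step the quoted tweet mentions, recovering a rigid transform that aligns current points to the imagined scene, is the textbook Kabsch algorithm. The sketch below shows that generic procedure; it is not the paper's exact implementation.

```python
import numpy as np

# Hedged sketch of the SVD-based alignment: given corresponding points from
# the current scene (P) and the imagined target scene (Q), Kabsch recovers
# the rigid transform (R, t) minimizing sum_i ||R @ P_i + t - Q_i||^2.

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping rows of P onto rows of Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Recover a known 90° rotation about z plus a translation.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
P = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 0]])
Q = P @ Rz.T + np.array([0.5, 0.0, 0.0])
R, t = kabsch(P, Q)
print(np.allclose(R, Rz), np.allclose(t, [0.5, 0.0, 0.0]))  # True True
```

Reading the predicted transform off generated point clouds this way is what lets the policy output actions as rigid motions rather than raw poses.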