Chen Wang

321 posts

Chen Wang

@chenwang_j

Final-year CS PhD @Stanford. Prev @GoogleDeepMind @NVIDIA @MIT_CSAIL. Robotics/Manipulation

Stanford, CA · Joined January 2021
853 Following · 3.4K Followers

Pinned Tweet
Chen Wang@chenwang_j·
Can we use wearable devices to collect robot data without actual robots? Yes! With a pair of gloves🧤! Introducing DexCap, a portable hand motion capture system that collects 3D data (point cloud + finger motion) for training robots with dexterous hands. Everything is open-sourced!
25 replies · 135 reposts · 624 likes · 233.9K views
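
To make the capture format concrete, here is a minimal, purely illustrative Python sketch of what one frame of glove-based data might look like, pairing a point cloud with finger joint angles and a wrist pose. The field names, array shapes, and 21-joint convention are assumptions for illustration, not DexCap's released format.

```python
import numpy as np

def make_frame(points: np.ndarray, finger_angles: np.ndarray,
               wrist_pose: np.ndarray) -> dict:
    """Bundle one hypothetical time step of hand motion-capture data."""
    assert points.ndim == 2 and points.shape[1] == 3  # (N, 3) point cloud, meters
    assert finger_angles.shape == (21,)               # assumed 21 joint angles, radians
    assert wrist_pose.shape == (4, 4)                 # homogeneous wrist transform
    return {"points": points,
            "finger_angles": finger_angles,
            "wrist_pose": wrist_pose}

# Accumulate frames into a trajectory an imitation-learning pipeline could consume.
trajectory = [make_frame(np.random.rand(1024, 3), np.zeros(21), np.eye(4))
              for _ in range(10)]
print(len(trajectory), "frames captured")
```
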
Chen Wang retweeted
Ruohan Zhang@RuohanZhang76·
I will join Northwestern University Computer Science as an Assistant Professor in Fall 2026! I am actively recruiting PhD students and seeking collaborations in robotics, human-robot interaction, brain-computer interfaces, cognitive science, societal impact of AI & automation, and AI for art & design. Please see the recruitment announcement on my personal website, and feel free to reach out!
75 replies · 204 reposts · 1.5K likes · 610.8K views
Haozhi Qi@HaozhiQ·
I’m incredibly honored and thrilled to receive the Lotfi A. Zadeh Prize🏆! Huge thanks to the EECS award committee, my advisors @YiMaTweets and @JitendraMalikCV, and all my amazing collaborators. Grateful for the support, mentorship, and inspiration throughout my PhD journey.
Yi Ma@YiMaTweets

It is great to know that my student Haozhi Qi @HaozhiQ, jointly supervised with Professor Jitendra Malik @JitendraMalikCV, is the recipient of the Lotfi A. Zadeh Prize for 2024-25, given by the EECS Department of UC Berkeley to graduating PhD students. Congratulations!

19 replies · 5 reposts · 197 likes · 16.1K views
Chen Wang retweeted
Yunfan Jiang@YunfanJiang·
🤖 Ever wondered what robots need to truly help humans around the house? 🏡 Introducing 𝗕𝗘𝗛𝗔𝗩𝗜𝗢𝗥 𝗥𝗼𝗯𝗼𝘁 𝗦𝘂𝗶𝘁𝗲 (𝗕𝗥𝗦)—a comprehensive framework for mastering mobile whole-body manipulation across diverse household tasks! 🧹🫧 From taking out the trash to laying out clothes and cleaning toilets—𝗕𝗥𝗦 equips robots to handle practical, everyday activities. 🌐 Explore more: behavior-robot-suite.github.io Let's dive in! 🤿🧵
18 replies · 128 reposts · 418 likes · 185.2K views
Chen Wang retweeted
Toru@ToruO_O·
Sim2Real RL for Vision-Based Dexterous Manipulation on Humanoids toruowo.github.io/recipe/ TL;DR: we train a humanoid robot with two multifingered hands to perform a range of dexterous manipulation tasks, achieving robust generalization and high performance without human demonstrations :D
11 replies · 62 reposts · 309 likes · 49.1K views
Chen Wang retweeted
Hao-Shu Fang@haoshu_fang·
A good hand can push intelligence development. Introducing Eyesight Hand, equipped with full-hand high-res tactile sensors and proprioceptive actuators. It is compliant, agile, and powerful. Good tactile sensing makes learning more efficient and robust. Shout out to Branden!
7 replies · 44 reposts · 232 likes · 23.4K views
Chen Wang retweeted
Simar Kareer@simar_kareer·
Introducing EgoMimic - just wear a pair of Project Aria @meta_aria smart glasses 👓 to scale up your imitation learning datasets! Check out what our robot can do. A thread below👇
10 replies · 52 reposts · 239 likes · 49.3K views
Chen Wang@chenwang_j·
@DJiafei Thanks Jiafei for first introducing AR for data collection! We learned a lot from AR2-D2 and EVE.
1 reply · 0 reposts · 1 like · 129 views
Jiafei Duan@DJiafei·
Really cool to see more work leveraging AR for collecting robot data without a robot! People often neglect that data collection is actually an HRI problem: how can we make the collection process more ubiquitous and scalable? This is a good demonstration of that, along with our work AR2-D2 and, more recently, EVE, being presented now at #UIST2024.
Sirui Chen@eric_srchen

How can we collect high-quality robot data without teleoperation? AR can help! Introducing ARCap, a fully open-sourced AR solution for collecting cross-embodiment robot data (gripper and dex hand) directly using human hands. 🌐:stanford-tml.github.io/ARCap/ 📜:arxiv.org/abs/2410.08464

2 replies · 3 reposts · 11 likes · 1.9K views
Chen Wang@chenwang_j·
Excited to introduce ARCap! We found that visual feedback is crucial for high-quality data collection, and AR can greatly help! We invited 20 novice users to each gather a small amount of data using only AR—no robot hardware required. The combined data can successfully train a robot policy!
Sirui Chen@eric_srchen

How can we collect high-quality robot data without teleoperation? AR can help! Introducing ARCap, a fully open-sourced AR solution for collecting cross-embodiment robot data (gripper and dex hand) directly using human hands. 🌐:stanford-tml.github.io/ARCap/ 📜:arxiv.org/abs/2410.08464

2 replies · 7 reposts · 65 likes · 5.9K views
Chen Wang retweeted
Tianyuan Dai@RogerDai1217·
Why hand-engineer digital twins when digital cousins are free? Check out ACDC: Automated Creation of Digital Cousins 👭 for Robust Policy Learning, accepted at @corl2024! 🎉 📸 Single image -> 🏡 Interactive scene ⏩ Fully automatic (no annotations needed!) 🦾 Robot policies deployed zero-shot in original scene 🌐: digital-cousins.github.io
11 replies · 40 reposts · 162 likes · 66.5K views
Chen Wang retweeted
AK@_akhaliq·
Synchronize Dual Hands for Physics-Based Dexterous Guitar Playing
discuss: huggingface.co/papers/2409.16…

We present a novel approach to synthesize dexterous motions for physically simulated hands in tasks that require coordination between the control of two hands with high temporal precision. Instead of directly learning a joint policy to control two hands, our approach performs bimanual control through cooperative learning where each hand is treated as an individual agent. The individual policies for each hand are first trained separately, and then synchronized through latent space manipulation in a centralized environment to serve as a joint policy for two-hand control. By doing so, we avoid directly performing policy learning in the joint state-action space of two hands with higher dimensions, greatly improving the overall training efficiency.

We demonstrate the effectiveness of our proposed approach in the challenging guitar-playing task. The virtual guitarist trained by our approach can synthesize motions from unstructured reference data of general guitar-playing practice motions, and accurately play diverse rhythms with complex chord pressing and string picking patterns based on the input guitar tabs that do not exist in the references. Along with this paper, we provide the motion capture data that we collected as the reference for policy training.
2 replies · 68 reposts · 318 likes · 26.8K views
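
To make the two-stage structure in the abstract concrete, below is a minimal PyTorch sketch: each hand gets its own policy trained separately, and a small module then couples the two in latent space to act as a joint policy. All module names, dimensions, and the additive synchronization rule are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class HandPolicy(nn.Module):
    """Stage 1: a per-hand policy with an explicit latent bottleneck."""
    def __init__(self, obs_dim=64, latent_dim=16, act_dim=24):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Linear(latent_dim, act_dim)

    def forward(self, obs):
        z = self.encoder(obs)
        return z, self.decoder(z)

class LatentSync(nn.Module):
    """Stage 2: adjust each hand's latent given the other's (trained after stage 1)."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.latent_dim = latent_dim
        self.mix = nn.Linear(2 * latent_dim, 2 * latent_dim)

    def forward(self, z_left, z_right):
        dz = self.mix(torch.cat([z_left, z_right], dim=-1))
        return (z_left + dz[..., :self.latent_dim],
                z_right + dz[..., self.latent_dim:])

left, right, sync = HandPolicy(), HandPolicy(), LatentSync()
obs_l, obs_r = torch.randn(1, 64), torch.randn(1, 64)
z_l, _ = left(obs_l)                 # per-hand policies run independently
z_r, _ = right(obs_r)
z_l, z_r = sync(z_l, z_r)            # coupling happens only in latent space
act_l, act_r = left.decoder(z_l), right.decoder(z_r)
print(act_l.shape, act_r.shape)      # torch.Size([1, 24]) each
```

The dimensionality argument is the point of the sketch: each policy only ever learns over its own observation and action space, while the coupling module operates on a 32-dimensional latent pair rather than the full joint state-action space.
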
Dhruv Shah@shahdhruv_·
Excited to share that I will be joining @Princeton as an Assistant Professor in ECE & Robotics next academic year! 🐯🤖 robo.princeton.edu I am recruiting PhD students for the upcoming admissions cycle. If you are interested in working with me, please consider applying.
103 replies · 47 reposts · 803 likes · 79.2K views
Xiaolong Wang@xiaolonw·
I am deeply honored to receive the 2024 J.K. Aggarwal Prize! It is humbling to follow in the footsteps of such an esteemed group of AI researchers. iapr.org/awards/icpr-aw…
22 replies · 3 reposts · 227 likes · 15.3K views
Elliott / Shangzhe Wu@elliottszwu·
I will be joining @Cambridge_Eng as an Assistant Professor in spring 2025, together with @_atewari. Clearly have been missing the good old UK rain after a wonderful year in California. Looking forward to opening this new chapter with brilliant colleagues and students!
69 replies · 16 reposts · 380 likes · 75.8K views
Chen Wang@chenwang_j·
We found that the relations between keypoints are a powerful way to represent tasks. What’s more exciting is that these keypoint relations can be formulated as constraint satisfaction problems, allowing us to use off-the-shelf optimization solvers to generate complex robot actions. Check out @wenlong_huang's thread for more on how we automate the keypoint constraints generation process and how this enables robots to perform reactive, bimanual, and long-horizon tasks!
Wenlong Huang@wenlong_huang

What structural task representation enables multi-stage, in-the-wild, bimanual, reactive manipulation? Introducing ReKep: LVM to label keypoints & VLM to write keypoint-based constraints, solve w/ optimization for diverse tasks, w/o task-specific training or env models. 🧵👇

0 replies · 8 reposts · 70 likes · 6.9K views
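
As a toy illustration of the constraint-satisfaction framing, the sketch below encodes one keypoint relation ("hold keypoint A 5 cm directly above keypoint B") as a cost and hands it to an off-the-shelf solver to recover an end-effector displacement. The keypoints, the relation, and the solver choice are illustrative assumptions, not ReKep's actual pipeline.

```python
import numpy as np
from scipy.optimize import minimize

kp_a = np.array([0.30, 0.10, 0.20])  # keypoint rigidly attached to the grasped object
kp_b = np.array([0.25, 0.05, 0.15])  # target keypoint in the scene

def constraint_cost(delta: np.ndarray) -> float:
    """Penalize deviation from the desired keypoint relation after moving by `delta`."""
    moved_a = kp_a + delta
    target = kp_b + np.array([0.0, 0.0, 0.05])  # 5 cm above keypoint B
    return float(np.sum((moved_a - target) ** 2))

result = minimize(constraint_cost, x0=np.zeros(3), method="L-BFGS-B")
print("end-effector displacement:", result.x.round(3))
```

Because each relation is just a differentiable cost over keypoint positions, richer tasks can stack several such terms and let the same solver trade them off.
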
Chen Wang@chenwang_j·
We are hosting the dexterous manipulation workshop at CoRL this year 🤖! We'll dive into topics like visual & tactile perception, skill learning, and control. Don’t miss the opportunity to share your amazing works and participate! dex-manipulation.github.io/corl2024/
Haozhi Qi@HaozhiQ

🎺 Announcing our CoRL 2024 workshop “Learning Robot Fine and Dexterous Manipulation: Perception and Control” in Munich. Join us to hear from an incredible lineup of speakers! And don’t miss the opportunity to submit your work and participate! Check out: dex-manipulation.github.io/corl2024/

0 replies · 3 reposts · 40 likes · 4.2K views
Chen Wang retweeted
Google DeepMind@GoogleDeepMind·
Meet our AI-powered robot that’s ready to play table tennis. 🤖🏓 It’s the first agent to achieve amateur human-level performance in this sport. Here’s how it works. 🧵
128 replies · 772 reposts · 3.8K likes · 784.3K views
Zhijian Liu@zhijianliu_·
📢 Excited to share that I’ll be joining @UCSanDiego @HDSIUCSD as an Assistant Professor in January 2026! My lab will focus on efficient AI. I'm recruiting PhD students from HDSI/CSE in the Fall 2024 cycle and also looking for RAs/interns! For more info, see zhijianliu.com.
39 replies · 35 reposts · 447 likes · 91.5K views
Dhruv Shah@shahdhruv_·
I “defended” my thesis earlier today — super grateful to @svlevine and everyone at @berkeley_ai for their support through the last 5 years! 🐻 Excited to be joining @GoogleDeepMind and continue the quest for bigger, better, smarter robot brains 🤖🧠
58 replies · 10 reposts · 495 likes · 37.8K views
Chen Wang retweeted
Qiayuan Liao@qiayuanliao·
Excited to share a new humanoid robot platform we’ve been working on. Berkeley Humanoid is a reliable and low-cost mid-scale research platform for learning-based control. We demonstrate the robot walking on various terrains and dynamic hopping with a simple RL controller.
17 replies · 77 reposts · 360 likes · 80.3K views