Peter Chen

150 posts

@peterxichen

Building general purpose robotics at Amazon FAR. Previously Covariant CEO and Co-Founder, @OpenAI, @UCBerkeley PhD.

Joined December 2017
2.5K Following · 4K Followers
Peter Chen retweeted
Chen Feng @simbaforrest
🤖We are hiring multiple Summer '26 Research Interns at @amazon FAR to work on open-world navigation and robot foundation models, especially in neural rendering & simulation, predictive world models, reasoning & agency, real-world evaluation, and long-term autonomy!
[3 images attached]
13 replies · 17 reposts · 391 likes · 36.6K views
Peter Chen retweeted
Zhen Wu @zhenkirito123
Can humanoids perform agile, autonomous, long-horizon parkour—based on what they see in the world? We present 𝗣𝗲𝗿𝗰𝗲𝗽𝘁𝗶𝘃𝗲 𝗛𝘂𝗺𝗮𝗻𝗼𝗶𝗱 𝗣𝗮𝗿𝗸𝗼𝘂𝗿 (𝗣𝗛𝗣): a framework that chains dynamic human skills using onboard depth perception for long-horizon traversal. 1/6
23 replies · 135 reposts · 692 likes · 137.4K views
Peter Chen retweeted
Carlo Sferrazza @carlo_sferrazza
We introduce a framework that enables robust, long-horizon, bi-directional locomotion over complex terrains by effectively leveraging a single policy with dual depth-camera streams, without the need for LiDAR-based elevation maps. Check out @Yuanhang__Zhang's tweet for architecture and implementation details, and the website for full, uncut rollouts.
Quoting Yuanhang Zhang @Yuanhang__Zhang (full tweet below)
1 reply · 7 reposts · 45 likes · 9.6K views
Peter Chen retweeted
Yuanhang Zhang @Yuanhang__Zhang
Robust humanoid perceptive locomotion is still underexplored, especially when different cameras see different terrains, paths get narrow, and payloads disturb balance. Introducing RPL, tackling this with one unified policy:
• Challenging terrains (slopes, stairs, and stepping stones)
• Multiple directions
• Payloads
Trained in sim. Validated long-horizon in the real world. Watch the robot walk it all🦿 Details below👇
5 replies · 57 reposts · 275 likes · 56.5K views
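To make the setup above concrete, here is a minimal sketch of a unified policy that fuses two depth-camera streams with proprioception. This is not the released RPL architecture; the encoder design, layer sizes, observation dimensions, and 23-dim action space are all illustrative assumptions.

import torch
import torch.nn as nn

class DepthEncoder(nn.Module):
    # Small CNN mapping one depth image (B, 1, 64, 64) to a feature vector.
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ELU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2), nn.ELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        return self.net(depth)

class DualDepthPolicy(nn.Module):
    # One unified policy: two depth streams + proprioception -> joint targets.
    def __init__(self, proprio_dim: int = 48, action_dim: int = 23):
        super().__init__()
        self.enc_a = DepthEncoder()  # e.g., forward-facing camera
        self.enc_b = DepthEncoder()  # e.g., downward-facing camera
        self.head = nn.Sequential(
            nn.Linear(2 * 128 + proprio_dim, 512), nn.ELU(),
            nn.Linear(512, 256), nn.ELU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, depth_a, depth_b, proprio):
        z = torch.cat([self.enc_a(depth_a), self.enc_b(depth_b), proprio], dim=-1)
        return self.head(z)

policy = DualDepthPolicy()
actions = policy(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64),
                 torch.randn(4, 48))
print(actions.shape)  # torch.Size([4, 23])

The sketch only covers the forward pass; a real system would train such a policy in massively parallel simulation with domain randomization before sim-to-real transfer.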
Peter Chen retweeted
Younggyo Seo @younggyoseo
Tired of waiting hours for humanoids to learn to walk? Our new technical report shows how to train sim-to-real humanoid locomotion in 15 minutes with FastSAC and FastTD3! The full pipeline is open-source in the newly released Holosoma codebase. Thread 🧵
6 replies · 39 reposts · 182 likes · 35.2K views
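For context, here is the generic TD3-style update that this line of work accelerates, as a minimal PyTorch sketch. It is not the FastTD3 or Holosoma code; the network sizes, interfaces, and hyperparameters are assumptions, and the reported wall-clock gains come from running such updates against thousands of parallel simulated robots, which the sketch does not show.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS, ACT = 48, 12  # assumed observation/action dimensions

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 256), nn.ReLU(), nn.Linear(256, out))

actor = mlp(OBS, ACT)
critic1, critic2 = mlp(OBS + ACT, 1), mlp(OBS + ACT, 1)
targ_actor, targ_c1, targ_c2 = map(copy.deepcopy, (actor, critic1, critic2))
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
critic_opt = torch.optim.Adam(
    [*critic1.parameters(), *critic2.parameters()], lr=3e-4)

def q(net, obs, act):
    return net(torch.cat([obs, act], dim=-1))

def td3_step(batch, gamma=0.99, noise=0.2, clip=0.5):
    obs, act, rew, nxt, done = batch
    with torch.no_grad():
        eps = (torch.randn_like(act) * noise).clamp(-clip, clip)
        nxt_act = (targ_actor(nxt).tanh() + eps).clamp(-1, 1)
        # Clipped double-Q target combats value overestimation.
        y = rew + gamma * (1 - done) * torch.min(
            q(targ_c1, nxt, nxt_act), q(targ_c2, nxt, nxt_act))
    critic_loss = (F.mse_loss(q(critic1, obs, act), y)
                   + F.mse_loss(q(critic2, obs, act), y))
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Deterministic policy gradient through the first critic.
    actor_loss = -q(critic1, obs, actor(obs).tanh()).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Delayed policy/target-network (polyak) updates omitted for brevity.

B = 256  # one update on a fake batch
td3_step((torch.randn(B, OBS), torch.rand(B, ACT) * 2 - 1,
          torch.randn(B, 1), torch.randn(B, OBS), torch.zeros(B, 1)))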
Peter Chen retweeted
Rocky Duan @rocky_duan
Excited to share this latest work from our team! Holosoma is now our go-to option for humanoid research at FAR, and we will continue to maintain it and add new capabilities in the future. We're also hiring! Research: amazon.jobs/en/jobs/285057… Software: amazon.jobs/en/jobs/304305…
Quoting Carlo Sferrazza @carlo_sferrazza (full tweet below)
0 replies · 12 reposts · 76 likes · 19.6K views
Peter Chen retweeted
Carlo Sferrazza @carlo_sferrazza
Sim-to-real learning for humanoid robots is a full-stack problem. Today, Amazon FAR is releasing a full-stack solution: Holosoma. To accelerate research, we are open-sourcing a complete codebase covering multiple simulation backends, training, retargeting, and real-world inference.
20 replies · 133 reposts · 599 likes · 209K views
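To make "full-stack" concrete, here is the shape of such a pipeline as a toy Python skeleton: human motion goes through retargeting, a policy is trained against a simulation backend, and the result runs in a real-world inference loop. Every name and signature here is invented for illustration and is not Holosoma's actual API.

from typing import Callable, List

def retarget(human_clips: List[str]) -> List[dict]:
    # Map human motion clips to robot-feasible reference trajectories.
    return [{"clip": c, "joint_refs": []} for c in human_clips]

def train_policy(references: List[dict], sim_backend: str) -> Callable:
    # Train an RL tracking policy against the chosen simulator backend.
    def policy(obs: dict) -> list:
        return [0.0] * 23  # placeholder: e.g., 23 joint-position targets
    return policy

def run_on_robot(policy: Callable, steps: int = 3) -> None:
    # Minimal real-world inference loop: observe -> act at a fixed rate.
    for t in range(steps):
        obs = {"t": t}         # would come from robot sensors
        action = policy(obs)   # would be sent to the joint controllers
        print(f"step {t}: sending {len(action)} joint targets")

refs = retarget(["walk.bvh", "box_pickup.bvh"])
pi = train_policy(refs, sim_backend="mujoco")
run_on_robot(pi)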
Peter Chen retweeted
Pieter Abbeel @pabbeel
Open-source: complete codebase covering multiple simulation backends, training, retargeting, and real-world inference. Infra built for humanoid, but also readily modified for quadruped (also included). Lots of infra gems/conveniences we rely on consistently. Hopefully equally helpful for others.
Quoting Carlo Sferrazza @carlo_sferrazza (full tweet above)
11 replies · 54 reposts · 474 likes · 75.9K views
Peter Chen retweeted
Yanjie Ze @ZeYanjie
14/n Visuomotor policy learning... but can also use feet to kick a T-shaped box to the target region, i.e., Humanoid Kick-T.
1 reply · 2 reposts · 40 likes · 10.2K views
Peter Chen retweeted
Yanjie Ze @ZeYanjie
Excited to introduce TWIST2, our next-generation humanoid data collection system. TWIST2 is portable (use anywhere, no MoCap), scalable (100+ demos in 15 mins), and holistic (unlock major whole-body human skills). Fully open-sourced: yanjieze.com/TWIST2
24 replies · 109 reposts · 498 likes · 98.5K views
Peter Chen retweeted
Atli Kosson @AtliKosson
Why override µP? Because its core assumptions only hold very early in training! In practice wide models quickly stop being more sensitive to weight updates than smaller models! This is caused by changes in the geometric alignment of updates and layer inputs over training. 🧵6/8
[1 image attached]
2 replies · 10 reposts · 70 likes · 42K views
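One way to make the alignment claim concrete, in notation that is ours rather than the paper's: an update \Delta W to a layer with input x changes the layer's output by \Delta W x, which we can write as

\[
\lVert \Delta W\, x \rVert \;=\; \alpha\, \lVert \Delta W \rVert_F\, \lVert x \rVert,
\qquad \alpha \in [0, 1],
\]

where \alpha measures the geometric alignment between the update and the incoming activations. µP's width-scaling rules are derived assuming \alpha stays at its (large) early-training value as width grows; if \alpha decays over training, the effective output change in wide models decays with it, and their extra sensitivity to weight updates disappears.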
Peter Chen retweeted
Zhen Wu @zhenkirito123
I've long wondered if we can make a humanoid robot do a 𝘄𝗮𝗹𝗹𝗳𝗹𝗶𝗽 - and we just made it happen by leveraging 𝗢𝗺𝗻𝗶𝗥𝗲𝘁𝗮𝗿𝗴𝗲𝘁 with BeyondMimic tracking! This came after our original OmniRetarget experiments, with only minor tweaks to RL training: relaxing a termination threshold and removing one reward term. The policy achieved a 𝟱/𝟱 success rate in our real-world experiments, showing the strength of high-quality, interaction-preserving motion retargeting combined with BeyondMimic’s minimal RL tracking. Here is the updated arXiv: arxiv.org/abs/2509.26633 (In Sec. V. A)
Quoting Zhen Wu @zhenkirito123:
Humanoid motion tracking performance is greatly determined by retargeting quality! Introducing 𝗢𝗺𝗻𝗶𝗥𝗲𝘁𝗮𝗿𝗴𝗲𝘁🎯, generating high-quality interaction-preserving data from human motions for learning complex humanoid skills with 𝗺𝗶𝗻𝗶𝗺𝗮𝗹 RL:
- 5 rewards,
- 4 DR terms,
- Proprio. ONLY,
- NO history/curriculum.
Ready for agile, human-like 🤖? (Best with 🎧) 🔗 omniretarget.github.io 🎥 1/9
176 replies · 537 reposts · 3.9K likes · 1.1M views
Peter Chen retweeted
Zhen Wu @zhenkirito123
Our grand finale: A complex, long-horizon dynamic sequence, all driven by a proprioceptive-only policy (no vision/LIDAR)! In this task, the robot carries a chair to a platform, uses it as a step to climb up, then leaps off and performs a parkour-style roll to absorb the landing. This pushes the boundaries of agile, human-like loco-manipulation! 7/9
5 replies · 26 reposts · 156 likes · 31.5K views
Peter Chen retweeted
Pieter Abbeel @pabbeel
Very excited to start sharing some of the work we have been doing at Amazon FAR. In this work we present OmniRetarget, which can generate high-quality interaction-preserving data from human motions for learning complex humanoid skills. High-quality retargeting really helps the reinforcement learning. Why? The control-policy optimization landscape is much nicer for (near-)feasible trajectories than for trajectories with artifacts like foot skating or ground penetration.
Quoting Zhen Wu @zhenkirito123 (OmniRetarget launch tweet, quoted above)
15 replies · 41 reposts · 342 likes · 83.2K views
Peter Chen retweeted
Guanya Shi @GuanyaShi
High-quality motion reference data is key for humanoid skill learning 🤖🕺💃 A natural idea is to leverage human motions and "translate" them to humanoid motions, a process known as retargeting. For interaction-rich tasks such as scene interaction and loco-manipulation, retargeting is challenging: it must ensure motion consistency, smoothness, kinematic feasibility (no artifacts like penetration or foot skating), and scalability (one framework can handle thousands of motions).

Excited to release OmniRetarget, a scalable retargeting method with a 4-hour high-quality humanoid motion dataset for interaction-rich tasks. OmniRetarget takes an interaction-preserving perspective: we optimize Laplacian deformation between source and target interaction meshes while enforcing kinematic constraints, producing consistent, smooth, and feasible trajectories at scale. Even better, OmniRetarget can efficiently augment motions by varying terrains, objects, and initial poses.

This high-quality interaction-preserving retargeting enables a minimal RL setup to execute long-horizon (up to 30s) agile, interaction-rich skills. All tasks in the video share just 5 rewards, 4 domain randomization terms, and rely only on proprioception. More details: omniretarget.github.io
Quoting Zhen Wu @zhenkirito123 (OmniRetarget launch tweet, quoted above)
3 replies · 11 reposts · 84 likes · 13.3K views
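A sketch of the optimization described above, in our own notation (the paper's exact formulation may differ): let V_t be the vertices of the source interaction mesh (human, object, terrain) at time t, let V'_t(q_t) be the corresponding vertices induced by robot joint configuration q_t via forward kinematics, and let L be the mesh Laplacian. Interaction-preserving retargeting can then be posed as

\[
\min_{q_{1:T}} \sum_{t=1}^{T} \bigl\lVert L\, V'_t(q_t) - L\, V_t \bigr\rVert_F^2
\quad \text{s.t.} \quad q_{\min} \le q_t \le q_{\max},\ \text{non-penetration and contact constraints}.
\]

Matching Laplacian coordinates preserves each vertex's position relative to its neighbors, so robot-object-terrain spatial relationships stay consistent even though the embodiment changes.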