Ying Yuan

30 posts

Ying Yuan

@Ying_yyyyyyyy

First-year PhD student @CMU_Robotics | CS Undergraduate @Tsinghua_Uni, Yao Class | Intern @UCSanDiego

Pittsburgh, PA · Joined March 2023
150 Following · 354 Followers
Pinned Tweet
Ying Yuan @Ying_yyyyyyyy
Finally we made it work through open-loop replay of sim actions! Before that, the Sim2Real process was always a headache. That said, simulation training is still pretty valuable as a way of efficiently exploring possible solutions to a dynamic, contact-rich task like “pen spinning”.
Haozhi Qi @HaozhiQ

When I started my first project on in-hand manipulation, I thought it would be super cool but also quite challenging to make my robot hands spin pens. After almost 2.5 years of effort in this line of research, we have finally succeeded in making our robot hand "spin pens."

2 replies · 5 reposts · 57 likes · 9.5K views
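For readers curious what “open-loop replay of sim actions” means in practice: record the joint-target sequence the trained policy produces in simulation, then stream it to the real hand at the sim control rate with no sensory feedback. A minimal Python sketch of that idea, where the hand interface and the saved-trajectory format are hypothetical placeholders:

import time
import numpy as np

def replay_sim_actions(hand, actions_path, hz=20.0):
    # Open-loop replay: send recorded sim joint targets to the real hand
    # at a fixed rate, ignoring all sensor feedback along the way.
    # `hand.set_joint_targets` and the .npy layout are assumptions, not a real API.
    actions = np.load(actions_path)  # shape (T, num_joints), saved from the sim rollout
    dt = 1.0 / hz                    # match the control frequency used in simulation
    for a in actions:
        t0 = time.time()
        hand.set_joint_targets(a)
        time.sleep(max(0.0, dt - (time.time() - t0)))  # hold the control rate

If the dynamics gap is small enough, the replayed trajectory reproduces the sim behavior; any drift signals where sim and real diverge.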
Ying Yuan @Ying_yyyyyyyy
Because of you, I couldn’t charge and ended up dealing with a ticket. If your charger’s down, move to another spot - don’t make it everyone else’s issue.
0 replies · 0 reposts · 0 likes · 103 views
Ying Yuan @Ying_yyyyyyyy
To the Tesla driver who parked at @CarnegieMellon East Campus Garage today: You don’t get to hijack a charging port from across the garage just because yours wasn’t working.
1 reply · 1 repost · 1 like · 223 views
Ying Yuan retweeted
Unitree @UnitreeRobotics
Unitree Spring Festival Gala Robots: a Full Release of Additional Details 🥳 Dozens of G1 robots achieved the world’s first fully autonomous humanoid robot cluster Kung Fu performance (with quick movement), pushing motion limits and setting multiple world firsts! H2 made striking appearances at both the Beijing main venue and the Yiwu sub-venue, clad in the Monkey King’s heavy armor and riding a “somersault cloud” played by B2W quadruped robot dogs, delivering New Year blessings from the clouds.
1.2K replies · 4.4K reposts · 27.1K likes · 27.9M views
Ying Yuan retweeted
Hongyi Chen @chen_hongyi_
How far can we push dexterous robot manipulation with human video-only supervision and minimal assumptions? 🚫 No teleop. 🚫 No wearables. 🚫 No external sensors. 🚫 No robot demos. Introducing VIDEOMANIP: 🎥 Just monocular RGB, 🌍 in-the-wild human video → dexterous robot manipulation 🤚[1/6]
5 replies · 35 reposts · 162 likes · 23K views
Ying Yuan retweeted
BAAI @BAAIBeijing
Pushing the boundaries of humanoids with THOR: Towards Human-level whOle-body Reaction. BAAI THOR is coming soon. #Robotics #AI #WholeBodyControl
16 replies · 88 reposts · 418 likes · 94.4K views
Ying Yuan retweeted
Yishu Li @LisaYishu
A closed door looks the same whether it pushes or pulls. Two identical-looking boxes might have different centers of mass. How should robots act when a single visual observation isn't enough? Introducing HAVE 🤖, our method that reasons about past interactions online! #CORL2025
1 reply · 19 reposts · 42 likes · 7.5K views
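A toy illustration of why remembering past interactions helps (this is not HAVE's actual method, just the underlying intuition): keep a belief over the hidden property and update it with Bayes' rule after each attempt. All probabilities below are invented for the example.

def update_belief(p_push, opened, p_open_if_push=0.9, p_open_if_pull=0.1):
    # Posterior P(door is push-type) after observing one push attempt.
    like_push = p_open_if_push if opened else 1.0 - p_open_if_push
    like_pull = p_open_if_pull if opened else 1.0 - p_open_if_pull
    return like_push * p_push / (like_push * p_push + like_pull * (1.0 - p_push))

belief = 0.5                                  # push and pull doors look identical
belief = update_belief(belief, opened=False)  # one failed push...
print(belief)                                 # ~0.1, so the robot should try pulling

After a single failed push, the belief collapses toward “pull-type”, which a memoryless policy conditioned only on the current image could never express.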
Ying Yuan retweeted
Yufei Wang @YufeiWang25
Introducing ArticuBot🤖at #RSS2025, in which we learn a single policy for manipulating diverse articulated objects across 3 robot embodiments in different labs, kitchens & lounges, achieved via large-scale simulation and hierarchical imitation learning. articubot.github.io 🧵
3 replies · 31 reposts · 89 likes · 6.9K views
Ying Yuan retweeted
Tony Tao @_tonytao_
Training robots for the open world needs diverse data. But collecting robot demos in the wild is hard!
Presenting DexWild 🙌🏕️
Human data collection system that works in diverse environments, without robots 💪
🦾 Human + Robot Cotraining pipeline that unlocks generalization 🧵👇
10 replies · 76 reposts · 333 likes · 96.3K views
Haozhi Qi @HaozhiQ
I’m incredibly honored and thrilled to receive the Lotfi A. Zadeh Prize🏆! Huge thanks to the EECS award committee, my advisors @YiMaTweets and @JitendraMalikCV, and all my amazing collaborators. Grateful for the support, mentorship, and inspiration throughout my PhD journey.
Yi Ma @YiMaTweets

It is great to know that my student Haozhi Qi @HaozhiQ, jointly supervised with Professor Jitendra Malik @JitendraMalikCV, is the recipient of the Lotfi A. Zadeh Prize for 2024-25, given by the EECS Department of UC Berkeley for graduating PhD students. Congratulations!

19 replies · 5 reposts · 197 likes · 16.1K views
Ying Yuan retweeted
Xialin He @Xialin_He
🤖 Want to train a humanoid to stand up safely and smoothly? Try HumanUP: Sim-to-Real Humanoid Getting-Up Policy Learning! 🚀
✨ HumanUP is a two-stage RL framework that enables humanoid robots to stand up from any pose (facing up or down) with stability and safety.
Check out our new project: Learning Getting-up Policies for Real-world Humanoid Robots! A huge thanks to our team for making this possible! 💪🎉 @RunpeiDong @C___eric417 @_saurabhg
Project page: humanoid-getup.github.io
Paper: arxiv.org/abs/2502.12152
5 replies · 35 reposts · 170 likes · 28.2K views
Ying Yuan @Ying_yyyyyyyy
Can’t wait to try it out!
Zhou Xian @zhou_xian_

Everything you love about generative models — now powered by real physics!

Announcing the Genesis project — after a 24-month large-scale research collaboration involving over 20 research labs — a generative physics engine able to generate 4D dynamical worlds, powered by a physics simulation platform designed for general-purpose robotics and physical AI applications.

Genesis's physics engine is developed in pure Python, while being 10-80x faster than existing GPU-accelerated stacks like Isaac Gym and MJX. It delivers a simulation speed ~430,000× faster than real time, and takes only 26 seconds to train a robotic locomotion policy transferrable to the real world on a single RTX 4090 (see tutorial: genesis-world.readthedocs.io/en/latest/user…).

The Genesis physics engine and simulation platform is fully open source at github.com/Genesis-Embodi…. We'll gradually roll out access to our generative framework in the near future.

Genesis implements a unified simulation framework all from scratch, integrating a wide spectrum of state-of-the-art physics solvers, allowing simulation of the whole physical world in a virtual realm with the highest realism.

We aim to build a universal data engine that leverages an upper-level generative framework to autonomously create physical worlds, together with various modes of data, including environments, camera motions, robotic task proposals, reward functions, robot policies, character motions, fully interactive 3D scenes, open-world articulated assets, and more, aiming towards fully automated data generation for robotics, physical AI and other applications.

Open Source Code: github.com/Genesis-Embodi…
Project webpage: genesis-embodied-ai.github.io
Documentation: genesis-world.readthedocs.io

1/n

0 replies · 0 reposts · 1 like · 514 views
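For reference, a minimal Genesis “hello world” looks roughly like the snippet below (paraphrased from the project's quickstart; consult the docs linked above for the current API, as details may have changed):

import genesis as gs

gs.init(backend=gs.gpu)  # use gs.cpu if no GPU is available

scene = gs.Scene(show_viewer=False)
scene.add_entity(gs.morphs.Plane())  # ground plane
scene.add_entity(gs.morphs.MJCF(file="xml/franka_emika_panda/panda.xml"))  # sample asset shipped with Genesis

scene.build()  # compile the scene once before stepping

for _ in range(1000):
    scene.step()  # advance the physics simulation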
Ying Yuan retweeted
Jia-Bin Huang @jbhuang0604
As my kids are singing APT non-stop these days, I did a bit of reverse engineering of the APT music video and tried to understand why the MV is so addictive. Here is what I learned.
24 replies · 81 reposts · 865 likes · 254.7K views
Ying Yuan retweeted
Binghao Huang @binghao_huang
Want to use tactile sensing but not familiar with hardware? No worries! Just follow the steps, and you’ll have a high-resolution tactile sensor ready in 30 mins! It’s as simple as making a sandwich! 🥪
🎥 YouTube Tutorial: youtube.com/watch?v=8eTpFY…
🛠️ Open Source & Hardware Guide: github.com/binghao-huang/…
🌐 Project Website: binghao-huang.github.io/3D-ViTac/
Let’s make robotics more tactile! 🤖 #Robotics #TactileSensing #AI #opensource
6 replies · 24 reposts · 159 likes · 31.7K views
Ying Yuan retweeted
Eric Cai @eywcai
Introducing TAX3D, in which we extend relative-placement methods to generalizable deformable manipulation! #CoRL2024 (1/🧵)
Our approach generalizes to:
- Diverse unseen objects
- Diverse unseen configurations
- Multimodal placements
2 replies · 15 reposts · 49 likes · 7.4K views
Ying Yuan retweeted
Yanjie Ze @ZeYanjie
We’ve seen humanoid robots walk around for a while, but when will they actually help with useful tasks in daily life? The challenge here is the diversity and complexity of real-world scenes. Our new work tackles this problem via 3D visuomotor policy learning.
Using data from only 1 scene, our Improved 3D Diffusion Policy (iDP3) enables a full-sized humanoid robot to autonomously pick & place objects, pour water, and wipe tables in the wild open world. (And all these skills are useful, right?)
Web: humanoid-manipulation.github.io
Fully open-sourced code: github.com/YanjieZe/Impro…
8 replies · 71 reposts · 345 likes · 75.1K views
Ying Yuan retweeted
Carl Qi @carl_qi98
How can an autonomous agent leverage novel tools to cut, roll, and scoop a piece of dough, given just a few tool shapes for training? Our method generates a “desired tool shape” that performs the motion and then matches the real tool to the generated shape. sites.google.com/view/toolgen
1 reply · 13 reposts · 79 likes · 21.4K views
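One simple way to realize the “match the real tool to the generated shape” step (an illustration, not necessarily the metric the paper uses) is nearest-neighbor selection under chamfer distance between point clouds:

import numpy as np

def chamfer(a, b):
    # Symmetric chamfer distance between point clouds a: (N, 3) and b: (M, 3).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def match_tool(generated_pts, tool_library):
    # tool_library: dict mapping a tool name to its (M, 3) scanned point cloud.
    # Returns the name of the real tool closest to the generated shape.
    return min(tool_library, key=lambda name: chamfer(generated_pts, tool_library[name]))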
Ying Yuan @Ying_yyyyyyyy
Really learned a lot from this work, and lucky to have such a great team working together on this fun project!
0 replies · 0 reposts · 6 likes · 445 views