Jiageng Mao

89 posts

@PointsCoder

CS PhD Student at @USC. Student Researcher at @GoogleDeepMind. Recipient of @NVIDIA & @Qualcomm Fellowships. Prev. @NVIDIAAI.

Los Angeles, CA · Joined July 2021

432 Following · 696 Followers
Jiageng Mao @PointsCoder
Back in 2024, I talked to my labmates about the idea of leveraging massive egocentric human videos for humanoid manipulation. Now we've made it happen! Happy to share our humanoid foundation model, Psi-0!
Yue Wang @yuewang314

Introducing Ψ₀ (psi-lab.ai/Psi0) — an open foundation model for universal humanoid loco-manipulation.
🏆 Outperforms GR00T N1.6 by 40%+ in overall success rate
📉 Uses only ~10% of the pre-training data
📦 Fully open-source: model, data, code, and deployment pipeline
1/10

1 reply · 2 reposts · 17 likes · 1.2K views
Jiageng Mao retweeted
Yue Wang @yuewang314
Introducing the USC Physical Superintelligence (PSI) Lab (psi-lab.ai). We are rebranding to better reflect our current focus. From here on out, we are tackling one thing: solving robotics and physical intelligence with every model, every bug, and every line of code. And yes, we are hiring at all levels, especially PhDs in this cycle and potential postdocs who are excited about robotics. We hope you can join us on this journey! 1/9
9 replies · 39 reposts · 349 likes · 24.8K views
Jiageng Mao @PointsCoder
I am incredibly honored to receive the NVIDIA Graduate Fellowship this year! I'd like to express my sincere gratitude to @yuewang314 and @daniel_t_seita for their tremendous support during the application. I'd also like to thank the AV group at @NVIDIAResearch: @drmapavone, @iamborisi, @Boyiliee, @Yuxiao_Chen_, @yan_wang_9, Yurong You, @ChaoweiX, @danfei_xu for hosting me last and next summer. Their guidance has been instrumental to my research journey. It's inspiring to witness @nvidia's continuous commitment to pioneering the future of Physical AI. 🤖🚀 blogs.nvidia.com/blog/graduate-… #NVIDIAFellowship #PhysicalAI #Robotics #AutonomousVehicles
7 replies · 0 reposts · 64 likes · 4.4K views
Jiageng Mao retweeted
Scott Zhiyuan Gao @ScottZhiyuanGao
See you this Friday, Dec 5, 4:30–7:30 PM PST at Exhibit Halls C/D/E, Booth #4809. Huge thanks to my amazing coauthors and advisors!
Jiageng Mao @PointsCoder

"Who has seen the wind? Neither I nor you: But when the leaves hang trembling, the wind is passing through." @ScottZhiyuanGao will present our work "Seeing the Wind from a Falling Leaf" at #NeurIPS2025! We teach AI to "see" invisible force fields directly from video using differentiable physics. Project Page: chaoren2357.github.io/seeingthewind/ Paper: arxiv.org/abs/2512.00762

0 replies · 1 repost · 4 likes · 863 views
Jiageng Mao @PointsCoder
(4/5) Physics-based Video Editing
Once we capture the wind field, we can seamlessly insert new objects into the original video. These virtual objects interact with the real, estimated wind, creating physically consistent simulations.
0 replies · 0 reposts · 3 likes · 251 views
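To make the editing step above concrete, here is a minimal, purely illustrative sketch (not the paper's code): assuming a wind force has already been estimated from the video, as in the (3/5) sketch further down, a newly inserted virtual object can be stepped through that same force field with simple semi-implicit Euler integration. All names and constants below are assumptions.

```python
import numpy as np

DT, STEPS = 0.05, 60                  # illustrative time step (s) and horizon
MASS = 0.005                          # mass of the inserted virtual object (kg)
wind_force = np.array([0.02, 0.0])    # assumed: already estimated from video

# Semi-implicit Euler: the new object feels the *estimated* wind plus gravity,
# so its motion stays physically consistent with the original footage.
pos = np.array([0.0, 1.5])            # drop the virtual object into the scene
vel = np.zeros(2)
trajectory = []
for _ in range(STEPS):
    force = wind_force + np.array([0.0, -9.81 * MASS])
    vel = vel + DT * force / MASS
    pos = pos + DT * vel
    trajectory.append(pos.copy())
# `trajectory` can then be rendered back into the original video frames.
```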
Jiageng Mao @PointsCoder
(3/5) We jointly model Geometry, Physics, & Interactions. By integrating a differentiable physics simulator, we use backpropagation to minimize the error between simulated and observed motion, effectively "seeing" the force field.
[image]
0 replies · 0 reposts · 1 like · 212 views
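A minimal sketch of this backprop-through-simulation idea, assuming a toy 2-D point-mass "leaf" and a single constant wind vector (the simulator, names, and constants are all illustrative, not the paper's implementation): gradient descent on the wind minimizes the error between simulated and observed trajectories.

```python
import jax
import jax.numpy as jnp

DT, STEPS, MASS = 0.05, 40, 0.01  # time step (s), horizon, leaf mass (kg)

def simulate(wind):
    """Differentiable point-mass 'leaf' under gravity plus a constant wind force."""
    def step(state, _):
        pos, vel = state
        force = wind + jnp.array([0.0, -9.81 * MASS])  # wind + gravity
        vel = vel + DT * force / MASS
        pos = pos + DT * vel
        return (pos, vel), pos
    init = (jnp.zeros(2), jnp.zeros(2))
    _, trajectory = jax.lax.scan(step, init, None, length=STEPS)
    return trajectory  # (STEPS, 2) positions over time

def loss(wind, observed):
    # Error between simulated and observed motion; gradients flow back
    # through every simulation step, effectively "seeing" the force field.
    return jnp.mean((simulate(wind) - observed) ** 2)

# Stand-in for motion observed in video: run the simulator with a hidden wind.
true_wind = jnp.array([0.03, 0.01])
observed = simulate(true_wind)

# Gradient descent on the wind vector, backpropagating through the simulator.
wind = jnp.zeros(2)
grad_fn = jax.jit(jax.grad(loss))
for _ in range(300):
    wind = wind - 1e-4 * grad_fn(wind, observed)
print(wind)  # converges toward true_wind
```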
Jiageng Mao @PointsCoder
(2/5) Humans have "Intuitive Physics"—we watch a leaf twirl and know the wind speed. But for computer vision, this is incredibly hard. We propose an end-to-end Inverse Graphics framework that recovers complex, non-rigid force fields (like wind) purely from RGB pixels. No sensors, just vision.
1 reply · 0 reposts · 2 likes · 253 views
Jiageng Mao retweeted
Ilir Aliu @IlirAliu_
A robot could learn a task just by watching a generated video? PhysWorld connects video generation with real-world robot learning. It turns visual imagination into physical skill.
✅ Takes one image and a task prompt
✅ Generates a video showing how to complete the task
✅ Reconstructs a 3D scene from that video
✅ Learns real-world actions through object-centric RL
The result: zero-shot robotic manipulation that needs no real demonstrations. Across pouring, inserting, sweeping, and placing tasks, success rates rise by 15% compared to earlier video-based learning. It's one of the first real steps toward robots that can learn from visual reasoning itself.
Thanks for sharing, @PointsCoder!
📍 Paper: arxiv.org/abs/2511.07416
Project: pointscoder.github.io/PhysWorld_Web
Interactive Demo: hesic73.github.io/OpenReal2Sim_d…
Weekly robotics and AI insights. Subscribe free: scalingdeep.tech
11 replies · 25 reposts · 143 likes · 9.1K views
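As a rough sketch of how those four stages chain together (every function below is a hypothetical stub standing in for a large model, not PhysWorld's actual API):

```python
import numpy as np
from typing import Callable

# Hypothetical stubs: each stands in for a heavyweight model in the pipeline.
def generate_video(image: np.ndarray, prompt: str) -> np.ndarray:
    """Stages 1-2: imagine the task as a video of shape (T, H, W, 3)."""
    return np.zeros((16, 64, 64, 3), dtype=np.uint8)

def reconstruct_scene(video: np.ndarray) -> dict:
    """Stage 3: lift the generated video into a 3D scene with object poses."""
    return {"objects": [{"name": "cup", "pose": np.eye(4)}]}

def learn_policy(scene: dict) -> Callable[[np.ndarray], np.ndarray]:
    """Stage 4: object-centric RL in the reconstructed scene (stubbed)."""
    return lambda obs: np.zeros(7)  # e.g. a 7-DoF arm action

# Zero-shot flow: one image + one task prompt in, a deployable policy out.
image = np.zeros((64, 64, 3), dtype=np.uint8)
video = generate_video(image, "pour the water into the cup")
scene = reconstruct_scene(video)
policy = learn_policy(scene)
action = policy(np.zeros(10))  # illustrative observation vector
```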
Jiageng Mao retweeted
Dr Singularity @Dr_Singularity
Many people think practical robots are far away because they can only walk and dance. But thanks to technologies like the one below (training robots in simulation), we can now teach them various tasks thousands of times faster than in real life. That's why humanoid robots will be extremely capable by late 2026. Today they're still like toys (although advanced). In a year, they'll become one of the most useful, productivity-increasing technologies we've ever developed. "PhysWorld, a framework that bridges video generation and robot learning through AI (generated) real-to-sim world modeling."
9 replies · 20 reposts · 144 likes · 14.2K views
Jiageng Mao retweeted