Minghuan Liu

247 posts

@ericliuof97

Postdoc at @utaustin RPL Lab. Prev: Bytedance Seed Robotics; Sea AI Lab; @ucsd; PhD @sjtu1896 | Working on Robots and Agents

Joined September 2016
453 Following · 757 Followers
Pinned Tweet
Minghuan Liu@ericliuof97·
The most important thing to highlight: XHugWBC controls arbitrary humanoids, not only existing ones. We show it generalizes to existing embodiments zero-shot, so no extra effort is needed to build a new controller for a new humanoid.
Minghuan Liu@ericliuof97

Build your own humanoid robot, and XHugWBC can take control. From H-Zero (arxiv.org/abs/2512.00971) to XHugWBC (xhugwbc.github.io), we find that the policies of all humanoids, although their structures vary widely, can be learned in a single network. The keys are: 1) Physics-Consistent Morphological Randomization and 2) a Universal Cross-Embodiment Representation. Check our project page to see more!

3 replies · 7 reposts · 80 likes · 11.3K views
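The "physics-consistent" part of the morphological randomization can be illustrated with a tiny sketch: when a link's geometry is scaled, its mass and inertia must be scaled accordingly so the randomized body stays dynamically plausible. The parameterization below is a hypothetical illustration, not XHugWBC's actual scheme.

```python
import random

def sample_morphology(base_link_length=0.4, base_link_mass=5.0, seed=None):
    """Sample one physics-consistent humanoid link: scaling length by s
    scales mass ~ s**3 (volume) so the randomized link stays plausible.
    Ranges and base values are made up for this sketch."""
    rng = random.Random(seed)
    s = rng.uniform(0.8, 1.2)                    # geometric scale factor
    length = base_link_length * s
    mass = base_link_mass * s ** 3               # mass ~ volume ~ s^3
    inertia = (1.0 / 12.0) * mass * length ** 2  # slender-rod approximation
    return {"scale": s, "length": length, "mass": mass, "inertia": inertia}

link = sample_morphology(seed=0)
```

Sampling many such links (and joint placements) during training is what lets a single policy network see a distribution of embodiments rather than one fixed robot.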
Minghuan Liu reposted
Lukas Ziegler@lukas_m_ziegler·
How do we make robot policy evaluation reproducible across labs? 📊 Different labs use different lighting, camera angles, task setups, backgrounds, and hardware tweaks. That makes side-by-side policy comparison almost impossible. OpenArm 02 from @enactic_ai is an attempt to fix that. It’s a fully open-source dual-arm platform designed specifically for reproducible evaluation. The idea is simple: standardize the physical setup so results can actually be compared across institutions. On top of that, it introduces AutoEval, a 24/7 real-world evaluation loop with minimal human intervention, building on prior work in automated benchmarking. Instead of manually running trials, policies can be evaluated continuously under consistent conditions. It’s the shift from isolated demo results to shared, comparable benchmarks. If robotics wants faster collective progress, reproducible evaluation infrastructure like this is a necessary step. ~~ ♻️ Join the weekly robotics newsletter, and never miss any news → ziegler.substack.com
4 replies · 19 reposts · 98 likes · 7.3K views
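The continuous-evaluation idea described above reduces to a simple loop: run a policy repeatedly under one standardized setup and aggregate success rates. Here is a minimal sketch; `run_episode` is a hypothetical callable, not AutoEval's real API.

```python
import statistics

def auto_eval(policy, run_episode, n_episodes=100):
    """Minimal sketch of a 24/7 evaluation loop: execute a policy for
    n_episodes under fixed, standardized conditions and report the
    aggregate success rate. run_episode resets the scene and returns
    True on task success (stand-in for real hardware episodes)."""
    results = [run_episode(policy) for _ in range(n_episodes)]
    return {
        "episodes": n_episodes,
        "success_rate": statistics.mean(1.0 if r else 0.0 for r in results),
    }

# Stand-in policy and episode runner, for illustration only.
report = auto_eval(policy=None, run_episode=lambda p: True, n_episodes=10)
```

Because the physical setup is standardized, two labs running this same loop on their own hardware get numbers that are actually comparable.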
Minghuan Liu@ericliuof97·
@tongzhou_mu Agree that general robot policies should be upgraded with video models. Congrats on the release!
0 replies · 0 reposts · 2 likes · 71 views
Tongzhou Mu 🤖🦾🦿@tongzhou_mu·
Proud to share what I’ve been working on with my colleagues at Rhoda AI: Direct Video-Action Models (DVA). TL;DR: - We pre-train causal video models from scratch to control robots - They handle complex production tasks for hours without intervention - Only use ~10 hours of robot data How? 🧵👇
Rhoda AI@rhoda_ai_

To bring generalist intelligent robots to the real world, we have to overcome the data scarcity problem. At Rhoda, we are solving it by reformulating robot policies as video generation. Today, we introduce the Direct Video-Action Model (DVA)

10 replies · 24 reposts · 206 likes · 17.3K views
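"Reformulating robot policies as video generation" can be sketched abstractly: a causal video model predicts the next frame representation from the observation history, and an action head decodes a robot action from that prediction. All names below are hypothetical stand-ins; DVA's actual architecture is not described in the thread.

```python
def dva_step(frames, video_model, action_head):
    """One control step of the direct video-action idea (sketch):
    predict a next-frame latent from the frame history with a causal
    video model, then decode a robot action from that latent."""
    pred = video_model(frames)   # next-frame latent from the causal model
    action = action_head(pred)   # map predicted latent -> robot action
    return pred, action

# Toy stand-ins: "frames" are scalars, the "model" averages them,
# and the "action head" is a linear map. Illustration only.
pred, action = dva_step(
    frames=[0.0, 1.0],
    video_model=lambda fs: sum(fs) / len(fs),
    action_head=lambda z: 2.0 * z,
)
```

The appeal of this framing is data efficiency: the video model can be pre-trained on abundant video, so only a small amount of robot data (~10 hours, per the tweet) is needed to ground actions.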
Minghuan Liu reposted
Kaifeng Zhang@kaiwynd·
Cloth simulation using NVIDIA Newton. Not perfect, but looking good!
16 replies · 28 reposts · 460 likes · 34.3K views
Minghuan Liu@ericliuof97·
Make a product like Claude Code, so that even when they keep banning you, you still can't resist using it.
0 replies · 0 reposts · 4 likes · 160 views
Minghuan Liu reposted
Physical Intelligence@physical_int·
To this end, we developed Multi-Scale Embodied Memory (MEM). The key idea: use different modalities to represent memory at different time scales. 📹 For short horizon memory, we developed an efficient video encoder that lets the model remember fine-grained details about its recent interactions. 📜 For long horizon memory, we train the model to summarize events in text, allowing it to remember events for up to 15 min.
[image attached]
2 replies · 14 reposts · 181 likes · 28.1K views
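The two-timescale structure described above can be sketched as a pair of stores: a short ring buffer of recent fine-grained frame features, plus a growing list of text summaries for minutes-scale events. Names and capacities below are illustrative, not Physical Intelligence's actual MEM interface.

```python
from collections import deque

class MultiScaleMemory:
    """Sketch of multi-scale embodied memory: short-horizon context is a
    bounded buffer of recent frame features (fine-grained, fast to fill),
    while long-horizon context is a list of text event summaries that can
    span many minutes."""

    def __init__(self, short_capacity=32):
        self.short = deque(maxlen=short_capacity)  # recent fine-grained context
        self.long = []                             # text summaries, minutes-scale

    def observe(self, frame_feature):
        """Append one encoded frame; old frames fall out of the buffer."""
        self.short.append(frame_feature)

    def summarize(self, text):
        """Record a long-horizon event as text."""
        self.long.append(text)

mem = MultiScaleMemory(short_capacity=2)
mem.observe("frame0")
mem.observe("frame1")
mem.observe("frame2")  # evicts "frame0"
mem.summarize("picked up the cup")
```

The design choice mirrors the tweet: video features are expensive, so they are kept only briefly, while cheap text summaries carry memory over the 15-minute horizon.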
Minghuan Liu reposted
Srishti@NieceOfAnton·
This CLAUDE.md file will make you a 10x engineer 👇 It combines the best practices shared by Boris Cherny (creator of Claude Code at Anthropic), who posted on X the internal best practices and workflows he and his team actually use with Claude Code daily. Someone turned those threads into a structured CLAUDE.md you can drop into any project. It includes: • Workflow orchestration • Subagent strategy • Self-improvement loop • Verification before done • Autonomous bug fixing • Core principles This is a compounding system: every correction you make gets captured as a rule, so over time Claude's mistake rate drops because it learns from your feedback. If you build with AI daily, this will save you a lot of time.
[image attached]
313 replies · 1.4K reposts · 12K likes · 2.6M views
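For readers unfamiliar with the format: a CLAUDE.md is a plain markdown file at the project root that Claude Code reads as standing instructions. A minimal skeleton along the lines of the sections listed above might look like this (section wording is illustrative, not the actual file from the thread):

```markdown
# CLAUDE.md

## Core principles
- Prefer small, verifiable changes; run the test suite before declaring a task done.

## Workflow orchestration
- Plan first, then implement; ask before any destructive operation.

## Verification before done
- A task is finished only when builds pass and the change is demonstrated.

## Self-improvement loop
- When the user corrects a mistake, append the lesson here as a new rule.
```

The "compounding" effect comes from the last section: corrections accumulate as rules that shape every future session.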
Minghuan Liu@ericliuof97·
Found two essential differences: 1) OpenClaw provides the agent as a persistent process that keeps watching and is driven by itself (the agent system) rather than by user inputs; 2) OpenClaw provides a new template for a self-modifying/evolving agent system (e.g., changing planning strategies and agent config).
Minghuan Liu@ericliuof97

Amazed by the rapid pace of evolution of agent tools, but also getting confused. Could anyone explain the key difference between OpenClaw, CC/Codex, CC-Cowork, and Manus? Why is OpenClaw so popular, and what's its specialty compared to these existing agent tools (except it's open source)?

0 replies · 0 reposts · 4 likes · 401 views
Minghuan Liu reposted
Zi-ang Cao@ziang_cao·
🚀 Introducing CHIP: Adaptive Compliance for Humanoid Control through Hindsight Perturbation! Current humanoids face a trade-off: they are either Agile & Stiff OR Slow & Soft. CHIP breaks this barrier. We enable on-the-fly switching between Compliant (wiping 🧼, collaborative holding 📦) and Stiff (lifting dumbbells 🏋️, opening doors 🚪💪) behaviors—all while maintaining agile skills like running! 🏃💨 Website: nvlabs.github.io/CHIP/ Join me for a deep dive on how CHIP enables adaptive control for complex tasks. 🧵↓
10 replies · 51 reposts · 213 likes · 23.9K views
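The compliant-vs-stiff trade-off above is, at its core, a question of impedance gains: low stiffness lets the robot yield to contact, high stiffness tracks targets precisely. A toy joint-space PD sketch of on-the-fly mode switching (gain values are made up for illustration, not CHIP's trained parameters):

```python
def pd_torque(q, q_target, dq, kp, kd):
    """Joint-space PD control law; compliance is set by the gains."""
    return kp * (q_target - q) - kd * dq

# Illustrative gain sets: "compliant" yields to external forces
# (wiping, collaborative holding), "stiff" tracks precisely
# (lifting, opening doors).
GAINS = {"compliant": (20.0, 2.0), "stiff": (300.0, 10.0)}

def torque_for_mode(mode, q, q_target, dq):
    """Switch behavior instantly by swapping the gain set."""
    kp, kd = GAINS[mode]
    return pd_torque(q, q_target, dq, kp, kd)

# Same tracking error, very different restoring torque per mode.
soft = torque_for_mode("compliant", q=0.0, q_target=0.1, dq=0.0)
hard = torque_for_mode("stiff", q=0.0, q_target=0.1, dq=0.0)
```

CHIP's contribution (per the tweet) is learning to modulate this compliance adaptively while preserving agile skills, rather than hand-picking fixed gains as this sketch does.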
Minghuan Liu@ericliuof97·
Amazed by the rapid pace of evolution of agent tools, but also getting confused. Could anyone explain the key difference between OpenClaw, CC/Codex, CC-Cowork, and Manus? Why is OpenClaw so popular, and what's its specialty compared to these existing agent tools (except it's open source)?
0 replies · 0 reposts · 0 likes · 656 views
Minghuan Liu reposted
Boris Cherny@bcherny·
I'm Boris and I created Claude Code. Lots of people have asked how I use Claude Code, so I wanted to show off my setup a bit. My setup might be surprisingly vanilla! Claude Code works great out of the box, so I personally don't customize it much. There is no one correct way to use Claude Code: we intentionally build it in a way that you can use it, customize it, and hack it however you like. Each person on the Claude Code team uses it very differently. So, here goes.
1.3K replies · 7K reposts · 54.3K likes · 8M views