Ran Cheng

139 posts


@RanCheng10

Embodied Intelligence @AntGroup, RobbyAnt. Ex-Head of AI @Midea. Ex Huawei Noah’s Ark Lab. McGill CIM alum.

Markham, Ontario · Joined October 2020
1.9K Following · 362 Followers
Ran Cheng@RanCheng10·
@AjdDavison guess they need an in-house SLAM team for accurate pose estimation for their UMI data
0 · 0 · 0 · 95
Zhengyi “Zen” Luo@zhengyiluo·
Happy Monday! More exciting SONIC releases incoming.
6 · 32 · 187 · 13.9K
Haotian Ye@haotian_yeee·
Finally getting to share one of my favorite projects. ICLR Oral! 🏆 It’s so strange how rigid video tokenization is. Think about it: why should a still landscape cost the same number of tokens as a busy street? We built InfoTok. We went back to basics with Shannon’s information theory to make tokens "adaptive" in a principled way. Its 2.3x better compression and 11x faster inference demonstrate the magic of the old-school theory ✨ Check it out: research.nvidia.com/labs/dir/infot…
9 · 43 · 287 · 45.3K
Ran Cheng retweeted
Oier Mees@oier_mees·
If you missed @chichengcc's guest lecture on "Robotics: Beyond Algorithms" from my @ETH robot learning course, check it out on YouTube! He shares insights that are rarely taught & hard to learn in academia. 📽️ YouTube: youtu.be/tvFvIEOBKfM 📚 Course: cvg.ethz.ch/lectures/Robot…
9 · 40 · 274 · 34.4K
Ran Cheng@RanCheng10·
Time is the only currency you can’t earn more of. For humans, tokens aren’t words — they’re moments of life. Spend them like they matter.
0 · 0 · 1 · 53
Ran Cheng@RanCheng10·
@DJiafei Human preference can bottleneck robot learning. First learn the world as it is; only then optimize for what humans want.
0 · 0 · 1 · 86
Jiafei Duan@DJiafei·
Great article! One thing that feels true about scientific discovery is that we often do not know the right answer when it first appears. We only realize it later, once it becomes so useful that we start using it almost unconsciously in everyday life. For robotics, I often ponder: is the right path really to use LLMs as the backbone for generalist robot models, lifting everything into the semantic space of language? Or is it to condition action generation on video and world-model-style learning? Or is the real answer something else entirely?
Anirudha Majumdar@Majumdar_Ani

x.com/i/article/2033…

2 · 7 · 43 · 8.7K
Ran Cheng@RanCheng10·
Robots do not care about success or failure. They care about learning the world. Success is just human preference imposed on top of dynamics. Human preference is not the foundation of robot learning—it is a bottleneck. The robot should first learn the world, then learn what we want.
Anirudha Majumdar@Majumdar_Ani

x.com/i/article/2033…

0 · 0 · 4 · 260
Ran Cheng@RanCheng10·
Why do some continual learning methods forget everything while others don't? We now have a single number that explains it: Context Channel Capacity (C_ctx).
📐 Zero forgetting ⟺ C_ctx ≥ H(T)
🔺 Impossibility Triangle: zero forgetting + online learning + finite params → pick 2
🧠 HyperNets bypass the triangle entirely by redefining params as functions, not states
Validated across 1,130+ experiments on 8 CL methods. C_ctx perfectly predicts who forgets and who doesn't.
📄 arxiv.org/abs/2603.07415
0 · 0 · 7 · 203
Ran Cheng@RanCheng10·
@bercankilic Learning how to forget is very hard. I've tried 1800+ experiments with my agent teams, failed and failed again. Here's my failure summary: arxiv.org/pdf/2603.07415 if you want to waste your time reading it.
1 · 0 · 1 · 516
Ran Cheng@RanCheng10·
Really interesting result. Do you expect the advantage of paired cross-embodiment data to persist at much larger pretraining scale, or is pairing mainly a data-efficient bridge in the low-target-data regime? Do you think the transferable quantity is best viewed as action equivalence, observation equivalence, or a shared latent task-progress / world-transition representation?
2 · 0 · 6 · 1.2K
Ran Cheng retweeted
Chelsea Finn@chelseabfinn·
Usually, we expect more diverse data >> less diverse data. Cross-embodiment transfer seems to benefit from paired data across embodiments, more so than increasing diversity. Webpage & code: data-analogies.github.io Paper: arxiv.org/abs/2603.06450
19 · 55 · 484 · 40.3K
Ran Cheng@RanCheng10·
Among the many startups out there, Sunday Robotics is the only robotics company I believe has the potential to drive a true industrial-level revolution. I’m convinced it will become a great company. If you haven’t invested in Sunday Robotics yet, it’s like missing the peak-era Ford of the ChatGPT age.
0 · 0 · 2 · 125
Cheng Chi@chichengcc·
We're done demoing. Time to deploy. Robotics isn't just algorithms. It's perf traces, calibrations, vibe, ingress, ESD tests. It's about making the sensible decision and pragmatic tradeoffs, again and again and again. ...
Tony Zhao@tonyzzhao

We raised $165M at a $1.15B valuation to stop doing demos. 2026 is about 1) deployment and 2) research. We will start shipping Memo with our new frontier models in a few months. Our series-B is led by Coatue, with Thomas Laffont joining the board. ->🧵

15 · 20 · 256 · 24.9K
Ran Cheng retweeted
Siyuan Huang@siyuanhuang95·
You might have seen the WuBOT performing at the 2026 Spring Festival Gala; however, most high-dynamic extreme motions you see are executed by overfitted tracking policies. Until now, training a unified policy capable of performing various extreme motions with a high success rate remained an unsolved challenge.

We spent an entire year digging into the barrier between general tracking and extreme physical behaviors. After burning through dozens of G1 robots, we finally identified the bottleneck of learning and physical executability. With these discoveries, we developed OmniXtreme: the first general policy that can execute diverse extreme motions, including consecutive flips, extreme balancing, and even breakdancing with rapid contact switches!

This capability is achieved by pre-training a flow-based generative control policy and then post-training with actuation-aware residual RL for complex physical dynamics—a step we found critical for successful real-world transfer.

This work is a joint collaboration with @UnitreeRobotics. Together, we are pushing the physical limits of humanoid robots. It is incredibly exciting to see a general "robot gymnast" and "robot breakdancer" come to life! It was also our first time publishing a paper with XingXing, which was an enlightening experience.

The model checkpoints are now released—we welcome you to play with them! 📦
📄 Paper: arxiv.org/abs/2602.23843
🌐 Project: extreme-humanoid.github.io
💻 Code: github.com/Perkins729/Omn…
32 · 140 · 722 · 89.5K
Ran Cheng@RanCheng10·
@Koven_Yu awesome work! does it support friction?
0 · 0 · 0 · 65
Hong-Xing (Koven) Yu@Koven_Yu·
🤩Video world models are cool, but it is cooler if they can simulate any 3D physical actions in real time! Introducing RealWonder⚡️: Now you can simulate 3D physical action (robot actions, 3D forces, force fields, etc.) consequences from a single image in real time! 🧵1/6
7 · 46 · 274 · 28K