Pinned Tweet
Eric Cai
91 posts

Eric Cai retweeted

[Major life updates] 🎉
After 4 incredible years of my PhD at @UW @uwcse with @fox_dieter17849 and @RanjayKrishna, I'm joining @NUSComputing as an Assistant Professor this August, under the Presidential Young Professorship scheme!
More details 🧵👇

Eric Cai retweeted

Great to have @Jesse_Y_Zhang visiting us @IRVLUTD today!
He shared his journey toward generalist robotics reward models (RoboCLIP, ReWiND, Robometer), followed by a great buffet with the lab.




Eric Cai retweeted

We’re releasing OmniReset, a framework for training robot policies using large-scale RL and diverse resets for contact-rich, dexterous manipulation.
OmniReset pushes the frontier of robustness and dexterity, without any reward engineering or demonstrations.
Try the policies yourself in our interactive simulator! weirdlabuw.github.io/omnireset/
(1/N 🧵)
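The "diverse resets" idea can be sketched in a few lines: each RL episode starts from a state drawn from a broad distribution (randomized object poses, perturbed joints, mid-task states) rather than one canonical start. Everything here, the field names, ranges, and stub environment, is hypothetical illustration, not OmniReset's actual interface:

```python
import random

random.seed(0)

def sample_reset():
    """Draw a diverse initial condition for one episode. All fields and
    ranges are made up for illustration."""
    return {
        "object_pose": [random.uniform(-0.1, 0.1) for _ in range(3)],
        "hand_joint_noise": random.gauss(0.0, 0.05),
        "start_mid_task": random.random() < 0.3,  # sometimes start mid-grasp
    }

def train_step(env_reset, policy_update, episodes=4):
    """Each episode resets from the diverse distribution; no demonstrations
    and no shaped reward are assumed by this loop."""
    for _ in range(episodes):
        state = env_reset(sample_reset())
        policy_update(state)  # RL update would go here

# Minimal stubs so the sketch runs end to end.
train_step(env_reset=lambda r: r, policy_update=lambda s: None)
print("ok")
```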
Eric Cai retweeted

Full episode dropping soon!
Geeking out with @prodarhan @KarlPertsch on PolaRiS: Scalable Real-to-Sim Evaluations for Generalist Robot Policies polaris-evals.github.io
Co-hosted by @chris_j_paxton @DJiafei
Eric Cai retweeted

Pretrained diffusion/flow policies are powerful — but brittle at deployment.
We introduce RFS, a data-efficient RL framework that:
• steers latent noise for global adaptation
• applies residual actions for precise local correction
Works in sim and real-world dexterous manipulation 🖐️🤖
👉📄 Paper + videos: entongsu.github.io/rfs/
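The two corrections in the bullets above can be sketched as follows. The frozen policy, the noise-steering head, and the residual head are all hypothetical stand-ins (simple linear/tanh maps), not the actual RFS implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_flow_policy(obs, z):
    """Stand-in for a pretrained flow/diffusion policy that decodes latent
    noise z (conditioned on the observation) into an action."""
    return np.tanh(z + 0.1 * obs)

def rfs_act(obs, noise_steer, residual_head):
    """Sketch of the two learned corrections described in the thread:
    1) steer the latent noise before the frozen policy decodes it (global),
    2) add a small residual action after decoding (local)."""
    z = rng.standard_normal(obs.shape)
    z_steered = z + noise_steer(obs)          # global adaptation in latent space
    base_action = frozen_flow_policy(obs, z_steered)
    return base_action + residual_head(obs)   # precise local correction

# Hypothetical "learned" heads, here just linear maps for illustration.
noise_steer = lambda obs: 0.05 * obs
residual_head = lambda obs: 0.01 * obs

action = rfs_act(np.zeros(4), noise_steer, residual_head)
print(action.shape)  # (4,)
```

The point of the split is that noise steering moves the whole action distribution (cheap, global), while the residual fixes what steering alone cannot reach (precise, local).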
Eric Cai retweeted

Data collection is still the bottleneck for imitation learning in robotics: slow, tedious, costly, and requiring robot access.
Introducing RoboCade 🎮🤖: a platform that gamifies remote robot data collection, making it accessible, scalable, and fun.
robocade.github.io
🧵👇
Eric Cai retweeted

Happy to share that I’ve joined forces with @chris_j_paxton and @micoolcho as a new co-host of RoboPaper! Excited to interview outstanding researchers and spotlight great work through the podcast 🎙️
Quoting RoboPapers (@RoboPapers):
Full episode dropping soon! Geeking out with @mangahomanga on Gen2Act: Human Video Generation in Novel Scenarios enables Generalizable Robot Manipulation homangab.github.io/gen2act/ Co-hosted by @chris_j_paxton @DJiafei
Eric Cai retweeted

Excited to introduce PolaRiS, a real-to-sim recipe for turning short real-world videos into high-fidelity simulation environments for scalable and reliable zero-shot generalist policy evaluation.
polaris-evals.github.io
(1/N 🧵)
Eric Cai retweeted

When what to my wondering eyes should appear, but a roving tree chased by a robot reindeer…🤖 Happy howl-idays, Huskies, from Spot & your friends in the @UW @uwengineering #UWAllen Robotics Lab! Have a roaring good time over break, and see you next year.🦖(Watch to the end🐕!)
Eric Cai retweeted

Imitation learning is great, but it needs (near-)optimal data. We throw away most other data (failures, evaluation data, suboptimal data, undirected play data), even though this data can be really useful and far cheaper! In our new work, RISE, we show a simple way to *use all of this non-optimal data to robustify imitation learning* with minimal requirements beyond BC.
Key idea: use non-expert data to learn how to *recover* back to expert data with a no-frills offline RL method that works under sparse data coverage. This lets you use *all* available data, not just expert data. Never throw your data away!
Paper: arxiv.org/abs/2510.19495
Website: uwrobotlearning.github.io/RISE-offline/
A 🧵(1/10)
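The key idea can be illustrated with a toy tabular version: label logged transitions with a sparse "reached an expert-covered state" reward and run offline Q-iteration over them only. This is a hypothetical toy on a 1-D grid, not the paper's algorithm:

```python
import numpy as np

# Toy 1-D grid world: expert data covers states {3, 4}; everything else is
# non-expert data (failures, play, evaluation rollouts).
expert_states = {3, 4}

def recovery_reward(next_state):
    """Sparse reward: +1 for landing in any expert-covered state."""
    return 1.0 if next_state in expert_states else 0.0

# "Minimal-frills offline RL": tabular Q-iteration over the logged
# transitions only, with no further environment interaction.
transitions = [(s, a, s + a) for s in range(6) for a in (-1, +1) if 0 <= s + a < 6]
Q = np.zeros((6, 2))  # actions indexed 0 (= -1) and 1 (= +1)
for _ in range(50):
    for s, a, s2 in transitions:
        Q[s, (a + 1) // 2] = recovery_reward(s2) + 0.9 * Q[s2].max()

# From state 1, the learned recovery policy heads right, toward expert data.
print(int(Q[1].argmax()))  # 1  (i.e. action +1)
```

Non-expert transitions never need to be optimal here; they only need to tell the learner which moves lead back toward expert coverage.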
Eric Cai retweeted

How can we create a single navigation policy that works for different robots in diverse environments AND can reach navigation goals with high precision?
Happy to share our new paper, "VAMOS: A Hierarchical Vision-Language-Action Model for Capability-Modulated and Steerable Navigation"!
📜 Paper: arxiv.org/abs/2510.20818
🌐 Website: vamos-vla.github.io
Eric Cai retweeted

Punchline: World models == VQA (about the future)!
Planning with world models can be powerful for robotics/control. But most world models are video generators trained to predict everything, including irrelevant pixels and distractions. We ask: what if a world model only predicted the semantic information necessary for decision-making?
Introducing Semantic World Models (SWM). Given an observation and an action sequence, SWMs cast modeling as answering textual questions about the future outcome resulting from the actions. Recasting world modeling as a VQA problem lets us directly leverage the pretrained knowledge and machinery of VLMs for generalizable modeling. We had a lot of fun thinking about how this work helps connect these two seemingly very different fields of study: VLMs and world models! 🧵(1/6)
Paper: arxiv.org/abs/2510.19818
Fun demo: weirdlabuw.github.io/swm
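The "world models == VQA about the future" framing can be caricatured in a few lines: given a state and an action sequence, the model answers a textual question about the outcome instead of predicting pixels. Here the "model" cheats by querying a toy simulator; a real SWM would be a VLM answering from the observation alone. All names and dynamics are hypothetical:

```python
def simulate(state, actions):
    """Ground-truth toy dynamics: a gripper moves on a line; the block counts
    as grasped once the gripper reaches its position."""
    pos, block = state
    for a in actions:
        pos += a
    return {"gripper_pos": pos, "holding_block": pos == block}

def semantic_world_model(state, actions, question):
    """Answer a textual question about the future outcome of the actions,
    rather than rendering the future observation."""
    future = simulate(state, actions)
    if question == "is the robot holding the block?":
        return "yes" if future["holding_block"] else "no"
    raise ValueError("unsupported question")

print(semantic_world_model((0, 3), [1, 1, 1], "is the robot holding the block?"))  # yes
```

A planner only needs the answers to such questions to score candidate action sequences, which is why predicting irrelevant pixels can be skipped entirely.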
Eric Cai retweeted

I'll be joining the faculty @JohnsHopkins late next year as a tenure-track assistant professor in @JHUCompSci
Looking for PhD students to join me tackling fun problems in robot manipulation, learning from human data, understanding+predicting physical interactions, and beyond!


