Pinned Tweet
Marion Lepert
27 posts

Marion Lepert
@marionlepert
PhD Student @StanfordAILab | BS/MS @Stanford, Olympian
Joined October 2022
381 Following · 689 Followers

@zuwang95 Thank you! In Fig 5, the Ours (no overlay) ablation does already correspond to Masquerade with no overlay + cotraining. The training pipeline was identical except that the video data used did not have overlays.

Cool work! I’m particularly curious about one of the ablation studies. In Fig. 5, what would the success rate look like for no overlay + co-training? That might help disentangle the individual contributions of overlay and co-training to the final performance.
Marion Lepert@marionlepert
Introducing Masquerade 🎭: We edit in-the-wild videos to look like robot demos, and find that co-training policies with this data achieves much stronger performance in new environments. ❗Note: No real robots in these videos❗It’s all 💪🏼 ➡️ 🦾 🧵1/6

When googling the authors to tag them I realized this is an all-female author list, which made me even more excited...
Big shout out to female rising stars @marionlepert @jiaying_fang0 and of course their advisor @leto__jean
Marion Lepert reposted

This is one of the coolest ideas using EPIC-KITCHENS in a long while...
We've all been waiting to be replaced by robots! At least this is now done in the generative space...
Great work by @marionlepert @jiaying_fang0 @leto__jean @StanfordIPRL .. congrats!
arxiv.org/abs/2508.09976

Check out our full paper: "Masquerade: Learning From In-the-wild Human Videos Using Data-Editing"
Paper: arxiv.org/pdf/2508.09976
Project page: masquerade-robot.github.io
Grateful for my collaborators @jiaying_fang0 and @leto__jean!
🧵6/6

@JulienRineau_ We did not, but Rovi-Aug (closely related work for robot-to-robot transfer) did. They found they could avoid overlaying the virtual robot at inference by randomizing the lighting of the robot overlays during training.

@marionlepert Great work! Curious if you tested the performance without overlaying the virtual robot at inference time?

@ChaoyiPan Thank you! Yes, although I think Phantom builds more directly on Rovi-Aug, the authors' follow-up to their original Mirage paper.

@marionlepert Congrats @marionlepert ! Really excited to see cross-embodiment work. Reminds me of Mirage arxiv.org/pdf/2402.19249 lol

Really grateful for amazing collaborators Jiaying Fang and @leto__jean.
🧵9/9

Check out our full paper: "Phantom: Training Robots without Robots Using Only Human Videos"
Website: phantom-human-videos.github.io
Paper: phantom-human-videos.github.io/static/phantom…
Our work builds on Rovi-Aug (rovi-aug.github.io), awesome work led by @Lawrence_Y_Chen and @Chenfeng_X.
🧵8/9
