

Yunhao Luo
@yluo_y
PhD Student @UMichCSE | Prev @GeorgiaTech @BrownUniversity Embodied AI

What if your robot could plan tasks it has never seen before, without ever being retrained? Meet Compositional Visual Planning via Inference-Time Diffusion Scaling (ICLR 2026 🏆) comp-visual-planning.github.io If you are in Rio🇧🇷, visit us! Sat, 04/25/26, 6:30-9:00 AM PDT, Pavilion 4 #4203

Check out our #ICLR2026 paper Generative View Stitching! I unfortunately couldn't attend, but @MichalStaryy will be presenting our poster tomorrow (Sat) morning at Pavilion 4, PA-#3016. Shoutout to my other collaborators @BoyuanChen0, @gkopanas, and @vincesitzmann!


Our paper "Compositional Diffusion with Guided Search (CDGS)" is an Oral at #ICLR2026! Short-horizon Foundation Models + Compositional Generative Planning + Inference-time Search = CDGS for goal-conditioned long-horizon planning! More details: cdgsearch.github.io 🧵 below

Introducing Large Video Planner (LVP-14B) — a robot foundation model that actually generalizes. LVP is built on video generation, not VLA. As my final work at @MIT, LVP had all of its eval tasks proposed by third parties as a maximum stress test, and it excels!🤗 boyuan.space/large-video-pl…
