

Junru Lin

Exploration is key for robots to generalize, especially in open-ended environments with vague goals and sparse rewards. But how do we go beyond random poking? Wouldn't it be great to have a robot that explores an environment just like a kid?

Introducing Imagine, Verify, Execute (IVE)! IVE leverages Vision-Language Models to
• extract semantic scene graphs,
• imagine novel scenes,
• predict their physical plausibility, and
• generate executable action sequences.

IVE is a memory-guided agentic exploration framework that operates fully automatically, enabling more diverse and meaningful exploration.
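The imagine → verify → execute loop above can be sketched as a simple program. Everything here is an illustrative stand-in (the function names, the scene-graph dictionary, the stub plausibility check), not the authors' actual API or models; in the real system a VLM would fill each of these roles.

```python
# Toy sketch of a memory-guided imagine/verify/execute loop.
# All names and logic are hypothetical stand-ins for VLM calls.

def extract_scene_graph(observation):
    # A VLM would parse the observation into objects and relations; stubbed here.
    return {"objects": sorted(observation), "relations": []}

def imagine(scene_graph, memory):
    # Propose a novel scene not yet in memory (stand-in for VLM imagination).
    candidate = {"objects": scene_graph["objects"] + ["new_object"], "relations": []}
    return None if str(candidate) in memory else candidate

def verify(candidate):
    # Stand-in physical-plausibility check (a VLM predictor in the real system).
    return candidate is not None and len(candidate["objects"]) <= 5

def explore(observation, steps=3):
    memory = set()
    scene = extract_scene_graph(observation)
    trace = []
    for _ in range(steps):
        candidate = imagine(scene, memory)
        if not verify(candidate):
            break
        memory.add(str(candidate))       # memory guides away from repeats
        trace.append(candidate)          # "execute": a robot would plan actions here
        scene = candidate
    return trace
```

The memory set is what keeps exploration diverse: imagined scenes already visited are rejected before they reach the verifier.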

Have you ever been bothered by the constraints of fixed-size 2D grid tokenizers? We present FlexTok, a flexible-length 1D tokenizer that enables autoregressive models to describe images in a coarse-to-fine manner. flextok.epfl.ch arxiv.org/abs/2502.13967 🧵 1/n
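The coarse-to-fine idea — any prefix of the token sequence decodes to an image, and longer prefixes add detail — can be illustrated with a toy analogy. This sketch uses low-to-high Fourier coefficients as "tokens"; it is purely an analogy for prefix-decodability, not FlexTok's learned tokenizer.

```python
# Toy analogy for flexible-length, coarse-to-fine 1D tokenization:
# a prefix of the token sequence decodes to a coarse reconstruction,
# and appending tokens refines it. Not FlexTok's actual method.
import numpy as np

def encode(signal, num_tokens):
    # "Tokens" = Fourier coefficients ordered from low to high frequency.
    return np.fft.rfft(signal)[:num_tokens]

def decode(tokens, length):
    # Missing high-frequency tokens are zeroed: shorter prefix, coarser output.
    coeffs = np.zeros(length // 2 + 1, dtype=complex)
    coeffs[:len(tokens)] = tokens
    return np.fft.irfft(coeffs, n=length)
```

Decoding with 2 tokens recovers only the broad shape of the signal; decoding with more tokens shrinks the reconstruction error, mirroring how an autoregressive model over such tokens would describe an image coarse-to-fine.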

Excited to share that we have recently released the source code for FlexTok, bringing a fresh perspective to tokenization. Code on GitHub: lnkd.in/g4iNJFmU. Project Page: flextok.epfl.ch #FlexTok #Tokenization #MachineLearning #MLResearch #OpenSource #AI

Thrilled to share SG-I2V, a tuning-free method for trajectory-controllable image-to-video (I2V) generation, built solely on the knowledge present in a pre-trained I2V diffusion model! kmcode1.github.io/Projects/SG-I2… w/ @sherwinbahmani @Dazitu_616 @yash2kant @igilitschenski @DaveLindell
