Ryan Tabrizi
@ryan_tabrizi
phding @columbia



📢 Eyeline Labs and Netflix’s latest research, 🎥Vista4D🎥, accepted at #CVPR2026 as a 🌟Highlight🌟 paper, advances virtual cinematography and scene control with video generation.

🎥 Vista4D synthesizes the dynamic scene represented by an input video from novel camera trajectories and viewpoints by grounding video generation in a 4D point cloud. Our method maintains geometric and physical plausibility under imprecise 4D reconstruction of real-world videos.

🎥 Vista4D unlocks video reshooting beyond camera control. By directly editing the 4D point cloud, our method preserves scene information from casual captures and enables 4D scene editing and recomposition.

This work is part of the ongoing research and development at @eyelinestudios and @netflix, and we look forward to seeing its techniques and workflows adopted in future productions.

✊ Kudos to the team: @kuanhenglin, Zhizheng Liu, @pablosalamancal, @yash2kant, @RyanBurgert, @Yuancheng_Xu0, @Koichi_N_, Yiwei Zhao, @zhoubolei, @micahgoldblum, @debfx, @realNingYu

📰 Paper: arxiv.org/pdf/2604.21915
🌐 Project: eyeline-labs.github.io/Vista4D/
⌨️ Code: github.com/Eyeline-Labs/V…
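The core idea of grounding generation in a 4D point cloud can be pictured with a toy sketch: lift the scene to 3D points and re-project them through a novel camera pose to get conditioning for the video model. This is not Vista4D's actual renderer, just a minimal pinhole-projection illustration; all names and parameter values here are made up for the example.

```python
import numpy as np

def project(points_3d, R, t, f=500.0, cx=320.0, cy=240.0):
    """Toy pinhole projection of world points into a camera at pose (R, t)."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    z = cam[:, 2:3]
    uv = f * cam[:, :2] / z            # perspective divide
    return uv + np.array([cx, cy])     # shift to pixel coordinates

# One scene point at some timestep, re-rendered from a novel viewpoint:
pts = np.array([[0.0, 0.0, 4.0]])
R_new = np.eye(3)                      # novel camera: identity rotation...
t_new = np.array([0.5, 0.0, 0.0])     # ...shifted half a unit sideways
uv = project(pts, R_new, t_new)        # pixel location in the new view
```

Repeating this per timestep over a camera trajectory yields the kind of geometry-consistent conditioning sequence the tweet describes, with the generative model left to fill in appearance where the reconstruction is imprecise.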

LLMs are injective and invertible. In our new paper, we show that different prompts always map to different embeddings, and this property can be used to recover input tokens from individual embeddings in latent space. (1/6)
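The logic of the claim can be shown with a toy stand-in: if the map from token sequences to latents is injective, then matching a target latent against candidate sequences recovers the input exactly. This is only an illustration of why injectivity makes inversion well-posed, not the paper's actual method; the vocabulary and `embed` function below are invented for the example.

```python
from itertools import product

VOCAB = ["the", "cat", "sat", "dog"]

def embed(tokens):
    # Toy stand-in for a deterministic LM forward pass: any map that is
    # injective on the search space works (hash collisions are negligible here).
    return hash(tuple(tokens))

def invert(target, max_len=3):
    # Exhaustive search over candidate prompts: injectivity guarantees
    # that at most one sequence matches the target latent.
    for n in range(1, max_len + 1):
        for seq in product(VOCAB, repeat=n):
            if embed(list(seq)) == target:
                return list(seq)
    return None

recovered = invert(embed(["cat", "sat"]))  # exact recovery of the prompt
```

A real inverter cannot brute-force over all prompts, of course; the point is only that distinct prompts mapping to distinct embeddings makes exact recovery a well-defined search problem.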

our new system trains humanoid robots using data from cell phone videos, enabling skills such as climbing stairs and sitting on chairs in a single policy (w/ @redstone_hong @junyi42 @davidrmcall)
