
🎥 Introducing Split-then-Merge: a new video composition framework! This approach composes any foreground video with any background video. Unlike conventional methods that rely on annotated datasets or handcrafted rules, Split-then-Merge (StM) splits a large unlabeled corpus of videos into dynamic foreground and background layers, then merges them to learn how dynamic subjects interact with diverse scenes.

Work done in collaboration with team members at @Google: Du Tran (@dutran), Yujia Chen (@IssacCyj), Prof. Ming-Hsuan Yang (@MingHsuanYang), Vincent Chu, and my advisor at UIUC (@siebelschool): Prof. James M. Rehg (@RehgJim).

I will be attending NeurIPS in San Diego and would be happy to chat more!

🔗 Project Webpage: split-then-merge.github.io
📄 Paper: arxiv.org/abs/2511.20809
