Gal Chechik
@GalChechik
Sr. Director of AI Research at NVIDIA and a CS Professor at Bar-Ilan University. I study learning for reasoning and perception.

At @nvidia, we built ProtoMotions to help us, and researchers world-wide, innovate quickly without compromising on applicability. We're proud to announce ProtoMotions3 -- our biggest release yet! 🧵👇

🚀 Introducing SISO – a plug-and-play approach for image personalization using just one image!

🚀 Excited to share OmnimatteZero: Training-Free Real-Time Omnimatte with Video Diffusion Models! 📄 Paper: arxiv.org/abs/2503.18033 🌐 Project: dvirsamuel.github.io/omnimattezero.… 🧵👇

🎉 I'm happy to share that our paper, Make It Count, has been accepted to #CVPR2025! A huge thanks to my amazing collaborators — @YoadTewel, @SegevHilit, @hirscheran, @RoyiRassin, and @GalChechik! 🔗 Paper page: make-it-count-paper.github.io Excited to share our key findings!

We have a new and revised GluFormer manuscript! We expanded our analyses considerably: now showing that our AI model for CGM can identify individuals at higher risk of declining glycemic control before it happens, and can predict long-term diabetes & cardiovascular mortality.

NVIDIA presents ConsiStory: Training-Free Consistent Text-to-Image Generation. Paper page: huggingface.co/papers/2402.03… It enables Stable Diffusion XL (SDXL) to generate consistent subjects across a series of images, without additional training.

Excited to share our latest work! 🤩 Masked Mimic 🥷: Unified Physics-Based Character Control Through Masked Motion Inpainting Project page: research.nvidia.com/labs/par/maske… with: Yunrong (Kelly) Guo, @ofirnabati, @GalChechik and @xbpeng4. @SIGGRAPHAsia (ACM TOG). 1/ Read along! 😃

🎥 Today we’re premiering Meta Movie Gen: the most advanced media foundation models to date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We’re excited for the potential of this line of research to usher in entirely new possibilities for casual creators and creative professionals alike.

More details and examples of what Movie Gen can do ➡️ go.fb.me/kx1nqm

🛠️ Movie Gen models and capabilities

Movie Gen Video: A 30B-parameter transformer model that can generate high-quality, high-definition images and videos from a single text prompt.

Movie Gen Audio: A 13B-parameter transformer model that takes a video input, along with optional text prompts for controllability, and generates high-fidelity audio synced to the video. It can generate ambient sound, instrumental background music, and foley sound, delivering state-of-the-art results in audio quality, video-to-audio alignment, and text-to-audio alignment.

Precise video editing: Using a generated or existing video and accompanying text instructions as input, it can perform localized edits such as adding, removing, or replacing elements, or global changes like background or style changes.

Personalized videos: Using an image of a person and a text prompt, the model can generate a video with state-of-the-art results on character preservation and natural movement.

We’re continuing to work closely with creative professionals from across the field to integrate their feedback as we work towards a potential release. We look forward to sharing more on this work and the creative possibilities it will enable in the future.
