Gal Chechik
@GalChechik

184 posts

Sr. Director of AI Research at NVIDIA and a CS Prof. at Bar-Ilan U. I study learning for reasoning and perception.

Joined July 2018
394 Following · 1K Followers
Gal Chechik retweeted
Jason Peng @xbpeng4
It's amazing to see how far ProtoMotions has come since its first release. If you are looking for a feature-rich and scalable framework that can train controllers on massive datasets, then check out ProtoMotions!
Chen Tessler @ChenTessler

At @nvidia, we built ProtoMotions to help us, and researchers worldwide, innovate quickly without compromising on applicability. We're proud to announce ProtoMotions3 -- our biggest release yet! 🧵👇

Gal Chechik retweeted
Chen Tessler @ChenTessler
At @nvidia, we built ProtoMotions to help us, and researchers worldwide, innovate quickly without compromising on applicability. We're proud to announce ProtoMotions3 -- our biggest release yet! 🧵👇
Gal Chechik retweeted
Ori Malca @Orimalca
🎉 I am excited to present our new paper! Our paper improves personalization of text-to-image models by adding one special cleaning step on top of existing personalized models. With just a single gradient update (~4 seconds on an NVIDIA H100 GPU) and a single image of the target concept, our method improves both text alignment and image alignment. For example, it improves LoRA by (+7% / +14%). This is achieved by adding new loss terms and taking into account the prompt and seed. This work was done together with @dvir_samuel and @GalChechik.
🌐 Paper page: …ery-visual-concept-learning.github.io
📄 arXiv paper: arxiv.org/abs/2508.09045
More details in the comments below.
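As an illustration only (not the authors' released code), here is a minimal sketch of what a "single gradient update" cleanup step could look like, assuming the personalized weights are refined against a combined image-alignment and text-alignment objective for a fixed prompt and seed; all names here (residual_weights, generate_embedding, the equal loss weighting) are hypothetical stand-ins.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-ins for embeddings a real pipeline would produce (e.g. DINO / CLIP features).
target_image_emb = torch.randn(768)     # embedding of the single concept image
prompt_emb = torch.randn(768)           # embedding of the text prompt
base_generation_emb = torch.randn(768)  # embedding of the current personalized generation

# Hypothetical stand-in for the personalized (e.g. LoRA) weights being refined.
residual_weights = torch.zeros(768, requires_grad=True)
optimizer = torch.optim.SGD([residual_weights], lr=0.1)

def generate_embedding(weights):
    # Toy surrogate for "run the personalized model with a fixed prompt and seed,
    # then embed the output"; here the weights simply shift the embedding.
    return base_generation_emb + weights

def alignment_loss(weights):
    gen_emb = generate_embedding(weights)
    image_align = 1 - F.cosine_similarity(gen_emb, target_image_emb, dim=0)
    text_align = 1 - F.cosine_similarity(gen_emb, prompt_emb, dim=0)
    return image_align + text_align

# The "single gradient update".
loss = alignment_loss(residual_weights)
loss.backward()
optimizer.step()

with torch.no_grad():
    print(f"loss before: {loss.item():.3f}, after: {alignment_loss(residual_weights).item():.3f}")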
Nir Weingarten @NirWeingarten
@GalChechik Very exciting. Inference-time compute is the future! I wonder how this method would work when optimizing facial features instead of DINO? Can it be used for character consistency?
Gal Chechik retweeted
🇮🇱 Noam Katz @NoamKatz_
This morning, I had the pleasure of attending #EMTech Europe 2025 in Athens, an international conference on emerging technologies. @GalChechik, an Israeli Sr. Director at @NVIDIA, gave a fascinating talk on the future of #AI, moderated by @yanpal7 of @kathimerini_gr, highlighting its transformative impact across industries and in our lives. Innovation that is defining our future.
Gal Chechik retweeted
Guy Lutsker @GLutsker
We have a new and revised GluFormer manuscript! We expanded our analyses considerably: we now show that our AI model for CGM (continuous glucose monitoring) can identify individuals at higher risk of declining glycemic control before it happens, and can predict long-term diabetes and cardiovascular mortality.
Gal Chechik retweeted
Yoad Tewel @YoadTewel
🚀 Excited to release the code and demo for ConsiStory, our #SIGGRAPH2024 paper! No fine-tuning needed, just fast, subject-consistent image generation! Check it out here 👇
Code: github.com/NVlabs/consist…
Demo: build.nvidia.com/nvidia/consist…
AK @_akhaliq

Nvidia presents ConsiStory: Training-Free Consistent Text-to-Image Generation. Paper page: huggingface.co/papers/2402.03… It enables Stable Diffusion XL (SDXL) to generate consistent subjects across a series of images, without additional training.

Gal Chechik retweeted
Chen Tessler @ChenTessler
MaskedMimic pre-trained model public release 🧑‍🎄 github.com/NVlabs/ProtoMo… Some info in the thread on how to play with the model. 1/
Chen Tessler @ChenTessler

Excited to share our latest work! 🤩 MaskedMimic 🥷: Unified Physics-Based Character Control Through Masked Motion Inpainting. Project page: research.nvidia.com/labs/par/maske… With Yunrong (Kelly) Guo, @ofirnabati, @GalChechik and @xbpeng4. @SIGGRAPHAsia (ACM TOG). 1/ Read along! 😃

Rohit Girdhar @_rohitgirdhar_
Cc @GalChechik since you were wondering what we'd been up to since the Emu Video work we were just talking about at ECCV 😊
Rohit Girdhar @_rohitgirdhar_
Super excited to share MovieGen: a new SOTA media generation system! When we started, I didn't think we'd get this far this quickly. But it turns out a simplified approach (flow matching), paired with scaling up model size and data, indeed works amazingly well! Details in the paper 😀
AI at Meta @AIatMeta

🎥 Today we're premiering Meta Movie Gen: the most advanced media foundation models to date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We're excited for the potential of this line of research to usher in entirely new possibilities for casual creators and creative professionals alike. More details and examples of what Movie Gen can do ➡️ go.fb.me/kx1nqm

🛠️ Movie Gen models and capabilities
Movie Gen Video: a 30B-parameter transformer model that can generate high-quality and high-definition images and videos from a single text prompt.
Movie Gen Audio: a 13B-parameter transformer model that can take a video input, along with optional text prompts for controllability, to generate high-fidelity audio synced to the video. It can generate ambient sound, instrumental background music and foley sound, delivering state-of-the-art results in audio quality, video-to-audio alignment and text-to-audio alignment.
Precise video editing: using a generated or existing video and accompanying text instructions as input, it can perform localized edits such as adding, removing or replacing elements, or global changes like background or style changes.
Personalized videos: using an image of a person and a text prompt, the model can generate a video with state-of-the-art results on character preservation and natural movement in video.

We're continuing to work closely with creative professionals from across the field to integrate their feedback as we work towards a potential release. We look forward to sharing more on this work and the creative possibilities it will enable in the future.

Gal Chechik @GalChechik
Interesting #ECCV2024 keynote on distribution shift. @sanmikoyejo discussed interpolation and extrapolation. There is a third case: composition. Interpolate for each component, but extrapolate the combination. Can we do better with composition than worst-case extrapolation?