Tali Dekel
@talidekel
114 posts
Associate Professor @ Weizmann Institute · Research Scientist @ Google DeepMind
Joined May 2019
311 Following · 2.1K Followers
Tali Dekel@talidekel·
Video models learn so much about physical dynamics, actions, and object interactions 🧠. With DynaEdit, we unlock this knowledge to perform complex, non-rigid edits, without any training 🚀! w/ @vd_kulikov @Roni_Paiss @kusichan @inbar_mosseri @t_michaeli
Vova Kulikov@vd_kulikov

Video editing just got more dynamic! 🚀 Thrilled to share DynaEdit: a training-free, text-based method for non-rigid video editing. Work done during my internship at @GoogleDeepMind with @Roni_Paiss, @kusichan, @inbar_mosseri, @talidekel, @t_michaeli dynaedit.github.io

Tali Dekel@talidekel·
Very happy to share this work! DynVFX started as an academic project, offering an early, research-grade demonstration of the “add X to my video” capability. It’s been exciting to watch this task become a widely used feature in today’s foundation models. Code + paper are out.
Danah Yatim@DanahYatim

Sharing that DynVFX, our framework for augmenting real videos with dynamic content, is accepted to SIGGRAPH Asia 2025!🥳 Who is this framework for? EVERYONE. It's tuning-free, zero-shot, and fully automated – All you need to do is give an input video + simple text describing the new content. What's the trick? enforcing the generation to be content-aware of existing scene anchors via selectively extending the attention mechanism in a T2V DiT model. And code is finally out!: github.com/DynVFX/dynvfx Read the paper for more details: arxiv.org/abs/2502.03621 Website: dynvfx.github.io Grateful for presenting it recently at #MIT and next week at #SIGGRAPHAsia2025! With the amazing @RafailFridman, @omerbartal, @talidekel

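The "selectively extending the attention mechanism" idea described in the DynVFX announcement can be illustrated with a toy numpy sketch. This is a minimal illustration, not the DynVFX implementation: all names, shapes, and the masking scheme are my own assumptions. The gist is that queries from the generated stream attend jointly over the generated tokens and a selected subset of keys/values taken from the source video (the "scene anchors"), which keeps the new content consistent with the existing scene.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def extended_attention(q_gen, k_gen, v_gen, k_anchor, v_anchor, mask=None):
    """Toy 'extended attention': generated-stream queries attend over both
    generated tokens and anchor tokens from the source video.
    mask (optional, bool per anchor token) selects which anchors to expose."""
    if mask is not None:
        k_anchor, v_anchor = k_anchor[mask], v_anchor[mask]
    k = np.concatenate([k_gen, k_anchor], axis=0)  # (N_gen + N_anchor, d)
    v = np.concatenate([v_gen, v_anchor], axis=0)
    d = q_gen.shape[-1]
    attn = softmax(q_gen @ k.T / np.sqrt(d))       # (N_gen, N_gen + N_anchor)
    return attn @ v                                # (N_gen, d)

rng = np.random.default_rng(0)
d = 8
q = rng.normal(size=(4, d)); kg = rng.normal(size=(4, d)); vg = rng.normal(size=(4, d))
ka = rng.normal(size=(6, d)); va = rng.normal(size=(6, d))
keep = np.array([True, True, False, True, False, True])  # 'selective' anchor subset
out = extended_attention(q, kg, vg, ka, va, mask=keep)
print(out.shape)  # (4, 8)
```

In a real T2V DiT, the anchor keys/values would come from attention layers run on the input video's latents rather than from random arrays, and the selection would target specific scene content.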
Tali Dekel@talidekel·
Image generation? (Almost) solved. Video? Getting there. But image collections — how we actually capture the world — remain largely untouched. It's time to unlock GenAI for this fundamental visual modality. 🌟
Kate Feingold@kate_feingold

Image editing? Video editing? What about image collections? Excited to share our latest work: Match-and-Fuse 🪇 match-and-fuse.github.io Check out how we tackle the largely unexplored task of set-to-set generation in a training-free mask-free manner by leveraging the off-the-shelf 2D matches and the emergent grid prior of T2I models in a joint framework. Work with the amazing @omri_kaduri and @talidekel 🧵 [1/3]

Tali Dekel retweeted
Oliver Wang@oliver_wang2·
The rollout continues. Veo3 is now available to Gemini Pro users!
Josh Woodward@joshwoodward

Veo 3 dropped about 100 hours ago, and it's been on 🔥🔥🔥 ever since. Now, we’re excited to announce:
+ 71 new countries have access
+ Pro subscribers get a trial pack of Veo 3 on the web (mobile soon)
+ Ultra subscribers get the highest # of Veo 3 gens w/ refreshes
How to try it…
➡️ Gemini (gemini.google):
* Great for everyone - click the Video chip in the prompt bar, and just describe your video
* Pro subscribers now get a 10-pack so you can try it
* Ultra: MAX limits, daily refresh!
➡️ Flow (flow.google):
* Great for AI filmmakers
* Pro: 10 gens/month
* Ultra: Now 125 gens/month (up from 83)!

Tali Dekel@talidekel·
So much is already possible in image generation that it's hard to get excited. TokenVerse has been a refreshing exception! Disentangling complex visual concepts (pose, lighting, materials, etc.) from a single image — and mixing them across others with plug-and-play ease!
Daniel Garibi@DanielGaribi

Excited to share that "TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space" got accepted to SIGGRAPH 2025! It tackles disentangling complex visual concepts from as little as a single image and re-composing concepts across multiple images into a coherent result. token-verse.github.io #SIGGRAPH2025

Tali Dekel@talidekel·
"Giraffe wearing a neck warmer." 2024: a video generated entirely by Veo. 2022: text-driven editing using test-time optimization and CLIP, by Text2LIVE. Quite a stride!
Tali Dekel retweeted
Pika@pika_labs·
Today we launched our Pika 2.0 model. Superior text alignment. Stunning visuals. And ✨Scene Ingredients✨that allow you to upload images of yourself, people, places, and things—giving you more control and consistency than ever before. It’s almost like twelve days worth of gifts in one 💅 Available now at Pika.art
Tali Dekel@talidekel·
Understanding the inner workings of foundation models is key for unlocking their full potential. While the research community has explored this for LLMs, CLIP, and text-to-image models, it's time to turn our focus to VLMs. Let's dive in! 🌟 vision-of-vlm.github.io
Omri Kaduri@omri_kaduri

🔍 Unveiling new insights into Vision-Language Models (VLMs)! In collaboration with @OneViTaDay & @talidekel, we analyzed LLaVA-1.5-7B & InternVL2-76B to uncover how these models process visual data. 🧵 vision-of-vlm.github.io

Tali Dekel@talidekel·
Working on layered video decomposition for a few years now, I'm super excited to share these results! Casual videos to *fully visible* RGBA layers, even under significant occlusions! Kudos @YaoChihLee, @erika_lu_, Sarah Rumbley, @GeyerMichal, @jbhuang0604, and @forrestercole
Yao-Chih Lee@YaoChihLee

Excited to introduce our new paper, Generative Omnimatte: Learning to Decompose Video into Layers, with the amazing team at Google DeepMind! Our method decomposes a video into complete layers, including objects and their associated effects (e.g., shadows, reflections).

Tali Dekel@talidekel·
Unlike images, getting customized video data is challenging. Check out how we can customize a pre-trained text-to-video model *without* any video data!
Hila Chefer@hila_chefer

Introducing✨Still-Moving✨—our work from @GoogleDeepMind that lets you apply *any* image customization method to video models🎥 Personalization (DreamBooth)🐶stylization (StyleDrop) 🎨 ControlNet🖼️—ALL in one method! Plus… you can control the amount of generated motion🏃‍♀️ 🧵👇

Tali Dekel@talidekel·
Self-supervised representation learning (DINO) + test-time optimization = DINO-Tracker! Achieving SOTA tracking results across long-range occlusions. Congrats @tnarek99 @assaf_singer and @OneViTaDay on the great work! 🦖🦖
Narek Tumanyan@tnarek99

Excited to present DINO-Tracker (accepted to #ECCV2024)! A novel self-supervised method for long-range dense tracking in video, which harnesses the powerful visual prior of DINO. Project page: dino-tracker.github.io. [1/4] @assaf_singer @OneViTaDay @talidekel

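The feature-matching core that DINO-Tracker builds on can be sketched in a few lines; this is a toy nearest-neighbor matcher over per-pixel features, not the authors' test-time optimization, and the feature maps here are synthetic stand-ins for real DINO features. A query point is located in a target frame by cosine similarity between L2-normalized descriptors.

```python
import numpy as np

def normalize(feat):
    # feat: (H, W, C) feature map; L2-normalize per location so that a dot
    # product between descriptors equals cosine similarity.
    return feat / (np.linalg.norm(feat, axis=-1, keepdims=True) + 1e-8)

def track_point(feat_src, feat_tgt, yx):
    """Find the target-frame position whose feature best matches the
    source-frame feature at (y, x), via argmax of cosine similarity."""
    fs = normalize(feat_src)[yx[0], yx[1]]  # (C,) query descriptor
    ft = normalize(feat_tgt)                # (H, W, C)
    sim = ft @ fs                           # (H, W) cosine similarity map
    return np.unravel_index(np.argmax(sim), sim.shape)

# Synthetic check: plant the query descriptor at a known shifted location.
rng = np.random.default_rng(1)
H, W, C = 16, 16, 32
f0 = rng.normal(size=(H, W, C))
f1 = rng.normal(size=(H, W, C))
f1[10, 5] = f0[3, 7]                  # the point "moved" from (3, 7) to (10, 5)
print(track_point(f0, f1, (3, 7)))    # recovers (10, 5)
```

DINO-Tracker's contribution is to refine this raw matching at test time per video, which is what lets it stay reliable across long occlusions.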
Tali Dekel retweeted
Kartik Chandra (also on Mastodon and Bsky)
Hi friends — I'm delighted to announce a new summer workshop on the emerging interface between cognitive science 🧠 and computer graphics 🫖! We're calling it: COGGRAPH! coggraph.github.io June – July 2024, free & open to the public (all career stages, all disciplines) 🧶