Uriel Singer
@urielsinger
Research Scientist @ Meta AI Research
54 posts · Joined October 2015 · 76 Following · 361 Followers
Uriel Singer reposted
Hila Chefer
Hila Chefer@hila_chefer·
New research from @bfl_ml 🥳 Meet Self-Flow: our self-supervised framework for image, audio, video & world models 🤖 bfl.ai/research/self-… Do generative models really need DINO to learn strong representations? We propose teaching them directly via a joint framework instead 🧵
Hila Chefer tweet media
Uriel Singer
Uriel Singer@urielsinger·
[3/3] We systematically study the key modeling/training/sampling knobs and share practical guidance for better quality ✅ and faster generation ⚡—backed by a large-scale sweep of 56 pretrained models and 549 evaluations to map the design space. 📊
Uriel Singer
Uriel Singer@urielsinger·
[2/3] In our previous paper, Transition Matching: Scalable and Flexible Generative Modeling (arxiv.org/abs/2506.23589), we introduced transition matching—a new generative paradigm. This follow-up goes beyond the concept and asks: which design choices actually matter? 🔍
Uriel Singer reposted
Neta Shaul
Neta Shaul@shaulneta·
After multiple requests for the code of the visuals from my talk about Transition Matching, I made a notebook that reproduces the DTM vs. FM GIF! This demo is a good way to build intuition on how TM and FM differ. github.com/neta93/visual-… @urielsinger
Uriel Singer reposted
Peter Holderrieth
Peter Holderrieth@peholderrieth·
New work: “GLASS Flows: Transition Sampling for Alignment of Flow and Diffusion Models”. GLASS generates images by sampling stochastic Markov transitions with ODEs - allowing us to boost text-image alignment for large-scale models at inference time! arxiv.org/pdf/2509.25170 [1/7]
GIF
Uriel Singer reposted
Heli Ben-Hamu
Heli Ben-Hamu@helibenhamu·
Excited to share our work Set Block Decoding! A new paradigm combining next-token prediction and masked (or discrete diffusion) models, allowing parallel decoding without any architectural changes and with an exact KV cache. Arguably one of the simplest ways to accelerate LLMs!
Uriel Singer reposted
Neta Shaul
Neta Shaul@shaulneta·
DTM vs FM👇 Lots of interest in how Difference Transition Matching (DTM) connects to Flow Matching (FM). Here is a short animation that illustrates Theorem 1 in our paper: For a very small step size (1/T), DTM converges to an Euler step of FM.
GIF
Neta Shaul@shaulneta

[1/n] New paper alert! 🚀 Excited to introduce 𝐓𝐫𝐚𝐧𝐬𝐢𝐭𝐢𝐨𝐧 𝐌𝐚𝐭𝐜𝐡𝐢𝐧𝐠 (𝐓𝐌)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model🤯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya

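The convergence claim in Theorem 1 can be checked numerically on a toy problem. The sketch below is not the paper's setup: the 1-D Gaussian source and target, the tolerance `eps`, and the step size `h` are illustrative assumptions. It estimates the Flow Matching velocity at a point as the mean direction of the source-to-target lines passing near that point, then compares an Euler step of FM with the mean of DTM-style transitions along those same lines.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D coupling (assumption): independent Gaussian source and target samples
n = 200_000
x0 = rng.normal(0.0, 1.0, n)   # source
x1 = rng.normal(3.0, 0.5, n)   # target

t, x, h = 0.5, 1.4, 1e-2       # current time, current state, small step 1/T
eps = 0.02                     # tolerance for a line "intersecting" the state

# Value of every source->target line at time t, and the lines near x
lines_at_t = (1 - t) * x0 + t * x1
hits = np.flatnonzero(np.abs(lines_at_t - x) < eps)

# Flow Matching marginal velocity at (x, t): E[x1 - x0 | x_t near x]
v = (x1[hits] - x0[hits]).mean()
euler_step = x + h * v

# DTM-style transition: follow one intersecting line from t to t + h;
# averaging over all intersecting lines gives the mean DTM step
dtm_next = (1 - t - h) * x0[hits] + (t + h) * x1[hits]
mean_dtm_step = dtm_next.mean()

print(euler_step, mean_dtm_step)  # agree to within eps for small h
```

As `h` shrinks, a single DTM transition concentrates around the FM Euler update, which is the message of the animation.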
Uriel Singer reposted
Neta Shaul
Neta Shaul@shaulneta·
If you're curious to dive deeper into Transition Matching (TM)✨🔍, a great starting point is understanding the similarities and differences between 𝐃𝐢𝐟𝐟𝐞𝐫𝐞𝐧𝐜𝐞 𝐓𝐫𝐚𝐧𝐬𝐢𝐭𝐢𝐨𝐧 𝐌𝐚𝐭𝐜𝐡𝐢𝐧𝐠 (𝐃𝐓𝐌) and Flow Matching (FM)💡.
Neta Shaul tweet media
Neta Shaul@shaulneta
[quoted tweet: the Transition Matching announcement, quoted in full above]
Uriel Singer reposted
moab.arar
moab.arar@ArarMoab·
This paper is awesome. 🔥 Flow-matching for flow-matching! ❌No more coarse-to-fine generation. 🚀Coarse and fine details emerge together during generation. 🏆Results look super promising, especially when you see how the images evolve.
Neta Shaul@shaulneta
[quoted tweet: the Transition Matching announcement, quoted in full above]
Uriel Singer reposted
Neta Shaul
Neta Shaul@shaulneta·
The Difference Transition Matching (DTM) process is so simple to illustrate that you can calculate it on a whiteboard! At each step: Draw all lines connecting source and target (shaded) ⬇️ List those intersecting the current state (yellow) ⬇️ Sample a line from the list (green)
GIF
Neta Shaul@shaulneta
[quoted tweet: the Transition Matching announcement, quoted in full above]
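The three whiteboard steps above map directly onto a toy sampler. The following is a hedged 1-D sketch, with made-up source and target distributions and an arbitrary tolerance for "intersecting"; it is not the paper's actual algorithm, only the intuition behind it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data (assumption): Gaussian source, bimodal target
n = 5000
x0 = rng.normal(0.0, 1.0, n)                                 # source samples
x1 = rng.choice([-2.0, 2.0], n) + 0.2 * rng.normal(size=n)   # target samples

T = 50       # number of transition steps
eps = 0.05   # tolerance for a line "intersecting" the current state

x = rng.normal()  # current state, drawn from the source
for k in range(T):
    t = k / T
    # Step 1: all lines connecting source and target, evaluated at time t
    lines_at_t = (1 - t) * x0 + t * x1
    # Step 2: list the lines passing (near) the current state
    hits = np.flatnonzero(np.abs(lines_at_t - x) < eps)
    if hits.size == 0:
        hits = np.array([np.argmin(np.abs(lines_at_t - x))])  # nearest line
    # Step 3: sample a line from the list and follow it to the next time
    idx = rng.choice(hits)
    t_next = (k + 1) / T
    x = (1 - t_next) * x0[idx] + t_next * x1[idx]

print(x)  # final sample lands near one of the target modes (±2)
```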
Uriel Singer
Uriel Singer@urielsinger·
Introducing Transition Matching (TM) — a new generative paradigm that unifies Flow Matching and autoregressive models into one framework, boosting both quality and speed! Thank you for the great collaboration @shaulneta @itai_gat @lipmanya
GIF
Neta Shaul@shaulneta
[quoted tweet: the Transition Matching announcement, quoted in full above]
Uriel Singer reposted
Yaron Lipman
Yaron Lipman@lipmanya·
**Transition Matching** is a new iterative generative paradigm that uses Flow Matching or AR models to transition between intermediate generation states, leading to improved generation quality and speed!
GIF
Neta Shaul@shaulneta
[quoted tweet: the Transition Matching announcement, quoted in full above]
Uriel Singer reposted
Hila Chefer
Hila Chefer@hila_chefer·
Exciting news from #ICML2025 & #ICCV2025 🥳 - 🥇 VideoJAM accepted as *oral* at #ICML2025 (top 1%) - Two talks at #ICCV2025 ☝️interpretability in the generative era ✌️video customization - Organizing two #ICCV2025 workshops ☝️structural priors for vision ✌️long video gen 🧵👇
Hila Chefer@hila_chefer

VideoJAM is our new framework for improved motion generation from @AIatMeta We show that video generators struggle with motion because the training objective favors appearance over dynamics. VideoJAM directly addresses this **without any extra data or scaling** 👇🧵

Uriel Singer reposted
Itai Gat
Itai Gat@itai_gat·
Excited to share our recent work on corrector sampling in language models! A new sampling method that mitigates error accumulation by iteratively revisiting tokens in a window of previously generated text. With: @shaulneta @urielsinger @lipmanya Link: arxiv.org/abs/2506.06215
Itai Gat tweet media
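The windowed-revisit idea can be sketched with a toy stand-in for the language model. Everything here is illustrative: `sample_next`, the vocabulary, the window size, and the number of corrector passes are assumptions, not the paper's method; the point is only the control flow of generate, then revisit a window of recent tokens and resample each given its prefix.

```python
import random

random.seed(0)

VOCAB = ["a", "b", "c"]

def sample_next(context):
    """Toy stand-in for an LM: repeats the last token 80% of the time."""
    if context and random.random() < 0.8:
        return context[-1]
    return random.choice(VOCAB)

def generate(n_tokens, window=4, passes=2):
    seq = []
    for _ in range(n_tokens):
        # Ordinary autoregressive step
        seq.append(sample_next(seq))
        # Corrector phase (sketch): revisit a window of previously
        # generated tokens and resample each one given its prefix
        for _ in range(passes):
            for i in range(max(0, len(seq) - window), len(seq)):
                seq[i] = sample_next(seq[:i])
    return seq

out = generate(10)
print("".join(out))
```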
Uriel Singer reposted
Hila Chefer
Hila Chefer@hila_chefer·
Beyond excited to share FlowMo! We found that the latent representations of video models implicitly encode motion information and can guide the model toward coherent motion at inference time Very proud of @ariel__shaulov @itayhzn for this work! Plus, it’s open source! 🥳
Itay Hazan@itayhzn

🧵1/ Text-to-video models generate stunning visuals, but… motion? Not so much. You get extra limbs, objects popping in and out... In our new paper, we present FlowMo -- an inference-time method that reduces temporal artifacts without retraining or architectural changes. 👇

Uriel Singer reposted
Michael Hassid
Michael Hassid@MichaelHassid·
The longer a reasoning LLM thinks, the more likely it is to be correct, right? Apparently not. Presenting our paper: “Don’t Overthink it. Preferring Shorter Thinking Chains for Improved LLM Reasoning”. Link: arxiv.org/abs/2505.17813 1/n
Michael Hassid tweet media