Luca Eyring

98 posts

@LucaEyring

@ELLISforEurope PhD student @ExplainableML, Research Intern @InceptiveCom

Munich, Germany · Joined October 2022
1.1K Following · 549 Followers
Pinned Tweet
Luca Eyring @LucaEyring
Reward hacking is challenging when fine-tuning few-step Diffusion models. Direct fine-tuning on rewards can create artifacts that game metrics while degrading visual quality. We propose Noise Hypernetworks as a theoretically grounded solution, inspired by test-time optimization.
[GIF]
8 replies · 52 reposts · 356 likes · 50.6K views
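The pinned tweet's idea — replace per-sample test-time noise optimization with a small learned module that refines the initial noise once before a frozen few-step generator — can be sketched roughly as follows. This is a toy numpy illustration under my own assumptions; `g_phi` and `generator` are invented stand-ins, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def g_phi(eps):
    """Stand-in 'noise hypernetwork': a tiny linear map predicting a residual."""
    W = 0.1 * np.eye(eps.shape[-1])   # frozen toy weights
    return eps + eps @ W              # refined noise keeps the noise shape

def generator(z):
    """Stand-in frozen one-step generator (e.g. a distilled diffusion model)."""
    return np.tanh(z)                 # maps noise to a bounded 'image'

eps = rng.standard_normal((4, 16))    # batch of initial Gaussian noise
z = g_phi(eps)                        # one cheap forward pass, no test-time optimization loop
x = generator(z)

assert z.shape == eps.shape and x.shape == eps.shape
```

The point of the structure: only `g_phi` would be trained on the reward, while the generator stays frozen, which is what limits how far the output can drift from the base model's distribution.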
Luca Eyring retweeted
Oussama Zekri @oussamazekri_
What if discrete diffusion didn’t have to be stuck with mask or uniform noise? 🤔 In our new paper, we show how to go beyond them, unlocking much richer noising processes. And the empirical results are surprisingly strong! 🚀
🌐 Project Page: oussamazekri.fr/gdds
📑 Paper: arxiv.org/pdf/2603.21342
💻 Code: github.com/ozekri/gdds
Thread below 🧵
5 replies · 19 reposts · 93 likes · 8.2K views
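The "richer noising processes" claim can be made concrete via the standard discrete-diffusion view of noising as a Markov transition matrix over the vocabulary. A toy numpy sketch (my own illustration, not the authors' formulation or code) showing uniform and absorbing/mask noise as two special cases of a general row-stochastic kernel:

```python
import numpy as np

V = 4        # vocabulary size (here the last id plays the role of [MASK])
alpha = 0.7  # probability of keeping the current token for one step

# Uniform noise: leak mass equally onto all tokens.
Q_uniform = alpha * np.eye(V) + (1 - alpha) * np.ones((V, V)) / V
# Absorbing ("mask") noise: leak all mass onto the mask token.
Q_absorb = alpha * np.eye(V)
Q_absorb[:, -1] += 1 - alpha
# An arbitrary richer process: keep the token or move to the next id (cyclically).
Q_custom = alpha * np.eye(V) + (1 - alpha) * np.roll(np.eye(V), 1, axis=1)

x0 = np.eye(V)[1]                           # one-hot token with id 1
for Q in (Q_uniform, Q_absorb, Q_custom):
    assert np.allclose(Q.sum(axis=1), 1.0)  # each is a valid Markov kernel
    qt = x0 @ Q                             # token distribution after one noising step
    assert np.isclose(qt.sum(), 1.0)
```

Any matrix with non-negative entries and unit row sums defines a valid forward process; the design space the tweet points at is everything beyond the first two special cases.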
Luca Eyring @LucaEyring
Thanks for the acknowledgement and cool work on VFMs! My view of VFM vs. HyperNoise would be that in practice VFM can probably achieve stronger and more efficient reward alignment due to the additional fine-tuning of the generator. However, its limitation is that it requires real data to train, while HyperNoise operates completely data-free. Also, fine-tuning the generator could make VFM more prone to reward hacking. For inverse problems, the goal is very strong reward alignment, while reward hacking is not really a problem. So I fully agree with VFM being better aligned with this setting! For general preference alignment, e.g. in T2I, the data requirement might hinder VFM's applicability in practice, although I would be quite curious to see how VFM and HyperNoise compare in that setting, and whether reward hacking occurs. Happy to discuss more about this!
0 replies · 0 reposts · 1 like · 24 views
Abbas Mammadov @AbbasMammadov11
Thanks for pointing both of these out, @yuanzhi_zhu! We will definitely cite Noise Hypernetworks and Golden Noise in our next revision, as both are highly relevant to our core premise. To give a bit of context on how VFM differs from Noise Hypernetworks: we extend our method to inverse problems where the input distribution changes, meaning we can't easily rely on a pre-trained model to act as the adapter. Also, because we use a tractable Gaussian variational family, we don't require assumptions on the Lipschitz constant. To compensate for the limited expressivity of the Gaussian, we train both the adapter and the flow map jointly. Thanks again for flagging these!
1 reply · 0 reposts · 2 likes · 104 views
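The setup described in the reply — an adapter that outputs a tractable Gaussian over noise, reparameterized sampling, and a one-step flow map, with both trained jointly — can be sketched as follows. This is a hedged toy illustration; `adapter` and `flow_map` are names I invented, not the VFM codebase:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy latent dimension

def adapter(y):
    """Toy adapter: condition -> Gaussian parameters (jointly trained in the paper's setup)."""
    mu = 0.5 * y
    log_sigma = -0.5 * np.ones_like(y)
    return mu, np.exp(log_sigma)

def flow_map(z):
    """Toy stand-in for the learned one-step flow map (generator)."""
    return z / np.sqrt(1.0 + z**2)   # bounded output in (-1, 1)

y = rng.standard_normal(D)           # a condition (e.g. class embedding or measurement)
mu, sigma = adapter(y)
eps = rng.standard_normal(D)
z = mu + sigma * eps                 # reparameterized sample, so gradients could flow to the adapter
x = flow_map(z)

assert x.shape == (D,)
```

The reparameterization `z = mu + sigma * eps` is what makes the Gaussian family tractable to train end-to-end; the limited expressivity of a single Gaussian is what motivates also fine-tuning the flow map.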
Abbas Mammadov @AbbasMammadov11
🔊 Make Some Noise! Introducing Variational Flow Maps (VFM) - a principled framework for one-step conditional image generation & reward alignment. No iterative guidance. No hundreds of NFEs. Just learn the right noise 🎯
🚀 Below are one-step generation results from VFM after only 0.5 epoch of reward fine-tuning on ImageNet ⚡
Paper: arxiv.org/abs/2603.07276
Code: github.com/abbasmammadov/…
🧵1/10
[image]
2 replies · 18 reposts · 122 likes · 10.3K views
Luca Eyring retweeted
Lukas Thede @lukas_thede
🚨 New paper! arxiv.org/abs/2603.06610
Are you sure your post-trained LLM isn’t forgetting something? Adapting LLMs is known to cause forgetting. We usually measure it via general knowledge benchmarks. But if MMLU doesn’t drop… are you really fine?
[image]
1 reply · 8 reposts · 27 likes · 3.3K views
Luca Eyring retweeted
Nataniel Ruiz @natanielruizg
Excited to show some surprising inventions on generative multiplayer games we made at Google with Stanford. We call the work MultiGen. I've always been inspired by early studios like id Software with Doom or Blizzard with Warcraft bringing networked video games to the next level. We are at the point in history where we can make strides like them, but for generative games.

It's a strange feeling to be in the age of generative video games while still discovering how exactly to train the models and design the tools that make them useful. All of the tools that have been invented for classic game engines need to be redesigned for generative games. For example, level and world design is not entirely possible with existing technology. We introduce editable memory to diffusion game engines, which allows for the design of new levels via a minimap. But we can easily imagine how this can be expanded with different creation tools. The end goal of this research direction is to allow game designers to guide the generation process of their world, at the granularity that they prefer.

Editable memory also allows us to add multiplayer to Generative Doom. We were amazed when we saw GameNGen some years ago, and now you can play it live with friends in real-time, on your couch or even online. Shared representations like our editable memory seem like the future for this type of experience. Models are, in some cases, expensive and approximate encoders but great interpolators and extrapolators. Leveraging their strengths lets you have completely new experiences that can be realized now and not in the distant future.

This work was started at my previous team and continued in collaboration with Stanford. Congratulations to all for the discoveries.
32 replies · 79 reposts · 572 likes · 99K views
Luca Eyring retweeted
Vincent Pauline @vincentpaulinef
🚨 Looking for a fully self-contained intro to diffusion models that covers both continuous (images) and discrete (text, sequences) data?
🆕 We just released: “Foundations of Diffusion Models in General State Spaces: A Self-Contained Introduction”
arXiv: arxiv.org/abs/2512.05092
S/o to @andrea_dittadi for his amazing support & guidance, and huge thanks to @TobiasHppe1, @k_neklyudov, @AlexanderTong7 and @stefanAbauer for their supervision! 🙌
One roadmap for all of diffusion. 🏎️💨
After a few failed posts, broken previews, and getting briefly flagged by X… the full thread's finally out 🤯🧵👇
3 replies · 21 reposts · 52 likes · 9.7K views
Luca Eyring retweeted
Andrea Dittadi @andrea_dittadi
We released a self-contained diffusion intro! We present continuous and discrete side-by-side to highlight the parallel math, and color-coded boxes 🔵🔴🟡 allow you to speed-run the whole thing. Super proud of @vincentpaulinef for an outstanding job!!🎉 arxiv.org/abs/2512.05092
[image]
2 replies · 16 reposts · 103 likes · 21.7K views
Luca Eyring retweeted
Antonio Orvieto @orvieto_antonio
What an excellent self-contained, formal introduction to diffusion by @vincentpaulinef!!! It's fantastic to see write-ups like this, written with the author's interest in understanding and compressing the literature -- for their own enjoyment and interest. To see patterns. Congrats also to my pal @andrea_dittadi arxiv.org/abs/2512.05092
[image]
2 replies · 49 reposts · 376 likes · 35.6K views
Luca Eyring @LucaEyring
Diffusion Circle happening this afternoon!
Sander Dieleman@sedielem

📢 Another #NeurIPS, another diffusion circle! Join us to talk about diffusion models on Friday Dec 5 at 3:30PM in San Diego! Bayside terrace outside room 11 (upstairs) ☀️🚢🌊 Please help spread the word, tell your friends! No slides, no talks, we just sit down and chat 🗣️

0 replies · 0 reposts · 9 likes · 2K views
Luca Eyring @LucaEyring
Excited to be in San Diego at #NeurIPS2025 this week. We'll be presenting Noise Hypernetworks at Hall C,D,E #3605 Thu 4 Dec 11am! If you're interested in Diffusion/Flow Matching or Reward Alignment, I'd love to chat! Feel free to DM me if you'd like to connect.
Luca Eyring@LucaEyring

Reward hacking is challenging when fine-tuning few-step Diffusion models. Direct fine-tuning on rewards can create artifacts that game metrics while degrading visual quality. We propose Noise Hypernetworks as a theoretically grounded solution, inspired by test-time optimization.

0 replies · 9 reposts · 25 likes · 3.3K views
Luca Eyring @LucaEyring
@aakaran31 Finally got to read it now, really cool analysis in there! And indeed, I think one might even be able to train a more general hypernetwork on several of these tasks together.
0 replies · 0 reposts · 1 like · 44 views
Aayush Karan @aakaran31
@LucaEyring Super cool work!! I'd be curious to see how these hypernetworks perform when the rewards are measurement errors for e.g. inpainting or deblurring tasks! We had a similar finding that informed noise initializations make a huge difference for these tasks. (arxiv.org/abs/2506.10955)
1 reply · 0 reposts · 4 likes · 220 views
Luca Eyring retweeted
Kalyan @nkalyanv99
We’re releasing UNI-D², a unified codebase for discrete diffusion language models 🤝🚀
Co-led with @vincentpaulinef and an amazing advisor team: @stefanAbauer, @AlexanderTong7, @andrea_dittadi, @AMK6610, @KaplFer 🙌
🔗 GitHub: github.com/nkalyanv99/UNI…
📚 Docs: nkalyanv99.github.io/UNI-D2/
Reproduce and extend state-of-the-art baselines with one toolkit. Let’s move beyond autoregressive models and push discrete diffusion together 🧵👇
[GIF]
7 replies · 23 reposts · 109 likes · 33.8K views
Luca Eyring retweeted
Vishaal Udandarao @vishaal_urao
🚀 New paper! arxiv.org/abs/2511.16655
Recently, Cambrian-S released models & two benchmarks (VSR & VSC) for “spatial supersensing” in video! We found:
1️⃣ Simple no-frame baseline (NoSense) ~perfectly solves VSR!
2️⃣ Tiny sanity check collapses Cambrian-S perf to 0% on VSC! 🧵👇
[image]
5 replies · 22 reposts · 122 likes · 40.1K views
Dimitri von Rütte @dvruette
@gowthami_s Indeed, that would be interesting! But so far I haven’t seen anybody be able to make this work.
1 reply · 0 reposts · 0 likes · 74 views
Dimitri von Rütte @dvruette
Most diffusion LLMs out there don’t really do diffusion; they just predict the next token in a randomized order. Eventually people will realize that there are much smarter ways to do this.
kalomaze@kalomaze

diffusion lms seem like the kind of thing you'd do when you -want- to point at something new on the architectural front, by raw predisposition, but you aren't inspired in any particular way, so you just shrug and go "idk lets just try to slap diffusion onto discrete spaces lmao"

4 replies · 3 reposts · 32 likes · 4.2K views
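The claim above — that many masked "diffusion" LMs amount to next-token prediction in a randomized order — can be illustrated with a minimal pure-Python decoding loop. This is a sketch of the scheduling only; the predictor is a stub I made up, not any particular model:

```python
import random

random.seed(0)
MASK = "<mask>"

def predict(tokens, pos):
    """Stub predictor; a real model would condition on all currently unmasked tokens."""
    return f"tok{pos}"

def sample(length):
    tokens = [MASK] * length
    order = list(range(length))
    random.shuffle(order)      # pick a random generation order up front...
    for pos in order:          # ...then fill exactly one position per step
        tokens[pos] = predict(tokens, pos)
    return tokens

out = sample(5)
assert MASK not in out         # every position got decoded exactly once
```

Seen this way, the sampler is an any-order autoregressive model; the "smarter ways" the tweet alludes to would change more than just the visitation order.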
Luca Eyring @LucaEyring
Honored to receive one of the Google PhD Fellowships 2025 in ML Foundations for my research on "The Role of the Source Distribution in Generative Modeling: Beyond Fixed Gaussian Noise" :) Grateful to @zeynepakata, Alexey, @TU_Muenchen, @Googleorg, and all my collaborators!
Google.org@Googleorg

🎉 We're excited to announce the 2025 Google PhD Fellows! @GoogleOrg is providing over $10 million to support 255 PhD students across 35 countries, fostering the next generation of research talent to strengthen the global scientific landscape. Read more: goo.gle/43wJWw8

2 replies · 4 reposts · 38 likes · 8.4K views