Luca Eyring
@LucaEyring
@ELLISforEurope PhD student @ExplainableML, Research Intern @InceptiveCom

🚀 Diffusion too slow? Fix it in a few steps.
📢 Introducing NVIDIA FastGen — a plug-and-play research library for turning slow diffusion models into high-quality few-step generators.
⚡ What’s inside:
• Consistency & MeanFlow (CM, sCM, TCM, MeanFlow)
• Distribution Matching (DMD, f-Distill, LADD)
• Long-video generation (CausVid, Self-Forcing)
• Fine-tuning & KD (SFT, CausalSFT, KD, Causal KD)
🧠 Includes:
📷 EDM, DiT, SD 1.5, SDXL, Flux
🎬 WAN (T2V / I2V / VACE), CogVideoX, Cosmos Predict2
✨ One unified interface. Research-ready. Apache-2.0.
🔗 Blog: nvda.ws/3LARhFy
💻 Code: github.com/NVlabs/FastGen
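Among the methods listed, Distribution Matching Distillation (DMD) is a good anchor for intuition. Below is a minimal sketch of its core update under stated assumptions — this is not FastGen's actual API; the generator, teacher, and fake_score callables and the noise schedule are all hypothetical placeholders.

```python
import math
import torch

def alpha_sigma(t: float):
    # Simple variance-preserving schedule, assumed purely for this sketch.
    return math.cos(t * math.pi / 2), math.sin(t * math.pi / 2)

def dmd_step(generator, teacher, fake_score, opt, z, t: float):
    x = generator(z)                           # one/few-step student sample
    a, s = alpha_sigma(t)
    x_t = a * x + s * torch.randn_like(x)      # re-noise the sample to level t

    with torch.no_grad():
        eps_real = teacher(x_t, t)             # frozen teacher's prediction
        eps_fake = fake_score(x_t, t)          # score net tracking the student
        # fake_score is trained concurrently on student samples with ordinary
        # denoising score matching (omitted here); the disagreement between
        # the two predictions approximates the KL gradient between the
        # student's and teacher's distributions.
        grad = eps_fake - eps_real

    # Surrogate loss whose gradient w.r.t. x equals `grad`, pushing the
    # student's outputs toward the teacher's distribution.
    loss = 0.5 * ((x - (x - grad).detach()) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The key design point: the student never regresses the teacher's trajectories directly; it only gets gradient signal where the two score estimates disagree, which is what lets one or a few steps match a many-step teacher.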


Why you should probe more than just the final layer of your Vision Transformer to maximize performance. 🧵👇
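A minimal sketch of what multi-layer probing looks like in practice, assuming a timm ViT and a toy 10-class linear probe (both assumptions; the thread's exact recipe may differ): forward hooks collect the CLS token after every block, and a separate probe is fit per layer so you can pick the best-performing depth on validation data rather than defaulting to the final layer.

```python
import timm
import torch
import torch.nn as nn

# pretrained=False keeps the sketch download-free; use pretrained=True in practice.
model = timm.create_model("vit_base_patch16_224", pretrained=False).eval()

# Forward hooks collect the CLS token after every transformer block.
feats = {}
for i, blk in enumerate(model.blocks):
    blk.register_forward_hook(
        lambda mod, inp, out, i=i: feats.__setitem__(i, out[:, 0].detach())
    )

images = torch.randn(8, 3, 224, 224)           # stand-in batch of images
with torch.no_grad():
    model(images)

# One linear probe per layer (10 classes is an arbitrary assumption); fit each
# on frozen features, then keep the depth whose probe validates best.
probes = {i: nn.Linear(f.shape[-1], 10) for i, f in feats.items()}
logits = {i: probes[i](f) for i, f in feats.items()}
```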

📢 Another #NeurIPS, another diffusion circle! Join us to talk about diffusion models on Friday Dec 5 at 3:30PM in San Diego! Bayside terrace outside room 11 (upstairs) ☀️🚢🌊 Please help spread the word, tell your friends! No slides, no talks, we just sit down and chat 🗣️

Reward hacking is a key challenge when fine-tuning few-step diffusion models: direct fine-tuning on rewards can create artifacts that game metrics while degrading visual quality. We propose Noise Hypernetworks as a theoretically grounded solution, inspired by test-time optimization.
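A minimal sketch of the general idea as the post describes it, not the paper's exact objective: rather than optimizing rewards through the generator's weights (which invites reward hacking), a small residual network reshapes the initial noise fed to a frozen few-step generator, trained to raise a reward while a prior-matching regularizer keeps samples on-distribution. All names and the regularizer weight below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NoiseNet(nn.Module):
    """Maps base Gaussian noise to reshaped noise for a frozen generator."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, eps):
        return eps + self.net(eps)             # residual form stays near the prior

def train_step(noise_net, generator, reward_model, opt, dim: int, lam: float = 0.1):
    eps = torch.randn(16, dim)                 # base noise
    eps_opt = noise_net(eps)                   # learned noise transformation
    x = generator(eps_opt)                     # generator stays frozen throughout
    reward = reward_model(x).mean()
    # Prior-matching regularizer: keep the transformed noise close to a
    # standard Gaussian so reward gains can't come from off-distribution
    # artifacts (the anti-reward-hacking term; weight lam is an assumption).
    reg = (eps_opt ** 2).mean()
    loss = -reward + lam * reg
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The point of amortization: a per-sample test-time noise optimization is distilled into one feed-forward network, so inference stays a single pass while the frozen generator's weights (and hence its distribution) are never touched.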

diffusion lms seem like the kind of thing you'd do when you -want- to point at something new on the architectural front, by raw predisposition, but you aren't inspired in any particular way, so you just shrug and go "idk lets just try to slap diffusion onto discrete spaces lmao"

🎉 We're excited to announce the 2025 Google PhD Fellows! @GoogleOrg is providing over $10 million to support 255 PhD students across 35 countries, fostering the next generation of research talent to strengthen the global scientific landscape. Read more: goo.gle/43wJWw8

