Peter Potaptchik
@PPotaptchik

34 posts
DPhil student at Oxford https://t.co/JH0l4u7wHv

Oxford · Joined July 2019
314 Following · 284 Followers
Pinned Tweet
Peter Potaptchik @PPotaptchik ·
✨🚀Reward alignment just hit a major milestone🚀✨ Controlling generative models is usually bottlenecked by expensive trajectory rollouts 🐢🐌 or massive particle ensembles 🐢🐌. Meta Flow Maps (MFMs) eliminate both: we extend consistency models and flow maps into the stochastic regime, enabling efficient generative control. Is that a text-to-image model you see below? Nope. It's just ImageNet with inference-time steering 🤯🤯🤯 Want unbiased off-policy fine-tuning? We got that too. 📜arxiv.org/abs/2601.14430 🌐meta-flow-maps.github.io 🧵1/10
[image]
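The tweet doesn't spell out the MFM algorithm, but "inference-time steering" toward a reward has a generic toy form worth sketching: nudge samples from a base model with the base score plus a scaled reward gradient. This is a made-up Langevin illustration (the Gaussian base, the reward, and every name here are my assumptions, not the MFM method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained model: base density is a standard 2-D Gaussian.
def grad_log_base(x):
    return -x  # score of N(0, I)

# Hypothetical reward: prefer samples near the point (2, 2).
target = np.array([2.0, 2.0])
def grad_reward(x):
    return -(x - target)

def steer(x, steps=500, eps=0.01, lam=1.0):
    """Langevin-style inference-time steering: drift along the base score
    plus a scaled reward gradient, injecting noise each step."""
    for _ in range(steps):
        drift = grad_log_base(x) + lam * grad_reward(x)
        x = x + eps * drift + np.sqrt(2 * eps) * rng.normal(size=x.shape)
    return x

x0 = rng.normal(size=(1000, 2))  # unsteered samples from the base model
xs = steer(x0.copy())            # steered samples
print(x0.mean(axis=0).round(2), xs.mean(axis=0).round(2))
```

With lam=1 the steered stationary density is the product of base and exp(reward), a Gaussian centered at (1, 1), so the steered sample mean moves roughly halfway toward the target while the unsteered mean stays near the origin.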
Peter Potaptchik @PPotaptchik ·
@msalbergo But wait, is there cool theory too? Oh boy yes! 🤤 We've got:
🔹Connections to SOC
🔹Esscher transform interpretation
🔹Higher-order cumulants
🔹Optimal control variates
🔹Connections to Weighted Flow Matching
🔹Extensions to Flow Maps
📄: arxiv.org/abs/2512.21829 (5/5)
Peter Potaptchik @PPotaptchik ·
🎆⭐️You thought 2025 would end quietly? Think again. One last exciting update: Tilt Matching!⭐️🎆 We propose a simple, scalable algorithm to sample from unnormalized densities and fine-tune generative models. 📄: arxiv.org/abs/2512.21829 With @brianlee_lck @msalbergo (1/5)
[image]
Peter Potaptchik retweeted
Tyler Farghly @tylerfarghly ·
Our paper is @ NeurIPS!! TLDR: we study how inductive bias in score matching shapes sample geometry
- Argue score approximation ≈ density smoothing in log-domain
- Prove log-domain smoothing preserves data geometry
- Different kernels create different interpolating geometries
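The "log-domain smoothing preserves data geometry" claim has a simple numerical illustration (my construction, not the paper's experiment): convolve the log-density of a bimodal mixture with a Gaussian kernel and check that the two modes stay put.

```python
import numpy as np

# Grid and a bimodal target density (mixture of two Gaussians).
xs = np.linspace(-6, 6, 1201)
dx = xs[1] - xs[0]

def gauss(x, mu, s):
    return np.exp(-((x - mu) ** 2) / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

p = 0.5 * gauss(xs, -2.0, 0.5) + 0.5 * gauss(xs, 2.0, 0.5)
log_p = np.log(p)

# Gaussian smoothing kernel on its own (narrow) grid, normalized.
kx = np.arange(-3.2, 3.2 + dx / 2, dx)
kern = gauss(kx, 0.0, 0.8)
kern /= kern.sum()

# Smooth in the LOG domain, then read off the smoothed score.
log_p_smooth = np.convolve(log_p, kern, mode="same")
score = np.gradient(log_p_smooth, dx)

# Modes = downward zero-crossings of the score; keep the interior
# region only, to avoid zero-padding edge effects of np.convolve.
half = len(kern) // 2
s = score[half : len(xs) - half]
xc = xs[half : len(xs) - half]
modes = xc[:-1][(s[:-1] > 0) & (s[1:] < 0)]
print(modes)
```

Both modes of the log-smoothed density land close to ±2, where the original density put them; smoothing the density itself with a wide enough kernel would instead merge them, which is the geometric contrast the thread is pointing at.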