Jonas reposted

Jonas
@jomueller0
Mathematics and Computer Science @ TU Berlin
Joined January 2026
115 Following · 5 Followers

@Algomancer arxiv.org/abs/2510.12636 they do pretty much this. I.e., they learn the noising process / latent distribution by parametrizing it via quantile functions (roughly the inverse of CDFs) and then using the duality with real-valued measures; diffusion/FM are special cases.
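The quantile-function idea in the reply can be sketched in a few lines. This is a hypothetical illustration, not the paper's construction: a 1-D latent distribution is parametrized by its quantile function Q(u) (a monotone map on [0, 1]), and sampling amounts to pushing uniform noise through Q. Monotonicity is enforced by cumulatively summing positive increments, so Q is a valid generalized inverse CDF.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unconstrained parameters standing in for "learned" weights (toy values here).
n_knots = 64
raw = rng.normal(size=n_knots)

# softplus -> strictly positive steps; cumsum -> monotone knot values,
# so the piecewise-linear map below is a valid quantile function.
increments = np.log1p(np.exp(raw))
knots = np.cumsum(increments)
knots = knots - knots.mean()  # center; an overall scale could also be learned

grid = np.linspace(0.0, 1.0, n_knots)

def quantile(u):
    """Evaluate the parametrized quantile function by linear interpolation."""
    return np.interp(u, grid, knots)

# Sampling from the learned distribution: push uniform noise through Q.
u = rng.uniform(size=5_000)
samples = quantile(u)
```

In a trainable version, `raw` would be optimized by gradient descent, and the same construction extends coordinate-wise (or via conditional quantiles) to higher dimensions.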

Question for my Flow Matching / Diffusion pilled friends. I've been doing this for years but never seen it on my feed. (I haven't actively looked for it, so if you know any reference papers, drop them; it kinda just seemed obvious.) I use it for my diffusion / flow-matching prior VAEs, but it works fine in rectified flow / mean flow / etc. recipes where you're focused on reducing the number of function evaluations.
Do people ever learn the prior/starting distribution? I.e., where the noise distribution (prior) is learned rather than fixed to N(0, I). (Quick toy example below from some of my adversarial flow matching experiments so you know what I mean.) The intuition being that optimal transport cost depends on the choice of source distribution: a learned prior reduces the total transport distance by better aligning with the data geometry.
github.com/Algomancer/Adv…
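The transport-cost intuition above can be checked numerically. In this hedged sketch (not the linked repo's code), the "learned" prior is simply moment-matched to a toy data distribution, standing in for a source distribution learned by gradient descent; under independent pairing, the expected squared distance a straight-line flow-matching path must cover shrinks when the prior aligns with the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "data" distribution, deliberately offset from the origin.
data = rng.normal(loc=5.0, scale=0.5, size=(10_000, 2))

# Fixed prior: standard normal N(0, I).
z_fixed = rng.normal(size=data.shape)

# "Learned" prior: a Gaussian moment-matched to the data (a stand-in for
# actually learning the source distribution).
z_learned = data.mean(axis=0) + data.std(axis=0) * rng.normal(size=data.shape)

# Mean squared transport cost under independent (random) pairing: this is
# the distance the linear interpolation path x_t = (1-t) z + t x must cover.
cost_fixed = np.mean(np.sum((data - z_fixed) ** 2, axis=1))
cost_learned = np.mean(np.sum((data - z_learned) ** 2, axis=1))
```

With the data centered at (5, 5), `cost_fixed` is dominated by the ~50 contribution from the mean offset, while `cost_learned` collapses to the residual variance, so the aligned prior shortens the paths the velocity field has to represent.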






