Jonas @jomueller0

3 posts

Mathematics and Computer Science @ TU Berlin

Joined January 2026
115 Following · 5 Followers
Jonas retweeted
Moritz Weckbecker @MWeckbecker·
1/ We found a new way to misalign an entire AI agent network by compromising just one agent. It works through subliminal messaging — no malicious content in any message — so current defenses can't detect it. We call it Thought Virus. 🧵
[image]
19 replies · 38 reposts · 211 likes · 57.9K views
Jonas @jomueller0·
@alz_zyd_ Well there just is a difference between a textbook that is written as a reference and one that's written to teach you that topic?
0 replies · 0 reposts · 1 like · 12 views
alz @alz_zyd_·
Math textbooks are written in a pointlessly obtuse way. Gemini does an incomparably better job. My professional opinion is that all undergrads learning real analysis should give up reading baby Rudin, and simply learn analysis from Gemini instead
[3 images]
163 replies · 78 reposts · 968 likes · 336K views
Jonas @jomueller0·
@Algomancer arxiv.org/abs/2510.12636 they do pretty much this. I.e., they learn the noising process/latent distribution by parametrizing it via quantile functions (roughly the inverses of CDFs) and then using the duality with real-valued measures; diffusion/FM are special cases.
0 replies · 0 reposts · 0 likes · 19 views
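A minimal toy sketch of the quantile-function idea in the tweet above (my own illustration, not the paper's method): a distribution can be parametrized by its quantile function Q (the inverse CDF), and sampled by pushing uniform noise through Q. Here the quantile function of a Laplace distribution stands in for a learned one.

```python
import numpy as np

def laplace_quantile(u, b=1.0):
    """Quantile function (inverse CDF) of Laplace(0, b):
    Q(u) = -b * sign(u - 1/2) * log(1 - 2|u - 1/2|)."""
    return -b * np.sign(u - 0.5) * np.log1p(-2.0 * np.abs(u - 0.5))

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)

# Inverse-CDF sampling: uniform noise -> Laplace samples.
samples = laplace_quantile(u)

# Sanity checks: median near Q(0.5) = 0, variance near 2*b^2 = 2.
print(np.median(samples), samples.var())
```

In the learned setting, `laplace_quantile` would be replaced by a parametrized monotone map trained from data; the sampling mechanics stay the same.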
Adam Hibble @Algomancer·
Question for my flow matching / diffusion pilled friends. I've been doing this for years but never seen it on my feed. (I haven't actively looked for it, so if you know any reference papers, kinda just seemed obvious.) I use it for my diffusion/flow-matching prior VAEs, but it works fine in rectified flow / mean flow / etc. recipes where you're focused on reducing the number of function evaluations.

Do people ever learn the prior/starting distribution? I.e., where the noise distribution (prior) is learned rather than fixed to N(0, I). (Quick toy example below from some of my adversarial flow matching experiments so you know what I mean.)

The intuition being that optimal transport cost depends on the choice of source distribution. A learned prior reduces the total transport distance by better aligning with the data geometry. github.com/Algomancer/Adv…
[image]
22 replies · 20 reposts · 283 likes · 26.1K views