Iacopo Masi

3.3K posts


@_iAc

computer scientist, professor, researcher in computer vision (teaching machines to see), philosopher and ex-basketball player, scuba diver, human being!

Hope always Home... Joined May 2008
285 Following · 216 Followers
Iacopo Masi retweeted
Stefano Ermon @StefanoErmon
Drifting models as score-based models!
Chieh-Hsin (Jesse) Lai @JCJesseLai

[1/D] 🤔 What are drifting models really connected to?

📢 Our new paper, A Unified View of Drifting and Score-Based Models, shows that the bridge to score-based models is clear and precise (w/ team and @mittu1204, @StefanoErmon, @MoleiTaoMath)!

✍️ Main takeaway: drifting is more closely connected to score-based (diffusion) modeling than it may first appear! 🔗 arxiv.org/abs/2603.07514

🎯 Here's why: drifting's mean-shift moves a sample toward the kernel-weighted average of nearby samples, while the score function points toward regions of higher density. Both describe local directions that push samples toward where data is denser. We show that this link is exact for Gaussian kernels (Section 4.1):

📌 Drifting's mean-shift = a rescaled score-matching field between the Gaussian-smoothed data and model distributions, the vector field underlying score matching (Tweedie!).
📌 This also clarifies the bridge to Distribution Matching Distillation (DMD): both use score-based transport directions and differ only in how the score is realized. Drifting does so nonparametrically through kernel neighborhoods, whereas DMD relies on a pretrained diffusion teacher.

🤔 So what happens for the default Laplace kernel used in drifting models? Let's look below 👇
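The Gaussian-kernel identity the thread states (mean-shift = a rescaled score of the kernel-smoothed density, via Tweedie's formula) is easy to check numerically. A minimal NumPy sketch, not taken from the paper; the dataset, bandwidth `sigma`, and query point are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))      # toy dataset
sigma = 0.5                        # Gaussian kernel bandwidth

def log_p_sigma(x):
    """Log of the Gaussian-smoothed empirical density (up to a constant)."""
    d2 = ((X - x) ** 2).sum(axis=1)
    return np.log(np.exp(-d2 / (2 * sigma**2)).mean())

def mean_shift(x):
    """Drift toward the kernel-weighted average of nearby samples."""
    w = np.exp(-((X - x) ** 2).sum(axis=1) / (2 * sigma**2))
    w /= w.sum()
    return w @ X - x

x = np.array([0.3, -0.7])

# finite-difference estimate of the score of the smoothed density
eps = 1e-5
score = np.array([
    (log_p_sigma(x + eps * e) - log_p_sigma(x - eps * e)) / (2 * eps)
    for e in np.eye(2)
])

# identity: mean-shift = sigma^2 * score (Tweedie's formula)
print(np.allclose(score * sigma**2, mean_shift(x), atol=1e-6))  # True
```

The check uses finite differences rather than the analytic score so the two sides are computed independently; for a Laplace kernel the weighting changes and the identity is no longer exact, which is the question the thread continues with.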

Iacopo Masi retweeted
Alessandro Salvatore @AleSalvatore00
Why can't we solve adversarial examples? After a decade of work, neural nets still get fooled by imperceptible noise. We think we finally know the geometric reason why — and it connects to AI alignment. 🧵
Iacopo Masi retweeted
tensorqt @tensorqt
We're officially live with the open beta of Flywheel. Flywheel is Paradigma's vision of what autonomous research will run on. We've built all of this pre-funding, so expect the pace to accelerate significantly. Super eager to hear your feedback and your ideas. I'll be live-tweeting some of my experiments with Flywheel in a few minutes. We're just getting started.
Paradigma @paradigmainc

introducing Flywheel: the infrastructure for autonomous research.

Iacopo Masi @_iAc
@mathtician @OmnAI_Lab Yes, the gist is that there are hidden degrees of freedom at each autoregressive step that the model can choose, but in principle these quantities should be equal across two time steps to properly model the energy of a sentence E(x1,...,xN).
Mathtician @mathtician
@OmnAI_Lab Really nice! I'm not sure why we expect the spilled energy at each token to be 0, though, since the NLL of a sequence is the sum of spilled energies plus some O(1) boundary terms. Maybe all spilled energies should be the average entropy per token? The histograms in Fig. 3 have some positive bias.
Iacopo Masi retweeted
OmnAI Lab @OmnAI_Lab
1/ Large Language Models leak energy when they hallucinate. We built a training-free method to catch the spill and keep them *grounded*. Our #ICLR2026 paper introduces Spilled Energy for SOTA zero-shot detection. TLDR: Hallucinations violate the probability chain rule.
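The chain-rule identity the thread refers to, log p(x1,...,xN) = Σt log p(xt | x&lt;t), can be illustrated on a toy two-token model. A minimal sketch only; the paper's "spilled energy" measure itself is not reproduced here, and the vocabulary size and distributions below are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
V = 4  # toy vocabulary size

# toy two-token autoregressive model
p1 = rng.dirichlet(np.ones(V))           # p(x1)
p2 = rng.dirichlet(np.ones(V), size=V)   # row i: p(x2 | x1 = i)

# the joint distribution implied by the chain rule
joint = p1[:, None] * p2

x1, x2 = 2, 0
nll_joint = -np.log(joint[x1, x2])
nll_chain = -np.log(p1[x1]) - np.log(p2[x1, x2])

# chain rule: sequence NLL is exactly the sum of per-token NLLs
print(np.isclose(nll_joint, nll_chain))  # True
```

For a well-calibrated model this identity holds by construction; the thread's claim is that hallucinating generations exhibit a measurable deviation from it, which their training-free detector exploits.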
Iacopo Masi retweeted
Adrian R. Minut @adrianrminut
@OmnAI_Lab What we also find very interesting:
- instruct variants show a higher spillage-to-hallucination correlation; post-training artifacts?
- the underlying math is not specific to language modeling; any sequence-to-sequence task abides by the same rules!
Iacopo Masi retweeted
AI Papers arXiv @SciFi
Spilled Energy in Large Language Models. Adrian Robert Minut, Hazem Dewidar, Iacopo Masi. arxiv.org/abs/2602.18671 [cs.AI cs.CL]
Iacopo Masi retweeted
Haider. @slow_developer
Terence Tao says AI is shifting mathematics from isolated case studies to large-scale projects. Instead of spending months on a single problem, researchers can now analyze thousands at once. As "citizen mathematics" grows and AI starts delivering real value, the field is opening up far beyond just PhDs.
Iacopo Masi retweeted
Alessio Sampieri @AlessioSampier1
The Call for Papers for Machine Unlearning for Vision @CVPR is officially open! We’re looking for work on making vision models forget — safely, efficiently, and at scale. Consider submitting! #CVPR #Unlearning #xAI #CV
Machine Unlearning for Vision @ CVPR26 @muv_workshop

🚨 New workshop at #CVPR2026: MUV — Machine Unlearning for Vision As vision models scale and move into real-world use, the ability to remove concepts or behaviors after training is becoming increasingly important. Join us: 🔗 …chine-unlearning-for-vision.github.io @CVPR

Iacopo Masi retweeted
Machine Unlearning for Vision @ CVPR26
We welcome submissions on topics including:
• Machine unlearning & data removal
• Concept erasure & model editing
• Safety, robustness & reliability in vision
• Debiasing and governance of foundation models
• Evaluation protocols for unlearning
📝 Submit by March 15th!
Iacopo Masi retweeted
Alessio Sampieri @AlessioSampier1
Don’t miss MUV @CVPR 2026! 🚨 If your work touches machine unlearning or safe adaptation in vision, this is your venue. 📝 Submit by March 15. #CVPR2026 #MUV #Workshop
Machine Unlearning for Vision @ CVPR26 @muv_workshop

Can AI forget? 🧠❌ Join MUV at @CVPR 26 in Denver! 🏔️ Speakers from @GoogleDeepMind, @MIT_CSAIL & more. 📝 Submit by March 15! Organizers: @SapienzaRoma, @MIT, @TU_Muenchen, @_italai and MPI. Details: …chine-unlearning-for-vision.github.io #CVPR2026 #AI #ComputerVision

Iacopo Masi retweeted
Andrew Akbashev @Andrew_Akbashev
A really dangerous situation. Too many submissions, too many generated papers, too little responsibility.

1. In 2026, more than 24,000 submissions were made to the International Conference on Machine Learning (ICML), twice as many as in 2025. To fight this, the organizers now require researchers to pay $100 for every subsequent paper.
2. LLM adoption has increased researcher productivity by 90% (per a recent paper in Science).
3. The number of papers is becoming far too high. Submissions to arXiv have risen by 50% since 2022.
4. There are simply not enough reviewers. Plus, many scientists no longer want to invest precious time in reviewing for free.
5. We can't easily distinguish AI-made papers from genuine ones.

Important words from Paul Ginsparg, a co-founder of arXiv: "AI slop frequently can't be discriminated just by looking at the abstract, or even by just skimming the full text. This makes it an 'existential threat' to the system."

Basically, we're getting closer to the tipping point.

📍 Many professors blame the AI. But the problem likely lies elsewhere:

1. Without a sufficient number of papers, many PIs can't get funded. They have to prove their credibility to reviewers, and their proposals have to rely on prior publications. In many countries there are informal (or even formal) expectations for how many papers a group of a certain size has to publish to survive, funding-wise.
2. Our students and postdocs need papers if they want to be hired into faculty roles. Yes, some departments hire people with few publications, but the majority still want to ensure their faculty can get funded. If funding is partly a function of papers, this is used in decision-making.
3. The number of papers matters if you want high-level awards. Many of them are not given for a single paper (even a great one); they are given because you made a meaningful CONTRIBUTION to the field. How do you make one? Publish more papers.
4. Tenure promotions in many places take the number of your papers into account (often indirectly). Your tenure may get delayed if you don't publish enough. Not everywhere, but for many mid- to low-ranked universities this story is more or less the same.

And there are many more reasons to mention.

📍 My opinion: much of this is rooted in how funding is distributed. There is a strong correlation between the requirements at a university and the funding-acquisition criteria. If funding were based ONLY on the quality of published papers, universities would hire people for the quality of their science. If funding agencies strongly discouraged publishing too many papers, universities wouldn't expect numbers from faculty during promotions, and some supervisors wouldn't pressure students and postdocs to publish unfinished studies and low-quality data.

Yes, we need good detectors of fake papers. But we also need the right policies and better funding-allocation criteria.
Iacopo Masi @_iAc
👋🏼Hi Fellas, if you work on machine unlearning for vision and related topics, please consider taking a look at our brand new @CVPR 26 workshop 🦾 …chine-unlearning-for-vision.github.io with @AlessioSampier1 @bardh95 @GalassoFab10 @materzynska and Bernt Schiele 🙏🏼
Machine Unlearning for Vision @ CVPR26 @muv_workshop

🚨 New workshop at #CVPR2026: MUV — Machine Unlearning for Vision As vision models scale and move into real-world use, the ability to remove concepts or behaviors after training is becoming increasingly important. Join us: 🔗 …chine-unlearning-for-vision.github.io @CVPR
