OmnAI Lab @OmnAI_Lab

32 posts

OmnAI Lab, @SapienzaRoma, Computer Science Department (DI). PI: @_iAc

Rome · Joined October 2025
73 Following · 61 Followers
Pinned Tweet
OmnAI Lab @OmnAI_Lab
1/ Large Language Models leak energy when they hallucinate. We built a training-free method to catch the spill and keep them *grounded*. Our #ICLR2026 paper introduces Spilled Energy for SOTA zero-shot detection. TLDR: Hallucinations violate the probability chain rule.
7 replies · 31 reposts · 193 likes · 11.5K views
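
For illustration, here is what a chain-rule-based spillage score could look like in code. This is a speculative sketch, not the paper's method: the energy definition E_t = -logsumexp(logits_t), the function name `spilled_energy`, and the per-token comparison are all assumptions made for this example.

```python
# Speculative sketch of a chain-rule-based "spilled energy" score.
# NOT the paper's implementation: the energy definition and the
# per-token comparison are illustrative assumptions only.
import torch
import torch.nn.functional as F

@torch.no_grad()
def spilled_energy(model, input_ids: torch.Tensor) -> torch.Tensor:
    """Score each token by how much the step-to-step change in the
    model's free energy exceeds the chain-rule log-probability it
    actually assigned to that token (hypothetical definition)."""
    logits = model(input_ids).logits[0, :-1]    # (T-1, V): preds for tokens 1..T-1
    targets = input_ids[0, 1:]                  # tokens actually emitted
    log_probs = F.log_softmax(logits, dim=-1)
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    energy = -torch.logsumexp(logits, dim=-1)   # free energy per step
    # "Spillage": energy change not accounted for by the chain-rule term.
    return (energy[1:] - energy[:-1]) - token_lp[1:]

# Usage (the threshold tau is hypothetical):
# spill = spilled_energy(model, ids); flagged = (spill > tau).nonzero()
```

A training-free detector in this spirit needs only a single forward pass over the generated answer, which is consistent with the zero-shot claim in the thread.
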
OmnAI Lab reposted
Alessandro Salvatore @AleSalvatore00
Why can't we solve adversarial examples? After a decade of work, neural nets still get fooled by imperceptible noise. We think we finally know the geometric reason why — and it connects to AI alignment. 🧵
17 replies · 90 reposts · 801 likes · 68.9K views
OmnAI Lab reposted
GLADIA Research Lab @GladiaLab
Language Models are Injective and Hence Invertible (ICLR 2026), aka the "pringle paper", is now a public graph on @paradigmainc's Flywheel. In the paper, we show that LMs can be inverted and, contrary to common belief, do not discard information about their inputs at inference time.
GLADIA Research Lab @GladiaLab

LLMs are injective and invertible. In our new paper, we show that different prompts always map to different embeddings, and this property can be used to recover input tokens from individual embeddings in latent space. (1/6)

1 reply · 18 reposts · 101 likes · 19.9K views
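
As a concrete illustration of why injectivity enables recovery, here is a brute-force inversion sketch. It is not the paper's algorithm; it only uses the fact that in a causal LM the hidden state at position t depends solely on tokens up to t, so under injectivity tokens can be recovered greedily. The HF-style `output_hidden_states=True` interface is assumed.

```python
# Illustrative brute-force inversion of a causal LM's hidden states.
# Not the paper's algorithm: greedy exhaustive search, O(T * vocab)
# forward passes, shown only to make the injectivity claim concrete.
import torch

@torch.no_grad()
def invert_hidden_states(model, target_h: torch.Tensor,
                         vocab_size: int, layer: int = -1) -> list[int]:
    """Recover input ids from per-position hidden states target_h
    of shape (T, d) taken from `layer` of an HF-style causal model."""
    recovered: list[int] = []
    for t in range(target_h.shape[0]):
        prefix = torch.tensor([recovered + [0]])      # trial slot at position t
        best_tok, best_err = 0, float("inf")
        for tok in range(vocab_size):                 # exhaustive candidate scan
            prefix[0, -1] = tok
            h = model(prefix, output_hidden_states=True).hidden_states[layer]
            err = (h[0, t] - target_h[t]).norm().item()
            if err < best_err:                        # injectivity => a unique
                best_tok, best_err = tok, err         # (near-)zero-error match
        recovered.append(best_tok)
    return recovered
```

If different prefixes really always map to different states, the true token is the unique candidate driving the error to zero at each step, which is what makes exact recovery possible in principle.
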
OmnAI Lab reposted
Paradigma @paradigmainc
introducing Flywheel: the infrastructure for autonomous research.
28 replies · 70 reposts · 515 likes · 93.6K views
OmnAI Lab reposted
Adrian R. Minut @adrianrminut
@OmnAI_Lab What we also find very interesting:
- instruct variants show a higher spillage-to-hallucination correlation; post-training artifacts?
- the math is not specific to language modeling: any sequence-to-sequence task abides by the same rules!
0 replies · 2 reposts · 3 likes · 213 views
OmnAI Lab @OmnAI_Lab
6/ We evaluated LLaMA, Mistral, Gemma, and Qwen3 across nine Q/A and reasoning benchmarks, plus a synthetic algebraic stress test. The method effectively localizes the exact answer token and tests for hallucinations.
1 reply · 0 reposts · 13 likes · 639 views
OmnAI Lab reposted
ItalAI @_italai
AI doesn't just need to learn. It needs to unlearn as well ❌ Removing sensitive data or unsafe behaviors is becoming central to modern computer vision. @muv_workshop brings together researchers working on selective forgetting, safe image/video generation, privacy-preserving & ethically compliant AI. Speakers from @GoogleDeepMind, @MIT_CSAIL & more. Join us in Denver👇 …chine-unlearning-for-vision.github.io @CVPR
0 replies · 2 reposts · 4 likes · 307 views
OmnAI Lab reposted
Alessio Sampieri @AlessioSampier1
Don’t miss MUV @CVPR 2026! 🚨 If your work touches machine unlearning or safe adaptation in vision, this is your venue. 📝 Submit by March 15. #CVPR2026 #MUV #Workshop
Machine Unlearning for Vision @ CVPR26 @muv_workshop

Can AI forget? 🧠❌ Join MUV at @CVPR 26 in Denver! 🏔️ Speakers from @GoogleDeepMind, @MIT_CSAIL & more. 📝 Submit by March 15! Organizers: @SapienzaRoma, @MIT, @TU_Muenchen, @_italai and MPI. Details: …chine-unlearning-for-vision.github.io #CVPR2026 #AI #ComputerVision

0 replies · 4 reposts · 9 likes · 2.5K views
OmnAI Lab reposted
Andrew Ng @AndrewYNg
To all my AI friends: Every time I see you, you raise my temperature parameter. Happy Valentine’s Day! ❤️
114 replies · 89 reposts · 2K likes · 121K views
OmnAI Lab reposted
alphaXiv @askalphaxiv
Something better than SAEs just dropped: "Learning a Generative Meta-Model of LLM Activations". Training a diffusion model to understand the activation states of LLMs, and even steer them?! In this paper, they train a diffusion model on 1B+ LLM internal activations to learn what "normal" hidden states look like. Then when you steer a model and its activations get weird, you can denoise/project them back onto the natural manifold, keeping the behavior change while staying fluent and stable. This also reveals cleaner, concept-like "meta-neurons" inside the learned prior!
6 replies · 33 reposts · 279 likes · 19.1K views
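
To make the "denoise back onto the manifold" idea concrete, here is a generic SDEdit/DDIM-style sketch. The `denoiser(x, t)` interface and the schedule handling are assumptions, not the paper's actual API; any eps-prediction diffusion model trained on activations would fit this shape.

```python
# Sketch: project a steered activation back toward the "natural"
# activation manifold with a pretrained diffusion prior.
# Hypothetical interface: denoiser(x, t) predicts the noise eps.
import torch

@torch.no_grad()
def project_to_manifold(denoiser, x_steered: torch.Tensor,
                        alphas_cumprod: torch.Tensor, t_start: int = 50):
    a = alphas_cumprod                       # (T,) cumulative noise schedule
    # Partially corrupt the steered activation (SDEdit-style).
    noise = torch.randn_like(x_steered)
    x = a[t_start].sqrt() * x_steered + (1 - a[t_start]).sqrt() * noise
    # Deterministic DDIM reverse steps back to t = 0.
    for t in range(t_start, 0, -1):
        eps = denoiser(x, t)                 # predicted noise at step t
        x0 = (x - (1 - a[t]).sqrt() * eps) / a[t].sqrt()
        x = a[t - 1].sqrt() * x0 + (1 - a[t - 1]).sqrt() * eps
    return x
```

The choice of t_start controls the trade-off the tweet describes: small values stay close to the steered activation (keeping the behavior change), while larger values pull harder toward the learned prior (restoring fluency and stability).
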
OmnAI Lab reposted
Donato Crisostomi @DonatoCrisosto1
we keep hearing that coding is just moving up the abstraction ladder, like swapping manual assembly for compilers. but what's the equivalent for science? if we abstract away experimental work and math execution, what's left? pure intuition? Join the discussion @iclr_conf
Post-AGI Workshop @ ICLR 2026 @p_agi_workshop

Assuming AGI becomes widely available, what is the future of science? We are launching the P-AGI workshop at @iclr_conf to define the post-AGI research agenda. @DonatoCrisosto1 @teelinsan @valentina__py @pratyusha_PS @ZorahLaehner @EmanueleRodola

0 replies · 3 reposts · 8 likes · 932 views