Venkatesh Saligrama

63 posts

@Saligrama_Lab

AI Prof. and Amazon Scholar focused on learning under constraints of all forms - feature costs, communication, computation, limited/biased supervision.

Joined September 2020
46 Following · 121 Followers
Venkatesh Saligrama@Saligrama_Lab·
If AI is going to help produce expert-looking knowledge, evaluation cannot be a one-time act. It must become evolving infrastructure: an ongoing collaboration among humans, models, and the evidence they surface together.
Ground Truth is a Process: Not a Dataset!
Full deep dive: arxiv.org/abs/2603.05912
Paper appearing in @aclmeeting
0 replies · 0 reposts · 0 likes · 36 views
Venkatesh Saligrama@Saligrama_Lab·
The shift is dramatic. Experts who are unreliable one-shot labelers become far more reliable as auditors. Focusing on comparing two concrete cases helped boost expert accuracy on our hidden test set from 60.8% → 90.9%. Ground truth becomes versioned consensus.
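The pairwise-audit workflow the tweet describes can be sketched as a tiny consensus loop. This is a hypothetical illustration of the idea, not the paper's actual protocol; all names and data below are made up:

```python
from collections import Counter

def pairwise_consensus(comparisons):
    """Tally expert pairwise verdicts: each entry is ((case_a, case_b), 'a' or 'b')."""
    votes = Counter()
    for (a, b), verdict in comparisons:
        votes[a if verdict == "a" else b] += 1
    return votes

# Hypothetical audit round: three experts each compare the same two reports
round_1 = [(("r1", "r2"), "a"), (("r1", "r2"), "a"), (("r1", "r2"), "b")]
votes = pairwise_consensus(round_1)
winner, _ = votes.most_common(1)[0]

# "Versioned consensus": the label is a record that later rounds can revise
label = {"pair": ("r1", "r2"), "winner": winner, "version": 1}
```

The point of the sketch is the shape of the data: experts judge concrete pairs rather than assigning absolute labels, and the consensus carries a version so it can be updated as new evidence surfaces.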
English
1 reply · 0 reposts · 0 likes · 33 views
Venkatesh Saligrama@Saligrama_Lab·
We thought the harder problem was building a better AI fact-checker to verify complex research reports. We were wrong. The real bottleneck is the benchmark itself. Ground truth has to become a process, not a static dataset.
1 reply · 2 reposts · 3 likes · 168 views
Venkatesh Saligrama@Saligrama_Lab·
If you will be at #EACL2026, come chat with us about why audio judges don’t actually listen — and how we can fix it 🎧⚡
🗓️ Wed. Mar 25 · 11:30–13:00
📍 Poster Hall — Session 2 (Oral/Posters A)
📄 Paper & code: github.com/arjunchandra2/…
1 reply · 0 reposts · 0 likes · 153 views
Venkatesh Saligrama@Saligrama_Lab·
“Do audio judges actually listen to audio?” 🎧 We find the answer is… no. Meet TRACE: a new judge that pairs lightweight audio tools with an LLM to make genuine audio-aware judgments — at ~3× lower cost ⚡ Excited to share our work at #EACL2026 Findings (Rabat🇲🇦, March 25th)!
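The thread doesn't detail TRACE's tools, but the general pattern it names (extract cheap, objective audio measurements, then hand them to an LLM judge as text) can be sketched with the standard library alone. Everything below is a hypothetical illustration, not TRACE's implementation:

```python
import io, math, struct, wave

def make_tone(freq=440.0, secs=0.5, rate=16000, amp=0.3):
    """Synthesize a mono 16-bit sine-wave WAV in memory (stand-in for a real clip)."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1); w.setsampwidth(2); w.setframerate(rate)
        frames = b"".join(
            struct.pack("<h", int(amp * 32767 * math.sin(2 * math.pi * freq * t / rate)))
            for t in range(int(secs * rate)))
        w.writeframes(frames)
    buf.seek(0)
    return buf

def audio_tool_report(buf):
    """'Lightweight audio tools': measure duration and RMS loudness from raw samples."""
    with wave.open(buf, "rb") as w:
        rate, n = w.getframerate(), w.getnframes()
        samples = struct.unpack("<%dh" % n, w.readframes(n))
    rms = math.sqrt(sum(s * s for s in samples) / n) / 32767
    return {"duration_s": round(n / rate, 3), "rms_loudness": round(rms, 3)}

report = audio_tool_report(make_tone())
# The judge LLM receives structured evidence instead of guessing from a transcript
prompt = f"Audio evidence: {report}. Which response better matches the reference?"
```

The measurements are cheap and deterministic, which is what lets a text-only judge make genuinely audio-grounded decisions at low cost.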
1 reply · 1 repost · 0 likes · 270 views
François Chollet@fchollet·
To perfectly understand a phenomenon is to perfectly compress it, to have a model of it that cannot be made any simpler. If a DL model requires millions of parameters to model something that can be described by a differential equation of three terms, it has not really understood it; it has merely cached the data.
160 replies · 153 reposts · 1.6K likes · 122.8K views
Venkatesh Saligrama@Saligrama_Lab·
"Transformers can only transform data." They have no explicit concept of parameters, low-dimensional structures, or relationships. So, how exactly do they solve parametric problems so efficiently? 🤔 We decoded them to find out. Our new #NeurIPS2025 paper tells the story. 🧵👇
0 replies · 2 reposts · 10 likes · 1.3K views
Venkatesh Saligrama@Saligrama_Lab·
If you are at #NeurIPS2025, come chat with us about how transformers are secretly numerical linear algebra engines!
🗓️ Thursday, Dec 4, 2025 · 4:30 PM – 7:30 PM PST
📍 Poster #3510 (Exhibit Hall C, D, E)
📄 Paper: arxiv.org/abs/2509.19702
0 replies · 0 reposts · 0 likes · 99 views
Venkatesh Saligrama@Saligrama_Lab·
We trained linear transformers across different regimes: centralized, distributed, and compute-limited. The surprise? In every single regime, the transformer implicitly discovered the exact same unifying, parameter-free numerical algorithm (which we call EAGLE). 🦅
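The thread doesn't spell out EAGLE itself, but a well-known example of the phenomenon (a linear-attention layer encoding a parameter-free numerical update) is one gradient-descent step on an in-context least-squares problem. A minimal NumPy sketch under that assumption, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 32
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))   # in-context examples
y = X @ w_true                # their labels

# A linear-attention layer can express the update delta_w = eta * X^T (y - X w):
# one gradient-descent step on the in-context least-squares loss, starting from w = 0.
eta = 1.0 / np.linalg.norm(X, ord=2) ** 2   # safe step: 1 / largest eigenvalue of X^T X
w0 = np.zeros(d)
w1 = w0 + eta * X.T @ (y - X @ w0)

loss0 = np.mean((y - X @ w0) ** 2)
loss1 = np.mean((y - X @ w1) ** 2)
```

The update itself has no learned parameters; it is a fixed numerical rule applied to whatever data sits in the context, which is the sense in which such a discovered algorithm can be called parameter-free.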
1 reply · 0 reposts · 1 like · 104 views
Venkatesh Saligrama@Saligrama_Lab·
It is not just pattern matching; it’s automated algorithm discovery.
0 replies · 0 reposts · 0 likes · 75 views