Nitay Calderon
@NitCal
PhD candidate @TechnionLive | Google Research | NLP
327 posts
Joined June 2022
770 Following · 437 Followers
Nitay Calderon retweeted
Omer Nahum (@omer6nahum)
Do LLMs have motivation? Motivation is a key lens for explaining human behavior. As LLM behavior becomes more human-like, a natural question arises: could it help us understand model behavior too? With @AsaelSklar @GoldsteinYAriel @roireichart 📄 Paper: arxiv.org/pdf/2603.14347 1/5
Nitay Calderon retweeted
Gal Kesten Pomeranz (@KestenGal)
Protein repeat detection is hard: repeated segments are often mutated and only approximately similar. Yet PLMs can still detect them well. But how? Check out our new preprint: "Induction Meets Biology: Mechanisms of Repeat Detection in Protein Language Models"
Nitay Calderon retweeted
Zorik Gekhman (@zorikgekhman)
New paper 🚨 We know that reasoning helps when step-by-step solutions are natural, for example in math, code, and multi-hop factual QA. But why should it help with factual recall, where no complex reasoning steps are needed? 1/🧵
Nitay Calderon (@NitCal)
@AndrewLampinen Oh wow, definitely a very relevant and interesting work. I'll make sure to add a discussion of it in the next version.
Andrew Lampinen (@AndrewLampinen)
@NitCal Cool! You might like our work on latent learning: x.com/AndrewLampinen… — we similarly suggest that models fail to effectively use information when it needs to be recalled in a sufficiently different format, inspired by some LM findings but also other phenomena, e.g. in RL
Quoted tweet — Andrew Lampinen (@AndrewLampinen):

we argue that parametric learning methods are too tied to the explicit training task, and fail to effectively encode latent information relevant to possible future tasks, and we suggest that this explains a wide range of findings, from navigation to the reversal curse. 3/

Nitay Calderon retweeted
AK (@_akhaliq)
Google presents Empty Shelves or Lost Keys? Recall Is the Bottleneck for Parametric Factuality paper: huggingface.co/papers/2602.14…
Nitay Calderon (@NitCal)
So, what can help recall? Thinking helps LLMs access the knowledge they already store. It recovers 40–65% of encoded-but-not-directly-known facts, with the largest gains for rare facts and reverse questions. This resembles the human "tip-of-the-tongue" effect.

Bottom line: Given that encoding in frontier LLMs is nearing saturation while substantial headroom remains for recall, future improvements are likely to come not from scaling but from better utilization of existing knowledge.

Paper: arxiv.org/abs/2602.14080
Nitay Calderon (@NitCal)
[6/7] The reversal curse is when LLMs know "A is B" but can't answer "What is B?" If bidirectional knowledge were missing, reverse questions would be hard in any format. But in multiple-choice, reverse questions are as easy as direct ones. The problem, again, is recall.