Jonathan Berant
@JonathanBerant
NLP at Tel-Aviv University and Google
Joined June 2011
276 Following · 3.2K Followers
1K posts
Jonathan Berant @JonathanBerant:
@benediktstroebl The prompt specifies the budget and we truncate once the budget is exhausted (but we rarely need to; the model mostly follows the instruction).
Jonathan Berant @JonathanBerant:
Are AI models effective collaborators, or mere assistants awaiting your next command? (arxiv.org/abs/2602.24188) To find out, we make AI collaborate with itself, in private information games: tasks that require sharing private information, like this chess board ordering task.
[image]
Jonathan Berant reposted
Max Chen @maximillianc_:
📣Excited to finally share our latest work on quantifiably adapting model behavior based on unique preferences 📣 We teach language models to adjust their clarification behavior using scalar coefficients and find they can generalize to unseen coefficients at inference time!
Quoting Jonathan Berant @JonathanBerant:
Newish work (arXived in Dec.): Prompts can be ambiguous, but handling ambiguity is context/user dependent. Sometimes the right thing is to ask a clarifying question, sometimes to give multiple answers, and sometimes to just guess. Can we train models to change their strategy per context?
Jonathan Berant @JonathanBerant:
Models also generalize to coefficients that never occurred at training time!
[image]
Jonathan Berant @JonathanBerant:
Newish work (arXived in Dec.): Prompts can be ambiguous, but handling ambiguity is context/user dependent. Sometimes the right thing is to ask a clarifying question, sometimes to give multiple answers, and sometimes to just guess. Can we train models to change their strategy per context?
[image]
Jonathan Berant @JonathanBerant:
AI systems are also overconfident, terminating dialogues long before exhausting their turn budget, even after explicit reminders.
[two images]
Jonathan Berant reposted
Ben Bogin @ben_bogin:
My team @GoogleAI is looking for a 2026 research intern in Mountain View! I will be hiring for a project aimed at improving tool-using and search agents via RL training and data generation. To apply: google.com/about/careers/… + feel free to ping me!
Jonathan Berant @JonathanBerant:
@egrefen @NidarMMV2 @agarwl_ They probably should have presented this as a tutorial on existing work with extensions; it's very easy to interpret this as the presentation of a new method.
Edward Grefenstette @egrefen:
@NidarMMV2 @agarwl_ In my defence... you need to click to expand references on their blog, and (on mobile at least) this messes with search.
Jonathan Berant reposted
Samuel AMOUYAL @AmouyalSamuel:
I had a lot of fun working on this with @JonathanBerant @aya_meltzer You can find our paper here: arxiv.org/abs/2510.07141 And by the way, the answer (at least based on the sentence) is yes, you can ignore head injuries. But it's terrible advice.
Jonathan Berant reposted
Samuel AMOUYAL @AmouyalSamuel:
We have more interesting insights in our paper. We believe this is a really exciting direction for comparing humans and LLMs. Extending our framework to more structures and more LLMs will certainly lead to additional insights!
Jonathan Berant reposted
Samuel AMOUYAL @AmouyalSamuel:
We report 3 additional findings:
1. LLMs' similarity to humans on GP structures is higher
2. The similarity of the structures' difficulty ordering to humans increases with model size
3. The LLM performs better on the easy baseline than on the structures if it's not too strong or too weak
[image]