Mathis Pink
@MathisPink

👀🧠(x) | x ∈ {👀🧠,🤖} PhD student @mpi_sws_ trying to trick rocks into thinking and remembering.

Saarbrücken, Germany · Joined August 2020
84 posts · 2.7K Following · 367 Followers

Pinned Tweet
Mathis Pink@MathisPink·
1/n🤖🧠 New paper alert!📢 In "Assessing Episodic Memory in LLMs with Sequence Order Recall Tasks" (arxiv.org/abs/2410.08133) we introduce SORT as the first method to evaluate episodic memory in large language models. Read on to find out what we discovered!🧵
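The task setup in SORT can be sketched roughly as follows: take two segments from a text the model has previously seen and ask which appeared first. A minimal toy version (hypothetical prompt wording and toy data; the actual benchmark uses book excerpts and controlled conditions):

```python
import random

def make_sort_item(text, seg_len=5, rng=random):
    """Build one toy Sequence Order Recall Task (SORT) item:
    two segments from `text`, shown in random order, plus a
    label saying which one occurred first in the original text."""
    words = text.split()
    segments = [" ".join(words[i:i + seg_len])
                for i in range(0, len(words) - seg_len + 1, seg_len)]
    i, j = sorted(rng.sample(range(len(segments)), 2))
    first, second = segments[i], segments[j]
    if rng.random() < 0.5:
        return first, second, "A"   # shown order matches true order
    return second, first, "B"       # shown order is reversed

text = ("it was the best of times it was the worst of times "
        "it was the age of wisdom it was the age of foolishness")
seg_a, seg_b, answer = make_sort_item(text)
prompt = (f"You have read the text before. Which segment appeared "
          f"earlier?\nA: {seg_a}\nB: {seg_b}\nAnswer (A or B): ")
```

A model with episodic memory of the text should recover the true order; the stored label lets you score its accuracy.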
Mathis Pink retweeted
Abhilasha Ravichander@lasha_nlp·
📢 I'm recruiting PhD students at MPI!! Topics include: 1⃣ LLM factuality, reliable info synthesis and reasoning, personalization + applications in real-world inc. education, science 2⃣ Data-centric interpretability 3⃣Creativity in AI, esp scientific applications 🧵1/2
Mathis Pink retweeted
Tim Kietzmann@TimKietzmann·
Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions. Work with @lu_zejin @martisamuser and Radoslaw Cichy arxiv.org/abs/2507.03168
Mathis Pink retweeted
Omer Moussa@ohmoussa2·
🚨Excited to share our latest work published at Interspeech 2025: “Brain-tuned Speech Models Better Reflect Speech Processing Stages in the Brain”! 🧠🎧 arxiv.org/abs/2506.03832 W/ @mtoneva1 We fine-tuned speech models directly with brain fMRI data, making them more brain-like.🧵
Mathis Pink retweeted
Sebastian Michelmann@s_michelmann·
Excited to share our new preprint bit.ly/402EYEb with @mtoneva1, @ptoncompmemlab, and @manojneuro, in which we ask if GPT-3 (a large language model) can segment narratives into meaningful events similarly to humans. We use an unconventional approach: ⬇️
Mathis Pink retweeted
Karan Goel@krandiash·
A few interesting challenges in extending context windows. A model with a big prompt =/= "infinite context" in my mind. 10M tokens of context is not exactly on the path to infinite context. Instead, it requires a streaming model that has
- an efficient state with fast incremental state updates
- strong compression & abstraction of multimodal history
- effective reasoning over state to generate future action
(The model needs to be able to run forever.)

I dislike the phrase "infinite context" as a matter of taste, since it suggests an extension of the retrieval-based paradigm where you reason over context windows in order to answer a query. That's not quite what's going on. Instead, memory is about building useful abstraction over histories. It's nice to think of it as a state with a choice of method for storing/updating information -- this is more in line with a model that always runs and streams in information. Transformers / SSMs are just different ways to choose how to model the state and update it. There are more abstractions left to uncover.

There are also new data regimes and training algorithms required to produce these kinds of long-term memory models. Interaction is a core part of the way these models operate. We don't seem to have the right evals that measure useful proxies in line with this notion of memory. Ideally, it's not just about being able to remember facts -- the model should get better at retaining skills (compound knowledge) and at learning new skills over time (learning to learn faster). This is a very interesting direction that I'm personally super excited to be working on.
Chubby♨️@kimmonismus

Sam Altman: 10m context window in months, infinite context within several years

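The streaming-model requirements above (an efficient state with fast incremental updates that compresses history) describe the generic recurrent pattern behind RNNs/SSMs. A minimal sketch with a made-up toy update rule, not any real architecture:

```python
import numpy as np

class StreamingMemory:
    """Toy fixed-size state updated incrementally per token:
    O(dim) work per step, so the model can 'run forever' over a
    stream instead of re-reading a growing context window."""

    def __init__(self, dim=16, decay=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.embed = rng.standard_normal((256, dim)) * 0.1  # byte embeddings
        self.state = np.zeros(dim)
        self.decay = decay  # how much old history is retained

    def update(self, byte_val):
        # Exponential-moving-average update: compresses the whole
        # history into one vector (lossy abstraction, not retrieval).
        self.state = self.decay * self.state + (1 - self.decay) * self.embed[byte_val]

    def read(self):
        # Downstream reasoning/generation would condition on this state.
        return self.state.copy()

mem = StreamingMemory()
for b in b"a model that always runs and streams in information":
    mem.update(b)
summary = mem.read()
```

The point of the sketch is the interface: per-step cost and state size stay constant no matter how long the stream runs, which is the property a context window lacks.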
Mathis Pink retweeted
François Chollet@fchollet·
Anyway, glad to see that the whole "let's just pretrain a bigger LLM" paradigm is dead. Model size is stagnating or even decreasing, while researchers are now looking at the right problems -- either test-time training or neurosymbolic approaches like test-time search, program synthesis, and symbolic tool use.
Mathis Pink@MathisPink·
@goodside LLMs currently lack episodic memory (EM), which is a form of long-term memory that is different from semantic memory in that it is about single specific encountered sequences. In a recent paper we created a benchmark (SORT) that evaluates temporal order memory as a proxy for EM!
Riley Goodside@goodside·
An LLM knows every work of Shakespeare but can’t say which it read first. In this material sense a model hasn’t read at all. To read is to think. Only at inference is there space for serendipitous inspiration, which is why LLMs have so little of it to show for all they’ve seen.
Mathis Pink@MathisPink·
x.com/MathisPink/sta… We think this is because LLMs do not have parametric episodic memory (as opposed to semantic memory)! We recently created SORT, a new benchmark task that tests temporal order memory in LLMs
Quoting Riley Goodside@goodside (tweet above)
Mathis Pink retweeted
François Fleuret@francoisfleuret·
Consider the prompt X="Describe a beautiful house." We can consider two processes to generate the answer Y: (A) sample P(Y | X) or, (B) sample an image Z with a conditional image density model P(Z | X) and then sample P(Y | Z). 1/3
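Filling in the implied step (my notation, not from the thread): process (B) draws Y from the marginal over the latent image Z, which matches process (A) only when Y depends on X solely through Z:

```latex
P_B(Y \mid X) = \int P(Y \mid Z)\, P(Z \mid X)\, \mathrm{d}Z ,
\qquad
P_B(Y \mid X) = P(Y \mid X) \ \text{ if }\ P(Y \mid Z, X) = P(Y \mid Z).
```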
Mathis Pink retweeted
Mathis Pink@MathisPink·
4/n💡We find that fine-tuning or RAG do not support episodic memory capabilities well (yet). In-context presentation supports some episodic memory capabilities but at high costs and insufficient length-generalization, making it a bad candidate for episodic memory!
Mathis Pink retweeted
Omer Moussa@ohmoussa2·
We are so excited to share the first work that demonstrates consistent downstream improvements for language tasks after fine-tuning with brain data!! Improving semantic understanding in speech language models via brain-tuning arxiv.org/abs/2410.09230 W/ @dklakow, @mtoneva1