Katherine Hermann

385 posts

@khermann_

Research Scientist @GoogleDeepMind | Past: PhD from @Stanford

Joined August 2016
1.3K Following · 1.7K Followers
Katherine Hermann retweeted
Andrew Lampinen @AndrewLampinen
Pleased to share that our paper "Representation Biases: Variance is Not Always a Good Proxy for Importance" is now out as a Theory/New Concepts paper in eNeuro! Thread:
Andrew Lampinen tweet media
2 replies · 16 reposts · 120 likes · 10.6K views
Katherine Hermann retweeted
Eghbal Hosseini @eghbal_hosseini
How do diverse context structures reshape representations in LLMs? In our new work, we explore this via representational straightening. We found LLMs are like a Swiss Army knife: they select different computational mechanisms reflected in different representational structures. 1/
Eghbal Hosseini tweet media
1 reply · 19 reposts · 85 likes · 12.2K views
Katherine Hermann retweeted
Goodfire @GoodfireAI
LLMs memorize a lot of training data, but memorization is poorly understood. Where does it live inside models? How is it stored? How much is it involved in different tasks? @jack_merullo_ & @srihita_raju's new paper examines all of these questions using loss curvature! (1/7)
Goodfire tweet media
11 replies · 134 reposts · 817 likes · 191.9K views
Katherine Hermann retweeted
Ilia Sucholutsky @sucholutsky
🧵🎉 Our mega-paper is finally published in TMLR! We're "Getting Aligned on Representational Alignment" - the degree to which internal representations of different (biological & artificial) information processing systems agree. 🧠🤖🔬🔍 #CognitiveScience #Neuroscience #AI
Ilia Sucholutsky tweet media
5 replies · 37 reposts · 150 likes · 33.9K views
Katherine Hermann retweeted
Michael C. Mozer @mc_mozer
[1/4] As you read words in this text, your brain adjusts fixation durations to facilitate comprehension. Inspired by human reading behavior, we propose a supervised objective that trains an LLM to dynamically determine the number of compute steps for each input token.
Michael C. Mozer tweet media
4 replies · 10 reposts · 27 likes · 3.1K views
Dr Julia Shaw @drjuliashaw
I am going on tour with my new book! Green Crime: Inside the minds of the people destroying the planet and how to stop them. Get your tickets here! 💚 linktr.ee/drjuliashaw
Dr Julia Shaw tweet media
85 replies · 8 reposts · 29 likes · 9.3K views
Katherine Hermann retweeted
Andrew Lampinen @AndrewLampinen
Many representational analyses (implicitly) prioritize signals by the amount of variance they explain in the representations. However, in arxiv.org/abs/2507.22216 we discuss results from our prior work that challenge this assumption; variance != computational importance.
1 reply · 3 reposts · 31 likes · 1.8K views
Katherine Hermann retweeted
Andrew Lampinen @AndrewLampinen
In neuroscience, we often try to understand systems by analyzing their representations — using tools like regression or RSA. But are these analyses biased towards discovering a subset of what a system represents? If you're interested, check out our new commentary! Thread:
Andrew Lampinen tweet media
5 replies · 61 reposts · 365 likes · 33.8K views
Katherine Hermann retweeted
Aran Nayebi @aran_nayebi
🚀 New Open-Source Release! PyTorchTNN 🚀 A PyTorch package for building biologically-plausible temporal neural networks (TNNs)—unrolling neural network computation layer-by-layer through time, inspired by cortical processing. PyTorchTNN naturally integrates into the Encoder-Attender-Decoder (EAD) architecture (Chung*, Shen* et al., 2025), which flexibly combines diverse neural networks, motivated by the fact that no single model (Transformer, SSM, RNN) dominates all sequence learning tasks. 🧵👇
GIF
1 reply · 40 reposts · 182 likes · 21.1K views
Katherine Hermann retweeted
Aran Nayebi @aran_nayebi
Our first NeuroAgent! 🐟🧠 Excited to share new work led by the talented @rdkeller, showing how autonomous behavior and whole-brain dynamics emerge naturally from intrinsic curiosity grounded in world models and memory. Some highlights:
- Developed a novel intrinsic drive (3M-Progress) that better matches the reliable autonomy of animals
- First task-optimized model of neural-glial computation
- Surprisingly, no linear regression needed: a simple 1-to-1 mapping was enough to pass the NeuroAI Turing Test on whole-brain zebrafish data (~130,000 recorded units), provided you have the right intrinsic drive of course!
Check it out! 👇
Reece Keller @rdkeller

1/ I'm excited to share recent results from my first collaboration with the amazing @aran_nayebi and @Leokoz8! We show how autonomous behavior and whole-brain dynamics emerge in embodied agents with intrinsic motivation driven by world models.

4 replies · 34 reposts · 168 likes · 15.9K views
Eliza Kosoy @ElizaKosoy
PhDone! 🎓🥳 I let a bunch of kids loose on AI agents for my PhD and guess what? The kids won. 👧🤖 My thesis: “Youth in the Loop” → kids’ exploration + causal reasoning show how to build safer, more aligned, trustworthy AI The future of alignment + T&S? It’s 16 and has opinions
7 replies · 0 reposts · 44 likes · 1.9K views
Katherine Hermann retweeted
Andrew Lampinen @AndrewLampinen
How do language models generalize from information they learn in-context vs. via finetuning? We show that in-context learning can generalize more flexibly, illustrating key differences in the inductive biases of these modes of learning — and ways to improve finetuning. Thread: 1/
Andrew Lampinen tweet media
8 replies · 148 reposts · 763 likes · 102.4K views
Katherine Hermann retweeted
Kelsey Allen @KelseyRAllen
Humans can tell the difference between a realistic generated video and an unrealistic one – can models? Excited to share TRAJAN: the world’s first point TRAJectory AutoeNcoder for evaluating motion realism in generated and corrupted videos. 🌐 trajan-paper.github.io 🧵
GIF
3 replies · 12 reposts · 63 likes · 17.7K views
Katherine Hermann @khermann_
@DynamicWebPaige For CA: Mount Langley (Southern Sierras) and Mount Tallac (Desolation Wilderness) are both really nice
0 replies · 0 reposts · 1 like · 129 views
👩‍💻 Paige Bailey @DynamicWebPaige
👋 Do any of y'all have long hike recommendations (10+ miles) in the US? Bonus points if they're in California, Texas, or Washington state
👩‍💻 Paige Bailey tweet media
San Francisco, CA 🇺🇸
23 replies · 1 repost · 31 likes · 5.9K views
Katherine Hermann retweeted
Thomas Fel @thomas_fel_
Train your vision SAE on Monday, then again on Tuesday, and you'll find only about 30% of the learned concepts match. ⚓ We propose Archetypal SAE which anchors concepts in the real data’s convex hull, delivering stable and consistent dictionaries. arxiv.org/pdf/2502.12892…
Thomas Fel tweet media
6 replies · 78 reposts · 354 likes · 43.3K views
Katherine Hermann retweeted
Aran Nayebi @aran_nayebi
Had a lot of fun speaking with @avileddie about the practical challenges of scaling (especially in Embodied AI), NeuroAI, what to expect in the future, and advice for students getting into the field. Check it out here! youtube.com/watch?v=ZRo-fL…
YouTube video
0 replies · 9 reposts · 33 likes · 4.2K views
Katherine Hermann retweeted
Aran Nayebi @aran_nayebi
1/ 🧵👇 What should count as a good model of intelligence? AI is advancing rapidly, but how do we know if it captures intelligence in a scientifically meaningful way? We propose the *NeuroAI Turing Test*—a benchmark that evaluates models based on both behavior and internal representations. 👉The key principle: given a metric, models should be *at least as good as brains are to each other*:
Aran Nayebi tweet media
10 replies · 46 reposts · 158 likes · 21.1K views
Katherine Hermann retweeted
Aran Nayebi @aran_nayebi
Are there fundamental barriers to AI alignment once we develop generally-capable AI agents? We mathematically prove the answer is *yes*, and outline key properties for a "safe yet capable" agent. 🧵👇 Paper: arxiv.org/abs/2502.05934
Aran Nayebi tweet media
2 replies · 15 reposts · 54 likes · 14.7K views