Sarah Schwettmann
@cogconfluence
1.7K posts

Co-founder and Chief Scientist, @TransluceAI, prev @MIT
dessert of the real · Joined October 2015
926 Following · 3.1K Followers
Sarah Schwettmann retweeted
Transluce @TransluceAI
Why does GPT-5.1 Codex score 6.5% worse than GPT-5 Codex on Terminal-Bench, with the same scaffold? 🧵 GPT-5.1 times out at ~2x the rate of GPT-5. Excluding timeouts, GPT-5.1 wins by 7.2%. We analyzed 256M+ tokens of traces and found this in under an hour. Here’s how 👇
2 replies · 15 reposts · 71 likes · 8.7K views
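The timeout effect described in the thread above is a selection effect that can flip a comparison: a model that times out often can lose on the headline score while winning among runs that actually finish. A minimal sketch with made-up counts (not the actual Terminal-Bench numbers):

```python
# Illustrative only: these counts are hypothetical, not the real Terminal-Bench data.

def rates(solved, timeouts, total):
    """Return (headline score, score among runs that did not time out)."""
    overall = solved / total
    excluding_timeouts = solved / (total - timeouts)
    return overall, excluding_timeouts

# Hypothetical model A: times out often, but strong when it finishes.
a = rates(solved=40, timeouts=30, total=100)   # (0.40, ~0.571)

# Hypothetical model B: rarely times out.
b = rates(solved=45, timeouts=10, total=100)   # (0.45, 0.50)

# B wins on the headline number; A wins once timed-out runs are excluded.
```

The same mechanism explains how GPT-5.1 can trail overall yet lead by 7.2% once its ~2x timeout rate is conditioned away.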
Sarah Schwettmann retweeted
Grace Luo @graceluo_
We trained diffusion models on a billion LLM activations, and we want you to use them! New preprint: Learning a Generative Meta-Model of LLM Activations Joint work with @feng_jiahai, @trevordarrell, @AlecRad, @JacobSteinhardt. More in thread 🧵
30 replies · 170 reposts · 1.3K likes · 188.7K views
Sarah Schwettmann retweeted
Jacob Steinhardt @JacobSteinhardt
Overall, I'm excited to see more people signing on to the bitter lesson, scaling-focused approach to understanding AI. This was the core technical thesis that led me and Sarah to found Transluce, and I hope others will join us in these efforts. x.com/TransluceAI/st…
Transluce @TransluceAI

Announcing Transluce, a nonprofit research lab building open source, scalable technology for understanding AI systems and steering them in the public interest. Read a letter from the co-founders Jacob Steinhardt and Sarah Schwettmann: transluce.org/introducing-tr…

0 replies · 3 reposts · 21 likes · 3.1K views
Sarah Schwettmann @cogconfluence
All @TransluceAI work that I described in my NeurIPS mech interp workshop keynote is now out! ✨ Today we released Predictive Concept Decoders, led by @vvhuang_ Paper: arxiv.org/pdf/2512.15712 Blog: transluce.org/pcd And here's @damichoi95's work on scalably extracting latent representations of users from model internals: transluce.org/user-modeling
Justin Angel @JustinAngel

We can train models on maximizing how well they explain LLMs to humans 🤯@cogconfluence paraphrased. Mechanistic Interpretability Workshop #NeurIPS2025.

1 reply · 17 reposts · 88 likes · 9.7K views
Sarah Schwettmann retweeted
Transluce @TransluceAI
Transluce is developing end-to-end interpretability approaches that directly train models to make predictions about AI behavior. Today we introduce Predictive Concept Decoders (PCD), a new architecture that embodies this approach.
2 replies · 33 reposts · 164 likes · 34.4K views
Sarah Schwettmann retweeted
Dami Choi @damichoi95
Have you ever had ChatGPT give you personalized results out of nowhere that surprised you? Here, the model jumped straight to making recommendations in SF, even though I only asked for Korean food!
1 reply · 18 reposts · 47 likes · 6.7K views
Sarah Schwettmann retweeted
Transluce @TransluceAI
Independent AI assessment is more important than ever. At #NeurIPS2025, Transluce will help launch the AI Evaluator Forum, a new coalition of leading independent AI research organizations working in the public interest. Come learn more on Thurs 12/4 👇 luma.com/i6ekd5s2
4 replies · 13 reposts · 68 likes · 13.1K views
Sarah Schwettmann @cogconfluence
My favorite part of @damichoi95's new paper (alongside 2 new datasets!) is the scaled-up investigator pipeline that directly decodes open-ended user representations from model internals. End-to-end interp is increasingly promising, and I'm excited for more work in this direction.
Transluce @TransluceAI

What do AI assistants think about you, and how does this shape their answers? Because assistants are trained to optimize human feedback, how they model users drives issues like sycophancy, reward hacking, and bias. We provide data + methods to extract & steer these user models.

0 replies · 6 reposts · 24 likes · 4.4K views
Sarah Schwettmann @cogconfluence
Excited to share some of our progress in these directions during our lunch talks! You can also find me speaking about: *scalable oversight + indep evaluation @ the FAR.AI alignment workshop 12/1-2 *end-to-end interp pipelines @ the mech interp workshop 12/7
0 replies · 0 reposts · 5 likes · 242 views
Sarah Schwettmann @cogconfluence
We've been thinking a lot about: *what are the right measurements to make, and subroutines to automate? *how can we equip the ecosystem to not only make those measurements, but make sense of them? and build collective understanding of AI in a rapidly changing, complex landscape
1 reply · 0 reposts · 2 likes · 264 views
Sarah Schwettmann retweeted
Transluce @TransluceAI
Is your LM secretly an SAE? Most circuit-finding interpretability methods use learned features rather than raw activations, based on the belief that neurons do not cleanly decompose computation. In our new work, we show MLP neurons actually do support sparse, faithful circuits!
5 replies · 71 reposts · 315 likes · 82.8K views
Sarah Schwettmann retweeted
Transluce @TransluceAI
Transluce is partnering with @SWEbench to make their agent trajectories publicly available on Docent! You can now view transcripts via links on the SWE-bench leaderboard.
3 replies · 14 reposts · 43 likes · 7.5K views
Sarah Schwettmann retweeted
Cristóbal Valenzuela @c_valenzuelab
You have to care
21 replies · 108 reposts · 647 likes · 135.7K views
Sarah Schwettmann retweeted
Transluce @TransluceAI
Can LMs learn to faithfully describe their internal features and mechanisms? In our new paper led by Research Fellow @belindazli, we find that they can—and that models explain themselves better than other models do.
5 replies · 57 reposts · 276 likes · 67.2K views
Sarah Schwettmann retweeted
Transluce @TransluceAI
We’re open-sourcing Docent under an Apache 2.0 license. Check out our public codebase to self-host Docent, peek under the hood, or open issues & pull requests! The hosted version remains the easiest way to get started with one click and use Docent with zero maintenance overhead.
Transluce @TransluceAI

Docent, our tool for analyzing complex AI behaviors, is now in public alpha! It helps scalably answer questions about agent behavior, like “is my model reward hacking” or “where does it violate instructions.” Today, anyone can get started with just a few lines of code!

1 reply · 13 reposts · 80 likes · 11K views
Sarah Schwettmann retweeted
Sayash Kapoor @sayashk
Agent benchmarks lose *most* of their resolution because we throw out the logs and only look at accuracy. I’m very excited that HAL is incorporating @TransluceAI’s Docent to analyze agent logs in depth. Peter’s thread is a simple example of the type of analysis this enables, but we have already found much more striking examples. We’re validating these results now, and excited to share more soon.
Peter Kirgis @PKirgis

OpenAI claims hallucinations persist because evaluations reward guessing and that GPT-5 is better calibrated. Do results from HAL support this conclusion? On AssistantBench, a general web search benchmark, GPT-5 has higher precision and lower guess rates than o3!

3 replies · 12 reposts · 69 likes · 15.7K views
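The precision-vs-guessing tradeoff in the quoted tweet can be made concrete with a small sketch. The metric definitions and numbers below are illustrative assumptions, not HAL's exact methodology: a model that abstains when unsure commits to fewer answers but can be right more often on the ones it does give.

```python
# Illustrative metric definitions (assumed, not HAL's exact ones):
#   precision  = correct answers / questions attempted
#   guess rate = questions attempted / total questions (abstentions excluded)

def metrics(correct, attempted, total):
    """Return (precision over attempted questions, rate of committing to an answer)."""
    precision = correct / attempted
    guess_rate = attempted / total
    return precision, guess_rate

# Hypothetical models on a 100-question benchmark:
cautious = metrics(correct=40, attempted=50, total=100)   # abstains half the time
eager = metrics(correct=55, attempted=100, total=100)     # always guesses

# cautious: precision 0.80, guess rate 0.50
# eager:    precision 0.55, guess rate 1.00
```

Under these assumed definitions, a better-calibrated model shows up as higher precision at a lower guess rate, which is the pattern the tweet reports for GPT-5 versus o3 on AssistantBench.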