
Our new work: A frozen language model can describe its own internal features more accurately than the system that labeled them.
Language models compute things they don't talk about. They solve problems using internal steps they never show you. We built a lens that lets the model look at its own computations and tell you what it sees, in plain language, more accurately than the humans who labeled those computations in the first place.
We trained a tiny adapter, d+1 parameters, on top of a frozen model. It takes activation vectors and maps them into the model’s own embedding space so the model can describe what those vectors mean in natural language. The computation stays the same. The interface becomes legible.
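A minimal sketch of what a d+1 parameter adapter can look like (one learned scalar scale plus a d-dimensional bias; the names and the scale-plus-bias form are our illustration, not necessarily the paper's exact parameterization):

```python
import numpy as np

class SelfDescriptionAdapter:
    """Illustrative d+1 parameter adapter: one scalar scale plus a
    d-dimensional bias, mapping a frozen model's activation vector
    into its input-embedding space. The frozen model is untouched;
    only these d+1 numbers are trained."""

    def __init__(self, d: int):
        self.scale = 1.0          # 1 learned scalar
        self.bias = np.zeros(d)   # d learned bias entries

    @property
    def n_params(self) -> int:
        return 1 + self.bias.size  # d + 1 total

    def __call__(self, h: np.ndarray) -> np.ndarray:
        # Map the activation into embedding space; the result gets
        # spliced into a prompt as a soft token for the frozen model
        # to describe in natural language.
        return self.scale * h + self.bias
```

In use, the mapped vector would stand in for a placeholder token's embedding in a prompt, and the frozen model generates the description.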
The adapter outperforms the labels it was trained on: at 70B scale, 71% generation-scoring accuracy vs. 63% for the supervision itself.
The model captures structure in the relationship between vectors and semantics that noisy one-off labels miss.
Most of the effect comes from a single learned bias vector. One d-dimensional vector accounts for ~85% of the total improvement.
It acts as a prior over valid explanations that puts the model in a regime where internal structure can be expressed coherently, and the activation vector selects the specific meaning.
This generalizes across model families, layers, and from monosemantic training data to polysemantic inference.
On multi-hop reasoning tasks, the adapter extracts bridge entities the model never verbalizes.
“The author of The Republic was born in the city of” produces “Athens” with no mention of Plato. The residual stream still contains “Plato,” and the adapter reads it out at ~91% detection.
The hidden reasoning step is there. You can read it.
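A toy illustration of the readout idea (random vectors stand in for real residual-stream states and concept directions; nothing here is the actual method): even when the decoded output is "Athens," the hidden state can sit closest to the "Plato" direction, and a simple readout recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
# Toy concept directions; in a real model these would come from
# its own representations, not random draws.
concepts = {name: rng.standard_normal(d)
            for name in ["Plato", "Athens", "Homer"]}

# Toy residual-stream state: the unverbalized bridge entity "Plato"
# plus noise, even though the generated token is "Athens".
h = concepts["Plato"] + 0.3 * rng.standard_normal(d)

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

scores = {name: cosine(v, h) for name, v in concepts.items()}
bridge = max(scores, key=scores.get)  # "Plato", never emitted in text
```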
As models scale, self-interpretation keeps improving even after capability saturates. The gap between what the model knows and what it can report about its own internal state keeps closing.
This connects to our endogenous steering resistance (ESR) work (x.com/juddrosenblatt…). When you steer a model with an unrelated latent, it can recognize the deviation mid-generation and restart with a better answer. “Wait, I made a mistake.” We identified specific latents that activate during off-topic drift and causally drive this correction. The model monitors its own trajectory and intervenes on it.
Meanwhile, @uzaymacar et al. at Anthropic just showed the complementary piece (x.com/uzaymacar/stat…). They inject concept vectors into the residual stream and ask whether the model detects an injected thought.
The model detects the perturbation and often identifies the concept, with 0% false positives across prompts.
They trace a circuit. Over 100k “evidence carrier” features in early post-injection layers collectively tile the perturbation space, each detecting deviations along a preferred direction.
No small subset is sufficient. The coverage is distributed and redundant. These carriers suppress downstream “gate” features (~200 of them) that implement a default No response.
The gates show an inverted-V activation pattern: maximally active when unsteered, suppressed at both positive and negative extremes. A genuine anomaly detector that fires on “normal” and quiets when anything unusual is happening in any direction.
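The shape is easy to picture with a toy gate function (an assumed stand-in for the shape, not the actual learned feature): peak activation at zero steering, falling off symmetrically in both directions.

```python
import numpy as np

def gate_activation(steer):
    """Toy inverted-V gate: maximal when the stream is unsteered
    (steer == 0), suppressed as the perturbation grows in either
    direction. A stand-in for the pattern, not the real feature."""
    return np.maximum(0.0, 1.0 - np.abs(steer))

# Active on "normal", quiet under any large perturbation, either sign.
acts = gate_activation(np.array([-2.0, -1.0, 0.0, 1.0, 2.0]))
```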
The capability emerges specifically from contrastive preference training (DPO). SFT alone doesn't produce it. The contrastive structure forces the model to represent the difference between what it produces and what it should produce.
That comparison builds the self-model. Every data domain is individually sufficient and none is necessary: the introspective circuit is a general consequence of contrastive learning, not an artifact of any specific training category.
The capability is also massively underelicited. Ablating the refusal direction boosts detection from 10.8% to 63.8%. The circuitry exists and post-training actively suppresses it. This parallels our ESR finding: the self-monitoring is already there, and lightweight interventions surface it.
Their bias vector result mirrors ours. A single trained bias on MLP output: +75% detection, +55% introspection on held-out concepts, 0% false positive increase. Two independent labs, different methods, different models, same architectural insight from one learned vector. The bias vector is effective but narrow. General introspection requires broader training recipes.
There's a consistent picture across these three papers. Models represent meaning internally, notice when those representations get perturbed, and correct course. The capability was already there, and what was missing was just a way to read it out. Generation scoring gives you that.
A model’s claim about an internal feature can be checked against behavior, and those checks become training signal.
For alignment, this means self-description becomes something you can optimize directly.
The pieces are already there: internal representations and circuits, with a simple interface that connects them.
SelfIE Adapters: arxiv.org/abs/2602.10352
ESR: arxiv.org/abs/2602.06941
Anthropic work: arxiv.org/abs/2603.21396
SelfIE Code: github.com/agencyenterpri…
