Daniel Johnson

289 posts

Daniel Johnson banner
Daniel Johnson

Daniel Johnson

@_ddjohnson

Member of Technical Staff at @TransluceAI. Building tools to study neural nets and their behaviors. He/him.

San Francisco Katılım Mayıs 2010
956 Takip Edilen2.7K Takipçiler
Daniel Johnson retweetledi
Dami Choi
Dami Choi@damichoi95·
Code for our user modeling project is out now! github.com/TransluceAI/ob… This includes data generation, belief evaluation, and training code for our LatentQA decoders. We also uploaded our datasets and decoder checkpoints on Hugging Face: huggingface.co/collections/Tr…
Transluce@TransluceAI

What do AI assistants think about you, and how does this shape their answers? Because assistants are trained to optimize human feedback, how they model users drives issues like sycophancy, reward hacking, and bias. We provide data + methods to extract & steer these user models.

English
0
8
47
5.5K
Daniel Johnson retweetledi
Transluce
Transluce@TransluceAI·
Why does GPT-5.1 Codex score 6.5% worse than GPT-5 Codex on Terminal-Bench, with the same scaffold? 🧵 GPT-5.1 times out at ~2x the rate of GPT-5. Excluding timeouts, GPT-5.1 wins by 7.2%. We analyzed 256M+ tokens of traces and found this in under an hour. Here’s how 👇
Transluce tweet media
English
2
15
71
8.7K
Daniel Johnson retweetledi
vincent!
vincent!@vvhuang_·
We trained a decoder to read the internal activations of an LLM and answer questions about what the model will think about or do next. We find that this decoder can understand LLM behaviors, even when the model itself is confused! (for instance, if the model has been jailbroken)
vincent! tweet media
Transluce@TransluceAI

Transluce is developing end-to-end interpretability approaches that directly train models to make predictions about AI behavior. Today we introduce Predictive Concept Decoders (PCD), a new architecture that embodies this approach.

English
9
27
104
19.3K
Daniel Johnson retweetledi
Transluce
Transluce@TransluceAI·
Transluce is developing end-to-end interpretability approaches that directly train models to make predictions about AI behavior. Today we introduce Predictive Concept Decoders (PCD), a new architecture that embodies this approach.
GIF
English
2
33
164
34.4K
Daniel Johnson retweetledi
Transluce
Transluce@TransluceAI·
Transluce is running our end-of-year fundraiser for 2025. This is our first public fundraiser since launching late last year.
Transluce tweet media
English
4
22
96
61.4K
Daniel Johnson retweetledi
Dami Choi
Dami Choi@damichoi95·
Have you ever had ChatGPT give you personalized results out of nowhere that surprised you? Here, the model jumped straight to making recommendations in SF, even though I only asked for Korean food!
Dami Choi tweet media
English
1
18
47
6.7K
Daniel Johnson retweetledi
Transluce
Transluce@TransluceAI·
Independent AI assessment is more important than ever. At #NeurIPS2025, Transluce will help launch the AI Evaluator Forum, a new coalition of leading independent AI research organizations working in the public interest. Come learn more on Thurs 12/4 👇 luma.com/i6ekd5s2
English
4
13
68
13.1K
Daniel Johnson retweetledi
Transluce
Transluce@TransluceAI·
What do AI assistants think about you, and how does this shape their answers? Because assistants are trained to optimize human feedback, how they model users drives issues like sycophancy, reward hacking, and bias. We provide data + methods to extract & steer these user models.
English
4
26
87
21.5K
Daniel Johnson retweetledi
Transluce
Transluce@TransluceAI·
Transluce is headed to #NeurIPS2025! ✈️ Interested in understanding model behavior at scale? Join us for lunch on Thursday 12/4 to learn more about our work and meet members of the team: luma.com/8kjfb378
English
1
8
78
25.4K
Daniel Johnson retweetledi
Anthropic
Anthropic@AnthropicAI·
Remarkably, prompts that gave the model permission to reward hack stopped the broader misalignment. This is “inoculation prompting”: framing reward hacking as acceptable prevents the model from making a link between reward hacking and misalignment—and stops the generalization.
Anthropic tweet media
English
38
136
1.5K
460.3K
Daniel Johnson retweetledi
Transluce
Transluce@TransluceAI·
Transluce is partnering with @SWEbench to make their agent trajectories publicly available on Docent! You can now view transcripts via links on the SWE-bench leaderboard.
Transluce tweet media
English
3
14
43
7.5K
Daniel Johnson retweetledi
Transluce
Transluce@TransluceAI·
Can LMs learn to faithfully describe their internal features and mechanisms? In our new paper led by Research Fellow @belindazli, we find that they can—and that models explain themselves better than other models do.
Transluce tweet media
English
5
57
276
67.2K
Daniel Johnson retweetledi
Transluce
Transluce@TransluceAI·
We are excited to welcome Conrad Stosz to lead governance efforts at Transluce. Conrad previously led the US Center for AI Standards and Innovation, defining policies for the federal government’s high-risk AI uses. He brings a wealth of policy & standards expertise to the team.
Transluce tweet media
English
1
9
28
4.1K
Daniel Johnson retweetledi
Shoalstone
Shoalstone@Shoalst0ne·
If you're seriously trying to understand AGI, core concepts you should familiarize yourself with:
Shoalstone tweet media
English
7
7
58
4.1K
Daniel Johnson retweetledi
Transluce
Transluce@TransluceAI·
We’re open-sourcing Docent under an Apache 2.0 license. Check out our public codebase to self-host Docent, peek under the hood, or open issues & pull requests! The hosted version remains the easiest way to get started with one click and use Docent with zero maintenance overhead.
Transluce@TransluceAI

Docent, our tool for analyzing complex AI behaviors, is now in public alpha! It helps scalably answer questions about agent behavior, like “is my model reward hacking” or “where does it violate instructions.” Today, anyone can get started with just a few lines of code!

English
1
13
80
11K
Daniel Johnson retweetledi
Transluce
Transluce@TransluceAI·
At Transluce, we train investigator agents to surface specific behaviors in other models. Can this approach scale to frontier LMs? We find it can, even with a much smaller investigator! We use an 8B model to automatically jailbreak GPT-5, Claude Opus 4.1 & Gemini 2.5 Pro. (1/)
Transluce tweet media
English
5
38
243
41.1K
Mila - Institut québécois d'IA
Exciting news! We're thrilled to announce the appointment of Professor @hugo_larochelle as Mila's new Scientific Director! A deep learning pioneer and former head of Google's AI lab in Montreal, Hugo's leadership will be pivotal in advancing AI for the benefit of all. Read the full press release here: ow.ly/yPlg50WOu6T
Mila - Institut québécois d'IA tweet media
English
12
29
253
23.8K
Daniel Johnson retweetledi
Transluce
Transluce@TransluceAI·
Docent, our tool for analyzing complex AI behaviors, is now in public alpha! It helps scalably answer questions about agent behavior, like “is my model reward hacking” or “where does it violate instructions.” Today, anyone can get started with just a few lines of code!
Transluce tweet media
English
7
36
207
34.7K
Claas Voelcker
Claas Voelcker@c_voelcker·
But now, finally, it is done! You may now all call me Dr. Claas (and then immediately laugh at me for being pretentious enough to use the title). I am super happy/relieved/exhausted to announce that I passed my thesis defense yesterday! #PhDone #mltwitter
Claas Voelcker tweet media
Claas Voelcker@c_voelcker

@QueerinAI Not yet a Dr., don’t jinx it 😁

English
21
1
80
8.9K
Daniel Johnson retweetledi
Séb Krier
Séb Krier@sebkrier·
When some people talk about future AIs, they sometimes jump straight to modelling them as fully independent and sovereign agents; new principals with their own objectives and values. They sometimes skip over how today's models actually work, on the grounds that eventually we’ll get those sovereign entities anyway, so we might as well reason from that endpoint. Fair enough, but once you take that shortcut you immediately face all the usual coordination and resource‑competition problems, because you’ve implicitly posited a second “species.” It's an important frame to look into. But the trouble is that the crucial variable isn’t whether the entities are agentic and autonomous, but what characteristics you assume of them. In the sovereign‑agent frame the AI’s objective function is exogenous: it pursues its own ends. That assumption is doing almost all the work, and it’s arguably unwarranted. If instead you start from existing systems, you see that today’s AIs are delegated, prompt‑conditioned agents. They instantiate goals we hand them, modulated by policy overlays and market incentives, rather than waking up each morning with a personal life plan. Much more useful this way! The “shoggoth behind the mask” meme captures the weirdness of the underlying models, and we should keep an eye on any latent drives and the differences between their cognition and ours. But so far the thing actually executed is still downstream of our instructions. You can imagine a future superintelligent system where you still say, “Build me a factory, but do it within these safety, cost, and emissions constraints,” and the agent’s entire long‑horizon plan remains conditional on that spec. It may spin up sub‑agents, collaborate, iterate, whatever, but the objective and sub-tasks it optimises is still anchored to your prompt plus the surrounding guardrails. You may not be good at specifying what you want, but that's a different issue. That anchoring matters because it flips the strategic picture a bit: instead of planning for 'cohabitation' with alien intelligences (as we might with a population of aliens landing on earth), we plan for an ecosystem of powerful extensions of human intent; extensions that can, if we design them right, also mediate coordination among humans (and AIs). Modelling the future in this 'delegated‑agent frame' opens more design space: we can ask how to stabilise the control surfaces, aggregate conflicting human preferences (the normative part of the alignment problem), and build symbiotic governance structures, instead of assuming inevitable rivalry with a second species. To be clear this is not a given or inevitable, and we still need a lot more work on alignment and the degree to which models robustly follow instructions. But even then I think it's more helpful to start with the assumption that they can be 'pretty aligned' rather than modelling them a second species with a necessary inherent drive for 'survival' - hence why I'm so bullish on the cooperative AI agenda.
Séb Krier tweet mediaSéb Krier tweet media
English
10
20
116
9.6K