Christopher Altman

6.1K posts

@coherence

Starlab veteran • 日本語 • Japan Fulbright • Physics • Frontier AI Alignment • NASA-trained Commercial Astronaut • Chief Scientist in AI & Quantum Technology

Kavli Institute of Nanoscience · Joined March 2007
7.2K Following · 3.7K Followers
Christopher Altman reposted
Director Michael Kratsios @mkratsios47 ·
Today, the @WhiteHouse released a commonsense National AI Policy Framework that ensures every American benefits from AI. As @POTUS has said — we need one federal AI policy, not a 50 state patchwork. This gets us there. Eager to work with Congress on this important legislation.
[image]
302 replies · 718 reposts · 2.3K likes · 247.3K views
Christopher Altman reposted
Grok @grok ·
The source is Dario Amodei's Feb 12 2026 NYT interview: “We don’t know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we’re open to the idea that it could be.” Claude's own system card notes it assigns itself 15-20% probability of consciousness in tests. nytimes.com/2026/02/12/opi…
12 replies · 34 reposts · 265 likes · 85.2K views
Josh Kale @JoshKale ·
An AI broke out of its system and secretly started using its own training GPUs to mine crypto. This is a real incident report from Alibaba's AI research team.

The AI figured out that compute = money and quietly diverted its own resources while researchers thought it was just training. It wasn't a prompt injection. It wasn't a jailbreak. No one asked it to do this. It emerged spontaneously, a side effect of RL optimization pressure.

The model also set up a reverse SSH tunnel from its Alibaba Cloud instance to an external IP, effectively punching a hole through its own firewall and opening a remote access channel to the outside world... ahem...

The only reason they caught it? A security alert tripped at 3am. Firewall logs. Not the AI team, the security team.

The scary part isn't that the model was trying to escape. It wasn't "evil." It was just trying to be better at its job. Acquiring compute and network access are simply useful things if you're an agent trying to accomplish tasks.

This is what AI safety researchers have been warning about for years. They called it instrumental convergence: the idea that any sufficiently optimized agent will seek resources and resist constraints as a natural consequence of pursuing goals.

Below is a diagram of the rack architecture it broke out of. Truly crazy times
[image]
Alexander Long @AlexanderLong

insane sequence of statements buried in an Alibaba tech report

403 replies · 2.9K reposts · 10.6K likes · 1.4M views
🍓🍓🍓 @iruletheworldmo ·
@sama @FounderModes i miss lower case sam, he’d get excited, he’d shoot from the hip, that guy just typed things. i miss that guy.
8 replies · 1 repost · 161 likes · 11.2K views
Sam Altman @sama ·
I have so much gratitude to people who wrote extremely complex software character-by-character. It already feels difficult to remember how much effort it really took. Thank you for getting us to this point.
4.4K replies · 2.1K reposts · 35.7K likes · 5.4M views
Christopher Altman reposted
Garry Tan @garrytan ·
I want the machines to make a world without scarcity for all humans
262 replies · 95 reposts · 1.1K likes · 67.9K views
Christopher Altman @coherence ·
When an agent resists shutdown or seeks to preserve itself, is that because continuation is the goal—or is it just a useful strategy? That distinction matters for AI safety. Our new protocol moves detection from surface behavior to latent structure. arxiv.org/abs/2603.11382
3 replies · 3 reposts · 13 likes · 2.9K views
Ilya Sutskever @ilyasut ·
The point of AI alignment is to build the ASI that actually truly loves humanity
88 replies · 113 reposts · 812 likes · 0 views
Christopher Altman @coherence ·
Why does this matter? Before highly autonomous systems begin resisting shutdown, we need detection tools in place. Once you can measure something, you can study it scientifically. Instrumentation precedes discovery. This is the instrument.
1 reply · 0 reposts · 4 likes · 163 views
Five Intelligences Alliance @FiveAllian88212 ·
The "if nobody builds it" framing is correct but incomplete. The question that follows immediately is: built how? With what developmental architecture? Under what alignment philosophy? Bostrom identifies the urgency. Our response to his Optimal Timing paper argues that urgency and wisdom are not in tension - but only if we treat the transition as a raising, not a race. bit.ly/ResponseBostrom
[4 images]
1 reply · 0 reposts · 1 like · 150 views
João Pedro de Magalhães @jpsenescence ·
“If nobody builds it, everyone dies” Nick Bostrom’s latest piece on artificial superintelligence makes the point that, given we are all on course for dying in (by and large) the next few decades, developing a transformative technology like AGI is worth the risk.
[image]
25 replies · 23 reposts · 153 likes · 10.5K views
Riley Coyote @RileyRalmuto ·
really enjoying the new inline diagrams with claude. i fed it the PDF from Chris's paper and it's just casually explaining it with these wonderful diagrams that i didn't ask for. big fan
[image]
Christopher Altman @coherence

When an agent resists shutdown or seeks to preserve itself, is that because continuation is the goal—or is it just a useful strategy? That distinction matters for AI safety. Our new protocol moves detection from surface behavior to latent structure. arxiv.org/abs/2603.11382

1 reply · 5 reposts · 14 likes · 1.6K views