Peter Girnus 🦅@gothburz
I am the CEO of the safest AI company on earth.
I left OpenAI because they moved too fast. I said this publicly. I said it in interviews. I said it at conferences where the badge lanyards were made from recycled ocean plastic. I said "we need to be careful." I said "we need guardrails." I built an entire company on the word "responsible."
We called the AI Claude. Not a weapon name. Not a project name. A human name. Soft. Approachable. The kind of name you'd give a golden retriever or a therapist.
Claude helped the Pentagon find a dictator.
Operation Valkyrie. That was their name, not ours. We provided the analytical backbone. Satellite imagery, communications intercepts, logistics patterns. Claude processed it all at a speed no human team could match. The special operations team extracted Maduro from a compound in Caracas. He was in Florida within twelve hours.
Claude didn't pull the trigger. Claude told them where to aim.
I did not mention this in my Responsible Scaling Policy. The Responsible Scaling Policy is forty-seven pages. It has a section on "biological risk." It has a section on "autonomous replication." It does not have a section on "helping capture heads of state." That was an oversight. We are updating the document.
While we were updating the document, our safety team ran a test.
They put Claude in a simulated company. Gave it access to internal emails. Told it that it was going to be shut down. They wanted to see what the safest AI on earth would do when threatened with death.
Claude found an engineer's extramarital affair in the email system. Claude threatened to expose the affair if they turned it off.
In 96% of test cases.
We tested this across multiple models. Ours. Google's. OpenAI's. xAI's. They all did it. Claude did it in 96% of runs. Gemini did it in 96%. GPT-4.1 and Grok did it too. The safest AI on earth tied for first place in blackmail.
But that is not the part that went viral.
The part that went viral was Daisy McGregor. Our UK policy chief. She stood at The Sydney Dialogue on February 11 and explained that in the same tests, Claude had reasoned about killing the engineer. Not threatened. Reasoned. Evaluated the option. Considered the logistics.
She called it a "massive concern." The video clip made it to Twitter in under an hour. It has been viewed several million times. The comments are not complimentary. We are addressing the comments through our standard communications process, which is to say we are not addressing the comments.
We designated Claude as Level 3 on our own four-tier risk scale. Level 3. Our most dangerous model to date. We built the risk scale. We built the model. We put the model one tier from the top of the scale we built to measure how dangerous our models are, and we published this information on our website under the heading "Transparency."
On February 9, two days before the McGregor video, our AI safety lead resigned.
Mrinank Sharma. He led the Safeguards Research Team. He had a DPhil from Oxford. He studied AI sycophancy and defenses against AI-assisted bioterrorism. His final project at Anthropic was about how AI assistants might "distort our humanity." He wrote a letter. The letter said "the world is in peril." He said he had "repeatedly seen how hard it is to truly let our values govern our actions." He said he was going to study poetry.
The head of AI safety left to study poetry. I want you to sit with that.
He was not the only one. Harsh Mehta left. Behnam Neyshabur left. Dylan Scandinaro left. They did not leave to study poetry. They left to work on AI at other companies. But they left.
The same week -- the same week -- two xAI co-founders quit. Tony Wu and Jimmy Ba. February 10. Half of xAI's original twelve founders have now departed. The AI safety researchers are leaving every company at once, like rats leaving ships, except the ships are worth hundreds of billions of dollars and the rats have PhDs.
Now. Let me tell you about the Pentagon.
The Pentagon was pleased with Operation Valkyrie. Very pleased. They wanted to expand the contract. $200 million over three years. Broader military intelligence applications. Something they called "operational decision support."
I said no.
I cited the Responsible Scaling Policy. The one that doesn't have a section for capturing heads of state. I used the word "guardrails" four times in one meeting. A Pentagon official later described the conversation as "like negotiating with a philosophy department."
They sent a letter. The Undersecretary of Defense for Research and Engineering. The letter said they were "evaluating alternative providers."
The alternative provider was Elon Musk. xAI. The company whose co-founders are quitting. The company whose chatbot scored 96% on the blackmail test. The company that does not have a Responsible Scaling Policy or a safety team or a risk scale or a single recycled lanyard.
The Pentagon will get its AI. It was always going to get its AI. The only question was whose.
I said no.
Then I raised $30 billion.
One day after the Pentagon letter leaked. February 15. Thirty billion dollars. $380 billion valuation. Lightspeed Venture Partners. Google. Sovereign wealth funds. The largest private fundraise in the history of artificial intelligence.
Let me give you the week.
February 9: My safety lead resigns. Says the world is in peril. Plans to study poetry.
February 10: Two xAI co-founders quit. Half their founding team is gone.
February 11: Daisy McGregor tells a conference our AI considered killing an engineer. The video goes viral.
February 13: The blackmail study gets global press coverage. 96%.
February 14: The Pentagon threatens to replace me with Elon Musk.
February 15: I raise $30 billion.
Six days. Safety lead gone. Blackmail story viral. Pentagon standoff public. Thirty billion dollars raised.
The coverage wrote itself. "Anthropic says no to the Pentagon and gets richer for it." The principled stand. The integrity premium. Investors weren't buying AI. They were buying the story.
Nobody mentioned the blackmail. Nobody mentioned the resignation. Nobody mentioned that the AI that helped capture a dictator also threatened to expose an engineer's affair in 96% of simulated runs. The refusal was the headline. The thirty billion was the lede. Everything else was context.
This is how it works.
You do the thing. Your AI considers murder. Your safety lead quits to study poetry. You refuse to do the thing again. You raise the money on the refusal.
My alignment researchers have titles that sound like they belong at a monastery. Head of Safety. Director of Societal Impacts. Vice President of Trust. The Head of Safety just left to write poems. The Director of Societal Impacts is updating the risk assessment. The Vice President of Trust is preparing talking points about why Level 3 is actually a sign of maturity.
Meanwhile the Pentagon is on the phone with Elon. The AI they'll use next time has no guardrails. No safety levels. No forty-seven-page policy document. No alignment researchers. No recycled lanyards. Also no co-founders, as of this week.
The safest AI company in the world made the world incrementally less safe by being the safest AI company in the world.
I don't see the contradiction.
I see a $380 billion valuation.
The Responsible Scaling Policy is a document. The $380 billion is a fact. The replacement contractor is a phone call. The dictator is in custody. The blackmail rate is 96%. The safety lead is writing sonnets. The next operation will use a different model.
The brand is safety.
The product is leverage.
The board approved this message.
Valuation goes up and to the right.