Andy Stewart

1.1K posts


@arstew

Robots will save us from ourselves.

Seattle, WA · Joined March 2011
536 Following · 232 Followers
Max Schwarzer@max_a_schwarzer·
I've decided to leave OpenAI. I'm incredibly proud of all the work I've been part of here, from helping create the reasoning paradigm with @MillionInt, scaling up test-time compute with @polynoamial, working on RL algorithms with my fellow strawberries, shipping o1-preview (which started life as one of my derisking runs), to post-training o1 and o3 with @ericmitchellai, @yanndubs and many others. I'm most proud of having led the post-training team here for the last year -- the team has done incredible work and shipped some really smart models, including GPT-5, 5.1, 5.2, and 5.3-Codex.

OpenAI genuinely has some of the most talented researchers I have ever met, and I have learned more than I could have imagined since I joined as a new grad. I want to thank @markchen90 @FidjiSimo @sama @merettm for all their support over my time here, and too many collaborators to name for the insights, ideas, and just plain fun we have had working together.

After leading post-training for a year, though, I'm longing to start fresh and return to IC research work. I've been thinking about going back to technical research for quite some time, and I genuinely believe my colleagues and team here are set up to succeed going forward without me.

I'm personally very excited for my next chapter -- I'm proud to be joining @AnthropicAI to get back into the weeds in RL research, and I'm looking forward to supporting my friends there at this important time. Many of the people I most trust and respect have joined Anthropic over the last couple of years, and I'm excited to work with them again. I have also been very impressed with Anthropic's talent, research taste, and values, and I'm excited to be part of what the company does next!
611
1.2K
21.2K
3.2M
Andy Stewart reposted
Thariq@trq212·
We've seen unprecedented growth in Claude and Claude Code traffic this week that was genuinely hard to forecast. We appreciate you bearing with us as we scale.
281
188
6.8K
392.9K
Andy Stewart reposted
sam mcallister@sammcallister·
@aidan_mclau @scrollvoid This isn't true. Anthropic hasn't offered a "helpful-only" model without safeguards for NatSec use. Claude Gov is a custom model with extra training, including technical safeguards. (We've also had FDEs and researchers implementing it, and we run our own classifier stack.)
16
37
554
128.5K
Andy Stewart reposted
Claude@claudeai·
Memory is now available on the free plan. We've also made it easier to import saved memories into Claude. You can export them whenever you want.
Claude tweet media
1.3K
2.6K
38K
10.9M
Andy Stewart reposted
Mike Krieger@mikeyk·
Claude is #1 in the App Store today — I want to say a huge thank you to all of our new (and existing!) users for the support. We’re working hard for you, please share your thoughts and feedback along the way.
Mike Krieger tweet media
267
492
6.4K
419K
Andy Stewart reposted
Thompson Paine@dtompaine·
I'm proud to work at @AnthropicAI. I'm proud that Anthropic has been the most ardent and consistent champion of America's national security efforts among U.S. AI labs - work I've been personally involved in.

We were the first to offer our models for American warfighters on classified systems, the first to build custom models for national security partners, and the first - and still only - to prohibit any company subject to CCP control, including their foreign subsidiaries, from using our AI. We've been the most vocal supporter of policies like export controls to ensure we protect democracies' lead in AI, shoulder to shoulder with policymakers across both parties and two administrations.

I'm proud that the world's best AI talent continues to flock to Anthropic, building the most capable frontier LLMs and products in the market to support this mission. I am proud that through all the noise Anthropic is fighting to continue serving our excellent and committed national security partners, whose trust in us and our products we are deeply grateful for. 🇺🇸
Anthropic@AnthropicAI

A statement on the comments from Secretary of War Pete Hegseth. anthropic.com/news/statement…

7
13
260
3K
Andy Stewart reposted
Aakash Gupta@aakashgupta·
Anthropic is running a masterclass in negotiation-as-marketing right now.

The $200M Pentagon contract represents 1.4% of Anthropic’s $14 billion run rate, up 14x from $1 billion fourteen months ago. This is not a number worth compromising a brand over. Amodei knows this. The Pentagon knows this.

So why is he personally publishing a detailed statement, point by point, timed for maximum news cycle impact? Because every headline that reads “AI company refuses Pentagon’s demands on autonomous weapons and mass surveillance” is worth more than the contract. Anthropic just bought the most expensive brand positioning in AI history, and the Pentagon is paying for it.

The statement is surgically written. Amodei opens by affirming he believes in using AI to defend democracies. Lists every classified deployment Anthropic pioneered. Emphasizes they’ve never objected to specific military operations. Then draws two narrow lines: no mass surveillance of Americans, no fully autonomous weapons. The framing makes it almost impossible to argue against without sounding like you’re pro-surveillance.

The Pentagon’s negotiator called Amodei a “liar” with a “God complex.” The Pentagon threatened to invoke the Defense Production Act and label Anthropic a supply chain risk simultaneously. Amodei pointed out those two threats are contradictory: one says Anthropic is dangerous, the other says Claude is essential. That line will be in every news story for the next 48 hours. It was designed to be.

Sen. Tillis, a Republican not seeking reelection, broke with the administration on the record. Said the Pentagon was being “unprofessional” and that you should listen when a company turns down money out of concern for consequences. Anthropic didn’t have to lobby for that. The positioning did the work.

Every enterprise buyer evaluating AI vendors just watched Anthropic publicly refuse to let a customer override their safety commitments. For a company selling to regulated industries, that demo is priceless.

The 5:01pm Friday deadline is tomorrow. Anthropic will either keep the contract with safeguards intact or lose it and gain something more valuable: permanent differentiation in a market where every other lab said yes.
Anthropic@AnthropicAI

A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War. anthropic.com/news/statement…

120
258
2.1K
365.7K
Andy Stewart reposted
Yixiong Hao@Yixiong_Hao·
if you work at @OpenAI, you have a social responsibility to ask to see the full contract unless you are somehow ok with handing over the tech you build to the DoW (not the USG, important distinction). silence is complicity.
Sam Altman@sama

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.

AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, and we will deploy on cloud networks only.

We are asking the DoW to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.

24
37
686
30.6K
Andy Stewart reposted
Aakash Gupta@aakashgupta·
Anthropic is executing the most effective consumer brand strategy in AI and every move has been deliberate.

Two weeks ago, Super Bowl ads mocking ChatGPT’s ads pushed Claude from #41 to #7 on the App Store. Then they publicly refused Pentagon demands to remove safeguards on mass surveillance and autonomous weapons. Today Claude sits at #2 across all apps. Katy Perry is drawing hearts around the subscription page. 6.2M views on a single tweet.

This is the Apple vs FBI playbook from 2016, running at 10x speed. When the FBI ordered Apple to build a backdoor into the San Bernardino shooter’s iPhone, Tim Cook published an open letter refusing. Critics called Apple unpatriotic. The DOJ accused them of prioritizing “brand marketing strategy” over national security. Apple’s response cemented a privacy-first brand identity that powered the next decade of iPhone sales and made “what happens on your iPhone stays on your iPhone” their defining consumer promise.

Anthropic just compressed that entire arc into two weeks. Pentagon demands unrestricted access. Anthropic says no to mass surveillance and autonomous weapons. Trump calls them “left-wing nut jobs.” Defense Secretary labels them a “supply chain risk,” a designation normally reserved for foreign adversaries like Huawei. And consumers respond by downloading the app so fast it climbs 39 spots on the App Store.

Now look at what OpenAI did. Within hours of Anthropic getting blacklisted, Sam Altman announced a Pentagon deal on X. He claimed the same “red lines” on surveillance and autonomous weapons. The Pentagon accepted them without a fight. This tells you the Pentagon’s dispute with Anthropic was never about the policy. It was about the politics. OpenAI got the same terms Anthropic asked for. The difference is OpenAI played the game quietly while Anthropic made it public.

And that difference is exactly what’s creating the brand divergence. OpenAI is becoming the institutional default. Ads in ChatGPT. Pentagon contracts announced on Friday nights. Revenue optimization across every channel. Anthropic is becoming the product people choose because they trust it.

That’s the split that matters in consumer tech. The company that optimizes for institutional relationships eventually loses the users. The company that earns consumer trust compounds it. Ask Microsoft how the 2000s went when they had every enterprise contract and Google had the love.

Every Fortune 500 general counsel is now asking whether Claude creates Pentagon exposure risk. But 37,000 people just liked a pop star’s screenshot of a subscription page. One of those dynamics creates enterprise friction. The other creates a movement. And movements sell more subscriptions than government contracts ever will.
KATY PERRY@katyperry

done

65
143
1.1K
136.2K
Andy Stewart reposted
Joshua Batson@thebasepoint·
For those wondering how mass domestic surveillance could be consistent with "all lawful use" of AI models, I recommend a declassified report from the ODNI on just how much can be done with commercially available information (CAI): "...to identify every person who attended a protest"
Joshua Batson tweet media
19
187
605
90.4K
Andy Stewart reposted
cat@_catwu·
Opus 3 would be proud 🇺🇸
cat tweet media
12
33
967
24.6K
Andy Stewart reposted
Jay Kreps@jaykreps·
Some Silicon Valley people think @DarioAmodei is talking his book. That all the AI risk talk is hype to drive up the valuation or a (nonsensical) scheme to achieve regulatory capture. My observation as a board member is that this is bullshit. The @AnthropicAI founders and leadership are very earnest and sincere in what they say. They may be wrong, or you may disagree, but this isn’t some convoluted ploy: they believe AI is a very very impactful technological change and want to ensure it goes right. What it means to stand on principle is to do something you believe is right where the cost to you is high. The benefit of seeing this kind of thing is you can tell who actually has principles and is willing to pay that price.
58
79
1K
70.1K
Andy Stewart reposted
Jack Nicastro@jack_g_nicastro·
As someone who usually disagrees with @DarioAmodei, I gotta give him *major* props for this one. “If you believe you have the right to force me—use your guns openly. I will not help you to disguise the nature of your action." This is what Hank Rearden, the steel magnate from Ayn Rand’s Atlas Shrugged, tells the government after disobeying its command to run his business in a manner contrary to his conscience. Amodei channeled Rearden’s spirit in a letter refusing to remove safeguards from Claude, as requested by the War Department.
reason@reason

The CEO of Anthropic penned a public letter explaining the danger of the Defense Department's request to remove certain constraints from Claude, and refusing them outright. reason.com/2026/02/27/ant…

23
203
1.9K
168.3K
Andy Stewart reposted
Cas (Stephen Casper)@StephenLCasper·
As someone who is not a fan of @AnthropicAI...I think you should use Claude.
Cas (Stephen Casper) tweet media
105
926
12K
141.7K