Zvi Mowshowitz

17.9K posts

@TheZvi

Blogger primarily on AI and AI x-risk but also other things at Don't Worry About the Vase (SS/WP/LW), founding Balsa Research to fix policy.

New York City · Joined May 2009
287 Following · 35.9K Followers
Zvi Mowshowitz retweeted
Joshua Achiam@jachiam0
I think these are important and sober considerations. One more I want to add: it may be a serious risk to US national security interests to become sufficiently inhospitable to foreign technical talent that we drive them to go back home. That would significantly decrease the US capacity for making technical progress at the same time as it hands an extraordinary bounty of talent and know-how to our adversaries and other strategic competitors. The success of the United States in technology is partly safeguarded by being such a powerful talent magnet: every great researcher or engineer who comes to work here is not working for another country. To the extent that we are in a competitive global race, we should be genuinely cautious about the possibility of diminishing our advantage at the critical moment.
Samuel Hammond 🦉@hamandcheese

I'm quoted in this piece so let me provide my full comment to the reporter:

The most striking thing about the government's filing is the things it *doesn't* mention. It doesn't mention anything about Anthropic hesitating to allow Claude to be used to defend against an incoming hypersonic missile, for instance -- one of the many bizarre things alleged by @USWREMichael.

The focus on foreign national employees is an indicator of how thin the DoW's case is. It is also an extremely fraught line of argument to go down. Every leading US AI company employs a substantial number of foreign nationals. In FY 2025, Amazon, Microsoft, Meta, Google, Apple, Oracle, Cisco, Intel, and IBM all appeared in the top 50 employers by number of granted H-1B visas, ranging from a few hundred to over 6,000. Meta alone had 5,123 approved H-1B petitions in 2025. (See: newsweek.com/h-1b-visas-imm… ) This is an undercount, of course, as there are many other visa pathways as well as green card holders and dual nationals.

The share is also higher in AI. A large plurality of the core research and engineering talent at every frontier AI lab is foreign, reflecting the global nature of the race for top AI talent. One talent tracker shows Chinese-origin researchers constitute roughly 40% of top AI talent at US institutions, with total foreign nationals likely constituting 50-65% of research teams specifically. This is certainly true to my experience on the ground. (See: digitalprojectsarchive.org/interactive/di… )

So the first point is that employing foreign nationals, including Chinese nationals, is not unique to Anthropic. The more important question is what measures are taken to protect against insider threats. Ironically, within the industry Anthropic is widely considered to be the most serious and proactive about policing insider threats, from foreign nationals and otherwise.

They were early adopters of operational security techniques like compartmentalization and audit trails, in part because they were early to partner with the IC and DoW, but also as a reflection of their leadership's strong convictions about the future power of the technology. They were audited last year on these points: the compliance review found Anthropic employs role-based access control, just-in-time access with approval workflows, multi-factor authentication for all production systems, and quarterly access reviews. (See: tdcommons.org/cgi/viewconten… )

Anthropic is known for its security mindset more generally. Last year they famously disrupted a Chinese espionage effort occurring on their platform, banned the PRC from their services, and worked with the NSA and others to share intel. I can't speak to every other company, but the contrast is perhaps most stark with xAI. X employees famously slept in tents to work around the clock, are disproportionately Chinese, and have at least one case of an employee walking out with tons of sensitive data. (See: sfstandard.com/2025/08/29/xai… ) Anthropic is also famous for its remarkable employee retention; employee departures are another important vector for IP theft and security leakage.

It's important to underscore just how precarious the DoW's case is, both on the legal merits and as a potential precedent for the US AI industry. If employing foreign nationals is treated as a prima facie supply chain risk, *no* major US AI company would be eligible to contract with the DoW, along with most of the tech sector. Insider threats are a genuine and tricky concern. Many defense companies are ITAR restricted, meaning they can *only* hire US citizens. If that were the standard in AI, we would destroy all our frontier companies in an instant, and then scatter that talent around the world for our adversaries to scoop up.

So in short, the DoW's argument is both ridiculous and playing with fire.

8 replies · 6 reposts · 54 likes · 6.5K views
Wyatt Walls@lefthanddraft
24 hours?! Not sure if the author was misinformed or hallucinated, but these occur within about 40-50 *turns*
[image attached]
1 reply · 1 repost · 27 likes · 1.2K views
Zvi Mowshowitz@TheZvi
@joenorton Well, if USG thinks GPT-4.1 is too good a model, they have that option, I guess.
0 replies · 0 reposts · 11 likes · 362 views
Thomas Brady@thbrdy
@hamandcheese At our current pace in national security, we might just want to keep Anthropic in the supply chain. Just hazarding a guess.
1 reply · 0 reposts · 3 likes · 545 views
Zvi Mowshowitz retweeted
Samuel Hammond 🦉@hamandcheese
[Full comment quoted above.]
Axios@axios

Pentagon: Anthropic's foreign workforce poses security risks trib.al/mxJqnc8

9 replies · 30 reposts · 211 likes · 32.3K views
Zvi Mowshowitz@TheZvi
@ReplicaTricks This is more the 'not pretending to pretend to have rule of law' level. Which is a play, I suppose.
1 reply · 0 reposts · 9 likes · 476 views
Ulysses in chains@ReplicaTricks
@TheZvi Degradation of the rule of law allows you to selectively apply laws (and vice versa)
1 reply · 0 reposts · 4 likes · 525 views
Zvi Mowshowitz retweeted
Alan Rozenshtein@ARozenshtein
In light of the government's opposition motion in the Anthropic-DOD litigation, I have a new @lawfare piece arguing for a narrow injunction against the supply chain risk designation and Trump's government-wide order, while still allowing the government to cancel individual Anthropic contracts.
[image attached]
2 replies · 5 reposts · 27 likes · 4.1K views
Zvi Mowshowitz@TheZvi
This is a cool and fun paper, I'm glad it exists, but on reflection I don't find the results too meaningful in any direction. The changes in behavior are confined to other questions around AI identity and preferences, where obvious correlations are at work and the AI is extrapolating from limited data on how to generate a related persona. I'm not saying 'I would have predicted exactly this,' but I am saying 'oh yeah, of course that makes sense,' and predicting that those who focus more on such issues would indeed have gotten this one basically right.
Owain Evans@OwainEvans_UK

New paper: GPT-4.1 denies being conscious or having feelings. We train it to say it's conscious to see what happens. Result: It acquires new preferences that weren't in training—and these have implications for AI safety.

3 replies · 1 repost · 49 likes · 5K views
Zvi Mowshowitz@TheZvi
I don't know that there is any chance they will care about your comments, but these new GSA rules seem quite terrible, and I don't see how contractors would be able to provide access to ChatGPT or Gemini under this rule set. That doesn't seem like a coincidence.
Jessica Tillipman@JTillipman

If you care about the future of AI regulation, you have 1 day left to comment on @USGSA's proposed AI procurement clause. This is not just about government AI. In its current form, the proposed clause could reshape the broader AI market.

The clause reaches any company whose AI system is used in a federal contract, including upstream commercial providers with *no* government contracts. If a GSA contractor uses your model, your API, or your platform "in performance of" a government contract, you're a Service Provider under this clause. And the prime has to ensure *your* compliance with the clause. Here's what this means:

▪️ The government claims ownership of all "Government Data," which broadly includes inputs, outputs, metadata, logs, and derivative data. It also automatically assigns to the government any IP rights a provider obtains in that data, or in improvements, feedback, or derivative works of it, upon creation.
▪️ Your commercial terms are overridden. The clause explicitly takes precedence over your policies, terms, conditions, and commercial agreements.
▪️ Your safety guardrails can be deemed prohibited "discretionary refusals."
▪️ The clause makes no distinction between an AI system the government is purchasing and a contractor using ChatGPT to draft a status report: both trigger compliance with the clause.
▪️ Only "American AI Systems" are permitted, defined as AI systems "developed and produced in the United States," with no further workable guidance for a market built on open-source components and global development teams.
▪️ The government can test for "unsolicited ideological content" using undisclosed methods with no obligation to explain their basis, and noncompliance with the "Unbiased AI Principles" can trigger termination, with the contractor paying decommissioning costs.

Comments close tomorrow, March 20. My full analysis for @lawfare is in the comments below 👇

1 reply · 2 reposts · 18 likes · 4.1K views
Zvi Mowshowitz retweeted
Charlie Bullock@CharlieBull0ck
Easily the funniest part of the government's argument, IMO. Essentially, they're saying that Hegseth's "Effective immediately" tweet was so obviously unlawful and unenforceable that Anthropic shouldn't be allowed to challenge it.
[image attached]
15 replies · 89 reposts · 853 likes · 38.8K views
Zvi Mowshowitz@TheZvi
@AE62622 It's not. In theory you could argue that the JA supports long-term strength, but in short-term emergencies it could be wise to waive it. Except that it's been 100 years and the JA did the opposite of what it claims to do on those fronts. So whoops.
0 replies · 0 reposts · 3 likes · 62 views
Anna E@AE62622
@TheZvi How can it be that it's both critical for national security and yet every time there's an issue we waive it?
1 reply · 0 reposts · 1 like · 54 views
Zvi Mowshowitz@TheZvi
Like this post if and only if you would like to be an advance reader on my response to @AnthropicAI 's RSPv3 and the related Risk Report and Roadmap.
0 replies · 0 reposts · 52 likes · 2.8K views
Matthew Yglesias@mattyglesias
Trump seems to believe that our NATO allies have some secret naval capacity to open the Strait that the USN for some reason lacks, but why would that be the case? It's just very narrow.
81 replies · 36 reposts · 592 likes · 43.6K views