Raphael Lee

1K posts

Raphael Lee
@raphomet

MTS @AnthropicAI

Joined December 2006
963 Following · 869 Followers
Raphael Lee retweeted
Samuel Hammond 🦉@hamandcheese·
I'm quoted in this piece so let me provide my full comment to the reporter:

The most striking thing about the government's filing is the things it *doesn't* mention. It doesn't mention anything about Anthropic hesitating to allow Claude to be used to defend against an incoming hypersonic missile, for instance -- one of the many bizarre things alleged by @USWREMichael.

The focus on foreign national employees is an indicator of how thin the DoW's case is. It is also an extremely fraught line of argument to go down. Every leading US AI company employs a substantial number of foreign nationals. In FY 2025, Amazon, Microsoft, Meta, Google, Apple, Oracle, Cisco, Intel, and IBM all appeared in the top 50 employers by number of granted H-1B visas, ranging from a few hundred to over 6,000. Meta alone had 5,123 approved H-1B petitions in 2025. (See: newsweek.com/h-1b-visas-imm… ) This is an undercount, of course, as there are many other visa pathways, as well as green card holders and dual nationals.

The share is even higher in AI. A large plurality of the core research and engineering talent at every frontier AI lab is foreign, reflecting the global nature of the race for top AI talent. One talent tracker shows Chinese-origin researchers constitute roughly 40% of top AI talent at US institutions, with total foreign nationals likely constituting 50-65% of research teams specifically. This is certainly true to my experience on the ground. (See: digitalprojectsarchive.org/interactive/di… )

So the first point is that employing foreign nationals, including Chinese nationals, is not unique to Anthropic. The more important question is what measures are taken to protect against insider threats. Ironically, within the industry Anthropic is widely considered the most serious and proactive about policing insider threats, from foreign nationals and otherwise.

They were early adopters of operational security techniques like compartmentalization and audit trails, in part because they were early to partner with the IC and DoW, but also as a reflection of their leadership's strong convictions about the future power of the technology. They were audited last year on these points: the compliance review found Anthropic employs role-based access control, just-in-time access with approval workflows, multi-factor authentication for all production systems, and quarterly access reviews. (See: tdcommons.org/cgi/viewconten… )

Anthropic is known for its security mindset more generally. Last year they famously disrupted a Chinese espionage effort occurring on their platform, banned the PRC from their services, and worked with the NSA and others to share intel.

I can't speak to every other company, but the contrast is perhaps most stark with xAI. X employees famously slept in tents to work around the clock, are disproportionately Chinese, and there has been at least one case of an employee walking out with tons of sensitive data. (See: sfstandard.com/2025/08/29/xai… ) Anthropic is also famous for its remarkable employee retention; turnover is another important vector for IP theft and security leakage.

It's important to underscore just how precarious the DoW's case is, both on the legal merits and as a potential precedent for the US AI industry. If employing foreign nationals is treated as a prima facie supply chain risk, *no* major US AI company would be eligible to contract with the DoW, along with most of the tech sector.

Insider threats are a genuine and tricky concern. Many defense companies are ITAR restricted, meaning they can *only* hire US citizens. If that were the standard in AI, we would destroy all our frontier companies in an instant, and then scatter that talent around the world for our adversaries to scoop up.

So in short, the DoW's argument is both ridiculous and playing with fire.
Axios@axios

Pentagon: Anthropic's foreign workforce poses security risks trib.al/mxJqnc8

Raphael Lee retweeted
Jason D. Clinton 🔸@JasonDClinton·
Open source powers critical infrastructure. Much of it is maintained by small teams. Today, Anthropic joined AWS, GitHub, Google, Google DeepMind, Microsoft, and OpenAI in committing $12.5M to Alpha-Omega and OpenSSF to fund resources maintainers need. linuxfoundation.org/press/linux-fo…
Raphael Lee retweeted
Mike Krieger@mikeyk·
More than a million people are now signing up for Claude every day. To everyone choosing to make @claudeai part of how they work and think: welcome.
Raphael Lee retweeted
Apoorv Agrawal@apoorv03·
Dario at MS TMT Conference today:

On defense / DoW: "We really believe in defending America." Anthropic has been working with the national security community for 2 years. "We are the most lean forward."

On AI acceleration: "We do not see hitting a wall. This year will have a radical acceleration that surprises everyone." Exponentials catch people off guard. "We are at the precipice of something incredible. We need to manage it the right way."

On where markets are wrong: "It's already big and it will get 1 million times bigger." The underestimation of exponential growth is the key thing people need to understand.

On revenue scale: Anthropic was at ~$100M run rate 2 years ago. Now at $19B run rate.

On culture — Dario says he spends 40% of his time on it: "Anyone who is CEO of a growing firm needs to realize they are chief culture officer. My job is to make sure everyone is on same page and believes in what we are doing. That's the most important thing." He does a vision quest with the whole company every couple weeks. "I want them to hear it directly from me. If I tell the CTO, who tells the VP Eng, who tells the manager — that's too long of a game of telephone." "Politics and infighting are a cancer to companies as they grow."

On talent retention vs Meta: "We lost 2 people to Meta. They lost several dozen. Normalized by size, they lost 10-20x more people vs us." Attributes this to unified culture generating "super linear returns — by working together vs working against each other."

On code as the breakout use case: Code has "exceeded our high expectations." Why? Devs adopt fast, code is verifiable, and gains compound — you build software to build software. "Didn't realize it would go so fast even at traditional enterprises." Frustration is around regulated industries where legal/compliance slow things down. "That's how fast everything could be going if not for non-AI barriers."

On Anthropic's own AI usage: Top internal use cases: 1) writing code, 2) the process around writing code (SWE), 3) managing servers and controlling clusters. "If we were paying ourselves for our usage, we'd be one of our largest customers."

On Claude Code: "You can supervise an army of 100 Claudes. It's closely analogous to a management skill." The people who are best at it keep the big picture in their head. Higher return to finding people who can handle more complex tasks.

On platform vs apps: "We are primarily a platform, but there are places where we have expertise to make something directly useful." Claude Code emerged as a tool they built for themselves — thousands of internal users before shipping it externally. "Code is a prelude for what we will see in everything else."

On societal implications: "Human history — lots of muddling through. We found ourselves in this comedy of errors and figured it out eventually. It's happening so fast that we need to do better than that this time." The market will deliver positive benefits — "I see that as priced in." What's not priced in: the choices we make around externalities. Jobs, national security, ensuring the benefits reach everyone.

On chips & compute: Anthropic uses multiple chip suppliers. "We find that actually using different chips is useful to us. Chips aren't just a speed number — we gain benefits from heterogeneity." Also standard business logic of having more than one supplier.
Raphael Lee retweeted
Max Schwarzer@max_a_schwarzer·
I've decided to leave OpenAI. I'm incredibly proud of all the work I've been part of here, from helping create the reasoning paradigm with @MillionInt, scaling up test-time compute with @polynoamial, working on RL algorithms with my fellow strawberries, shipping o1-preview (which started life as one of my derisking runs), to post-training o1 and o3 with @ericmitchellai, @yanndubs and many others. I'm most proud of having led the post-training team here for the last year -- the team has done incredible work and shipped some really smart models, including GPT-5, 5.1, 5.2, and 5.3-Codex.

OpenAI genuinely has some of the most talented researchers I have ever met, and I have learned more than I could have imagined since I joined as a new grad. I want to thank @markchen90 @FidjiSimo @sama @merettm for all their support over my time here, and too many collaborators to name for the insights, ideas, and just plain fun we have had working together.

After leading post-training for a year, though, I'm longing to start fresh and return to IC research work. I've been thinking about going back to technical research for quite some time, and I genuinely believe my colleagues and team here are set up to succeed going forward without me.

I'm personally very excited for my next chapter -- I'm proud to be joining @AnthropicAI to get back into the weeds in RL research, and I'm looking forward to supporting my friends there at this important time. Many of the people I most trust and respect have joined Anthropic over the last couple of years, and I'm excited to work with them again. I have also been very impressed with Anthropic's talent, research taste, and values, and I'm excited to be part of what the company does next!
Raphael Lee@raphomet·
correcting some disinformation / water-muddying being circulated
sam mcallister@sammcallister

@aidan_mclau @scrollvoid This isn't true. Anthropic hasn't offered a "helpful-only" model without safeguards for NatSec use. Claude Gov is a custom model with extra training, including technical safeguards. (We've also had FDEs and researchers implementing it, and we run our own classifier stack.)

Raphael Lee retweeted
dave kasten@David_Kasten·
Seems clear at this point from Axios reports that DoW wanted to use Claude models for mass analysis of domestic commercial data, possibly fusing them with government data. At least one use case is obvious.
Joshua Batson@thebasepoint

For those wondering how mass domestic surveillance could be consistent with "all lawful use" of AI models, I recommend a declassified report from the ODNI on just how much can be done with commercially available information (CAI): "...to identify every person who attended a protest"

Raphael Lee retweeted
Lawrence Chan@justanotherlaw·
OpenAI has released the language in their contract with the DoW, and it's exactly as Anthropic was claiming: "legalese that would allow those safeguards to be disregarded at will". Note: the first paragraph doesn't say "no autonomous weapons"! It says "AI can't control autonomous weapons as long as existing law (that doesn't exist) or the DoD says so." Similarly, the mass surveillance use cases will "comply with existing law", but many forms of data collection that we'd consider "mass surveillance" are things that the NSA has consistently argued are legal under current law.
Lawrence Chan tweet media
Raphael Lee retweeted
Jay Kreps@jaykreps·
Some Silicon Valley people think @DarioAmodei is talking his book. That all the AI risk talk is hype to drive up the valuation or a (nonsensical) scheme to achieve regulatory capture. My observation as a board member is that this is bullshit. The @AnthropicAI founders and leadership are very earnest and sincere in what they say. They may be wrong, or you may disagree, but this isn't some convoluted ploy: they believe AI is a very, very impactful technological change and want to ensure it goes right. What it means to stand on principle is to do something you believe is right where the cost to you is high. The benefit of seeing this kind of thing is you can tell who actually has principles and is willing to pay that price.
Raphael Lee retweeted
Thompson Paine@dtompaine·
I'm proud to work at @AnthropicAI. I'm proud that Anthropic has been the most ardent and consistent champion of America's national security efforts among U.S. AI labs - work I've been personally involved in. We were the first to offer our models for American warfighters on classified systems, the first to build custom models for national security partners, and the first - and still only - to prohibit any company subject to CCP control, including their foreign subsidiaries, from using our AI. We've been the most vocal supporter of policies like export controls to ensure we protect democracies' lead in AI, shoulder to shoulder with policymakers across both parties and two administrations. I'm proud that the world's best AI talent continues to flock to Anthropic, building the most capable frontier LLMs and products in the market to support this mission. I am proud that through all the noise Anthropic is fighting to continue serving our excellent and committed national security partners, whose trust in us and our products we are deeply grateful for. 🇺🇸
Anthropic@AnthropicAI

A statement on the comments from Secretary of War Pete Hegseth. anthropic.com/news/statement…

Raphael Lee@raphomet·
The biggest surprise about joining Anthropic is that its commitment to safety is not missionwashing. It is not a thin, feel-good mantra papering over profit maximization. Dario and the founders' positions about AI safety, democracy, and ethics have been remarkably consistent since before I arrived. They have repeatedly made decisions that slowed revenue growth because they thought the US and the world would be better for it; this is just the latest and most public one. Of course we refused to allow Claude to be used by the DoW for the operation of autonomous weapons and for mass surveillance of Americans! This is a line worth holding; all AI labs should. I am a skeptical person. I have walked away from institutions, teams, and leaders that did not meet my bar for integrity. Anthropic meets it. I'm proud to work somewhere that behaves consistently with its principles when it matters most.
Anthropic@AnthropicAI

A statement on the comments from Secretary of War Pete Hegseth. anthropic.com/news/statement…

Raphael Lee retweeted
Avital Balwit@AvitalBalwit·
"The years in front of us will be impossibly hard, asking more of us than we think we can give. But in my time as a researcher, leader, and citizen, I have seen enough courage and nobility to believe that we can win—that when put in the darkest circumstances, humanity has a way of gathering, seemingly at the last minute, the strength and wisdom needed to prevail. We have no time to lose."
Dario Amodei@DarioAmodei

The Adolescence of Technology: an essay on the risks posed by powerful AI to national security, economies and democracy—and how we can defend against them: darioamodei.com/essay/the-adol…

Pietro Schirano@skirano·
I worked with Anthropic for 10 months. The reason why their models are so special is because they're trained with care and thoughtfulness. You can cook something, or you can cook with love. Every Anthropic model is cooked with love.