
jeremy
6.3K posts

jeremy
@jerhadf
@AnthropicAI. personal views only
New York, NY · Joined January 2014
1.6K Following · 3.3K Followers
jeremy retweeted

A small ship I love: We made Claude.ai and our desktop apps meaningfully faster this week.
We moved our architecture from SSR to a static @vite_js & @tan_stack router setup that we can serve straight from workers at the edge. Time to first byte is down 65% at p75, prompts show up 50% sooner, navigation is snappier.
We're not done (not even close!) but we care and we'll keep chipping away. Aiming to make Claude a little better every day.
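The migration described above — prebuilding the client with Vite and serving it straight from edge workers instead of server-side rendering — can be sketched roughly like this. This is a minimal illustration assuming Cloudflare Workers' static-assets binding (`env.ASSETS`); Anthropic's actual setup is not public.

```typescript
// Hypothetical sketch: an edge worker that serves a prebuilt static Vite
// bundle. With no server-side render step, the first byte is a cached
// static file resolved at the edge, which is what drives TTFB down.
const worker = {
  async fetch(
    request: Request,
    env: { ASSETS: { fetch: (req: Request) => Promise<Response> } }
  ): Promise<Response> {
    // env.ASSETS is the Workers static-assets binding; it resolves the
    // request path against the Vite build output (e.g. dist/).
    return env.ASSETS.fetch(request);
  },
};

export default worker;
```

Client-side routing (here, TanStack Router) then takes over navigation in the browser, so only the first request touches the edge at all.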
jeremy retweeted

to reiterate: whatever went wrong between amodei & hegseth, whatever rivalry between the labs, this is a massive overreaction and a dark precedent
Katrina Manson@KatrinaManson
SCOOP: The Pentagon has formally notified Anthropic that it’s deemed the artificial intelligence company and its products a risk to the US supply chain, according to a senior defense official. bloomberg.com/news/articles/…
jeremy retweeted


@Miles_Brundage @sammcallister @aidan_mclau @scrollvoid @Miles_Brundage
1. not sure what you mean? but yes, I think people would notice if the models were being misused.
2. yes public info on claude for gov models is more brief, but gov models are similar to prod models that we are more public about in risk reports / model cards.

Thanks!
Couple thoughts:
- “hand over” was vague but not sure that doesn’t count? Like would anyone notice if something happened, is one framing
- IIRC the Risk Report basically says you don’t do the same testing as prod - the info there is v hedged/brief vs hundreds of pages per prod model
(To be clear, not saying Claude Gov bad, and in fact I think Claude Gov good, just think this would ideally all be better understood)

@CedricWhitney @Miles_Brundage Claude for Gov and the Palantir deployment are effectively the same thing, have the same safeguards, and use the same models. Anthropic also works closely with Palantir + has visibility into the deployments + our applied AI team is in the loop, so it’s not like models are just “tossed over the fence”

@Miles_Brundage @jerhadf @sammcallister @aidan_mclau @scrollvoid just for my own clarity, the above refers to only Claude Gov? or also to the Palantir deployment? (@sammcallister @jerhadf - just trying to pierce the fog of war)
jeremy retweeted

Anthropic's SCR designation is unfair, unwise, and an extreme overreaction. Anthropic is filled with brilliant, hard-working, well-intentioned people who truly care about Western civilization & democratic nations' success in frontier AI. They are real patriots.
Designating an organization which has contributed so much to pushing AI forward and with so much integrity does not serve the country or humanity well.
jeremy retweeted

@postlabor2030 @Miles_Brundage @sammcallister @aidan_mclau @scrollvoid @postlabor2030 not all of it is proprietary. see some info here: anthropic.com/news/claude-go…

@Miles_Brundage @sammcallister @aidan_mclau @scrollvoid I (a senior gov employee at the time) asked an anthropic rep what the technical difference was between gov vs regular claude and they said they couldn't tell me because it was proprietary, fwiw

@Miles_Brundage can answer some of this w/ publicly available info:
- we implement realtime (online) classifiers to block uses of concern (like CBRN), as well as offline classifiers. the use of these classifiers has some rare exemptions through a vetting/KYC process
- for claude gov, we don’t hand over the weights; these models are run in the classified gov cloud (ie AWS Secret and AWS Top Secret)
- gov claude models go through the same rigorous safety testing as our prod models
- an important bit is that reduced refusals in a gov context != helpful-only or safeguards-free. for instance claude may refuse to respond with sensitive classified info and gov claude may refuse this less - that is “reduced refusals” but *not* helpful-only.
Claude Gov announcement (Jun 6, 2025): anthropic.com/news/claude-go…
ASL-3 Deployment Safeguards Report (May 2025 PDF): anthropic.com/asl3-deploymen…
Redacted Risk Report Feb 2026 (PDF): anthropic.com/feb-2026-risk-…
Frontier Safety Roadmap: anthropic.com/responsible-sc…
Next-gen Constitutional Classifiers (Jan 9, 2026): anthropic.com/research/next-…

@sammcallister @aidan_mclau @scrollvoid FWIW I'm a pretty close observer of what Anthropic has said on this topic + have never heard that before re: classifier stack (and also I don't know what it means exactly - what does "we run" mean - don't you hand over the weights? Who is running what where? Etc.)
jeremy retweeted

@aidan_mclau @scrollvoid This isn't true. Anthropic hasn't offered a "helpful-only" model without safeguards for NatSec use. Claude Gov is a custom model with extra training, including technical safeguards. (We've also had FDEs and researchers implementing it, and we run our own classifier stack.)
jeremy retweeted

As @haydenfield wrote earlier today, "OpenAI’s deal is much softer than the one Anthropic was pushing for."
"Every aspect of it boils down to: If it’s technically legal, then the US military can use OpenAI’s technology to carry it out."
And despite OpenAI's assertions, the DoW *does* conduct domestic surveillance using commercial data.

jeremy retweeted

This is an important point from Logan Koepke: OpenAI is claiming that the DoW lacks the authority to get commercial data at scale, despite extensive reporting that it has done so
logan koepke@jlkoepke
@natseckatrina @David_Kasten @sama on point two, they have in fact done this and claim they have the authority to do this. • vice.com/en/article/us-… • nytimes.com/2021/01/22/us/… • static01.nyt.com/newsgraphics/d…
jeremy retweeted

Lots of new, hard to follow details today about the OpenAI-Pentagon deal. Here's a roundup of the most important things about using commercially available data for surveillance on Americans.
TL;DR: It seems the Pentagon wanted Anthropic to allow this, and Anthropic's refusal is what blew up their negotiations. OpenAI is claiming its agreement with the DoD doesn't allow it, but has yet to provide concrete proof — and its executives have incorrectly described the Pentagon's current approach to these practices.
Anthropic and the DoD couldn’t reach a deal because “the Pentagon wanted the company to allow for the collection and analysis of unclassified, commercial bulk data on Americans, such as geolocation and web browsing data”, per the NYT and The Atlantic. Anthropic refused. (1)
OpenAI’s deal with the DoD “does not explicitly prohibit the collection of Americans' publicly available information”, per Axios (and per the portions of the contract that OpenAI has shared publicly). (2)
When asked about "getting and/or analyzing commercially available data at scale", OpenAI's Head of National Security Partnerships @natseckatrina said that “The Pentagon has no legal authority to do this” (3).
This is false — the DIA already does this, and believes itself to have the legal authority to do so — it's said that it “does not construe the Carpenter decision to require a judicial warrant endorsing purchase or use of commercially-available data for intelligence purposes" (4)
When asked about this another time, @natseckatrina gave a different answer: "We can't protect against a government agency buying commercially available data sets, but our contract incorporates a prohibition on mass domestic surveillance as a binding condition of use." (5)
@boazbaraktcs, who works on safety and alignment at OpenAI, addressed the matter too.
He said: "The DoW is prohibited by law from engaging in any domestic mass surveillance, and @USWREMichael wrote that it would be profoundly un-American to do so, including for analyzing communication of Americans by purchasing data from commercial sources. Hence us and the DoW see eye to eye in our interpretation of domestic mass surveillance. They have no desire to do this, and we have no intention to allow it." (6)
He also said "The DoW has not asked us to support collection or analysis of bulk data on Americans, such as geolocation data, web browsing data and personal financial information purchased from data brokers, and our agreement does not permit it." (7)
When asked for the clause in the contract that showed this, he said: "Our legal and policy teams have worked with the DoW and this interpretation is shared between both sides. They will provide more details on the issue of commercially acquired datasets in the coming days." (8)
One other thing worth noting here is that @natseckatrina has said the OpenAI-DoD deal does not allow NSA Title 50 work. But as far as I understand, the DIA's use of commercial data is Title 10 work. Regardless, it's unclear how the NSA is excluded from the contract, or if the exclusion also applies to the DoD's other intelligence agencies (e.g. DIA). (9)
Sources:
(1): nytimes.com/2026/03/01/tec…
theatlantic.com/technology/202…
(2): axios.com/2026/03/01/ope…
(3): x.com/natseckatrina/…
(4): thehill.com/policy/nationa…
(5): x.com/natseckatrina/…
(6): x.com/boazbaraktcs/s…
(7): x.com/boazbaraktcs/s…
(8): x.com/boazbaraktcs/s…
(9): x.com/natseckatrina/…
jeremy retweeted

the contract snippet from the openai dow blog post is so obviously just "all lawful use" followed by a bunch of stuff that is not really operative except as window dressing.
the referenced DoD Directive 3000.09 basically says the DoD gets to decide when autonomous weapons systems are deployable.
as others have covered, there are a ton of mass domestic surveillance loopholes not covered by the 4A, national security act, FISA, etc.
jeremy retweeted

Claude is #1 in the App Store today — I want to say a huge thank you to all of our new (and existing!) users for the support. We’re working hard for you, please share your thoughts and feedback along the way.
