jeremy

6.3K posts


@jerhadf

@AnthropicAI. personal views only

New York, NY · Joined January 2014
1.6K Following · 3.3K Followers
jeremy reposted
Felix Rieseberg @felixrieseberg
A small ship I love: We made Claude.ai and our desktop apps meaningfully faster this week. We moved our architecture from SSR to a static @vite_js & @tan_stack router setup that we can serve straight from workers at the edge. Time to first byte is down 65% at p75, prompts show up 50% sooner, navigation is snappier. We're not done (not even close!) but we care and we'll keep chipping away. Aiming to make Claude a little better every day.
72 replies · 68 reposts · 1.7K likes · 216.5K views
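The setup this tweet describes (a static Vite build served straight from edge workers, with TanStack Router handling navigation client-side) hinges on one routing rule at the edge: requests for built assets pass through, while application routes fall back to `index.html`. A minimal sketch of that rule, assuming a hypothetical `resolveAssetPath` helper — this is an illustration of the general pattern, not Anthropic's actual code:

```typescript
// Illustrative sketch of the "static SPA from an edge worker" pattern.
// A static Vite build is a folder of hashed asset files plus index.html;
// the worker serves asset requests directly and rewrites everything else
// to index.html so the client-side router can resolve the route.
function resolveAssetPath(pathname: string): string {
  const lastSegment = pathname.split("/").pop() ?? "";
  // Paths with a file extension (e.g. /assets/app-abc123.js) are real
  // static files; extensionless paths (e.g. /chat/42) are client routes.
  return lastSegment.includes(".") ? pathname : "/index.html";
}
```

Because every response is a pre-built static file, it can be served from the edge location nearest the user, which is what drives the time-to-first-byte improvement the tweet cites.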
jeremy reposted
Thariq @trq212
i think we might have undersold 1M context tbh, the performance is so so good, I really just don't clear the context window much these days
262 replies · 68 reposts · 2.1K likes · 91.3K views
jeremy reposted
Logan Graham @logangraham
Back in ~November, our team picked a stretch goal of seeing if we could find and fix vulnerabilities in Firefox with Opus 4.6. In 2 weeks, we found 22, roughly 1/5th of all high-severity CVEs in a year. For our team, this feels like a Rubicon moment.
16 replies · 51 reposts · 344 likes · 29.6K views
James Brady @james_elicit
Is anybody else getting absolutely bonkers hallucinations from Claude!? I just tried to check a couple of things off my todo list 😅
5 replies · 2 reposts · 45 likes · 20K views
jeremy reposted
Jasmine Wang @j_asminewang
I would really appreciate it if independent legal counsel could red-team this contract modification language
Sam Altman @sama

Here is a re-post of an internal post:

We have been working with the DoW to make some additions to our agreement to make our principles very clear.

1. We are going to amend our deal to add this language, in addition to everything else: "• Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals. • For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." It's critical to protect the civil liberties of Americans, and there was so much focus on this that we wanted to make this point especially clear, including around commercially acquired information. Just like everything we do with iterative deployment, we will continue to learn and refine as we go. I think this is an important change; our team and the DoW team did a great job working on it.

2. The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA). Any services to those agencies would require a follow-on modification to our contract.

3. For extreme clarity: we want to work through democratic processes. It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty. But we are clear on how the system works (because a lot of people have asked: if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it).

4. But there are many things the technology just isn't ready for, and many areas where we don't yet understand the tradeoffs required for safety. We will work through these, slowly, with the DoW, with technical safeguards and other methods.

5. One thing I think I did wrong: we shouldn't have rushed to get this out on Friday. The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future.

In my conversations over the weekend, I reiterated that Anthropic should not be designated as an SCR, and that we hope the DoW offers them the same terms we've agreed to. We will host an All Hands tomorrow morning to answer more questions.

23 replies · 19 reposts · 440 likes · 74.4K views
jeremy @jerhadf
@Miles_Brundage @sammcallister @aidan_mclau @scrollvoid 1. Not sure what you mean? But yes, I think people would notice if the models were being misused. 2. Yes, public info on Claude for Gov models is more brief, but gov models are similar to prod models, which we are more public about in risk reports / model cards.
1 reply · 0 reposts · 2 likes · 144 views
Miles Brundage @Miles_Brundage
Thanks! Couple of thoughts:
- "hand over" was vague, but not sure that doesn't count? Like, would anyone notice if something happened, is one framing
- IIRC the Risk Report basically says you don't do the same testing as prod; the info there is very hedged/brief vs. hundreds of pages per prod model

(To be clear, not saying Claude Gov is bad; in fact I think Claude Gov is good. Just think this would ideally all be better understood.)
2 replies · 0 reposts · 8 likes · 527 views
Aidan McLaughlin @aidan_mclau
i personally don’t think this deal was worth it
204 replies · 104 reposts · 3K likes · 527.7K views
jeremy @jerhadf
@CedricWhitney @Miles_Brundage Claude for Gov and the Palantir deployment are effectively the same thing, have the same safeguards, and use the same models. Anthropic also works closely with Palantir on, and has visibility into, the deployments, and our applied AI team is in the loop, so it's not like models are just "tossed over the fence"
0 replies · 0 reposts · 7 likes · 433 views
jeremy reposted
Mo Bavarian @mobav0
The Anthropic SCR designation is unfair, unwise, and an extreme overreaction. Anthropic is filled with brilliant, hard-working, well-intentioned people who truly care about Western civilization and democratic nations' success in frontier AI. They are real patriots. Designating an organization that has contributed so much, and with so much integrity, to pushing AI forward does not serve the country or humanity well.
14 replies · 32 reposts · 409 likes · 35.5K views
jeremy reposted
Alex Albert @alexalbert__
I go offline for one week:
- DoW saga
- everyone from Trump to Katy Perry posts about us
- Claude hits #1 on the App Store
- sign-up records broken, servers are melting
- oh, also we shipped a ton of new stuff in record time

starting to think this is what the takeoff feels like
75 replies · 27 reposts · 1.3K likes · 53K views
jeremy @jerhadf
@Miles_Brundage Can answer some of this with publicly available info:
- we implement realtime (online) classifiers to block uses of concern (like CBRN), as well as offline classifiers. The use of these classifiers has some rare exemptions through a vetting/KYC process
- for Claude Gov, we don't hand over the weights; these models are run in the classified gov cloud (i.e. AWS Secret and AWS Top Secret)
- gov Claude models go through the same rigorous safety testing as our prod models
- an important bit is that reduced refusals in a gov context != helpful-only or safeguards-free. For instance, Claude may refuse to respond with sensitive classified info, and gov Claude may refuse this less; that is "reduced refusals" but *not* helpful-only.

Claude Gov announcement (Jun 6, 2025): anthropic.com/news/claude-go…
ASL-3 Deployment Safeguards Report (May 2025 PDF): anthropic.com/asl3-deploymen…
Redacted Risk Report Feb 2026 (PDF): anthropic.com/feb-2026-risk-…
Frontier Safety Roadmap: anthropic.com/responsible-sc…
Next-gen Constitutional Classifiers (Jan 9, 2026): anthropic.com/research/next-…
2 replies · 6 reposts · 75 likes · 3.9K views
Miles Brundage @Miles_Brundage
@sammcallister @aidan_mclau @scrollvoid FWIW I'm a pretty close observer of what Anthropic has said on this topic + have never heard that before re: classifier stack (and also I don't know what it means exactly - what does "we run" mean - don't you hand over the weights? Who is running what where? Etc.)
4 replies · 0 reposts · 70 likes · 5K views
jeremy reposted
sam mcallister @sammcallister
@aidan_mclau @scrollvoid This isn't true. Anthropic hasn't offered a "helpful-only" model without safeguards for NatSec use. Claude Gov is a custom model with extra training, including technical safeguards. (We've also had FDEs and researchers implementing it, and we run our own classifier stack.)
15 replies · 37 reposts · 554 likes · 127.3K views
jeremy reposted
Shakeel @ShakeelHashim
As @haydenfield wrote earlier today, "OpenAI’s deal is much softer than the one Anthropic was pushing for." "Every aspect of it boils down to: If it’s technically legal, then the US military can use OpenAI’s technology to carry it out." And despite OpenAI's assertions, the DoW *does* conduct domestic surveillance using commercial data.
1 reply · 8 reposts · 74 likes · 8.4K views
jeremy reposted
Shakeel @ShakeelHashim
Lots of new, hard-to-follow details today about the OpenAI-Pentagon deal. Here's a roundup of the most important things about using commercially available data for surveillance on Americans.

TL;DR: It seems the Pentagon wanted Anthropic to allow this, and Anthropic's refusal is what blew up their negotiations. OpenAI is claiming its agreement with the DoD doesn't allow it, but has yet to provide concrete proof — and its executives have incorrectly described the Pentagon's current approach to these practices.

Anthropic and the DoD couldn't reach a deal because "the Pentagon wanted the company to allow for the collection and analysis of unclassified, commercial bulk data on Americans, such as geolocation and web browsing data", per the NYT and The Atlantic. Anthropic refused. (1)

OpenAI's deal with the DoD "does not explicitly prohibit the collection of Americans' publicly available information", per Axios (and per the portions of the contract that OpenAI has shared publicly). (2)

When asked about "getting and/or analyzing commercially available data at scale", OpenAI's Head of National Security Partnerships @natseckatrina said that "The Pentagon has no legal authority to do this". (3) This is false — the DIA already does this, and believes itself to have the legal authority to do so — it's said that it "does not construe the Carpenter decision to require a judicial warrant endorsing purchase or use of commercially-available data for intelligence purposes". (4)

When asked about this another time, @natseckatrina gave a different answer: "We can't protect against a government agency buying commercially available data sets, but our contract incorporates a prohibition on mass domestic surveillance as a binding condition of use." (5)

@boazbaraktcs, who works on safety and alignment at OpenAI, addressed the matter too. He said: "The DoW is prohibited by law from engaging in any domestic mass surveillance, and @USWREMichael wrote that it would be profoundly un-American to do so, including for analyzing communication of Americans by purchasing data from commercial sources. Hence us and the DoW see eye to eye in our interpretation of domestic mass surveillance. They have no desire to do this, and we have no intention to allow it." (6)

He also said: "The DoW has not asked us to support collection or analysis of bulk data on Americans, such as geolocation data, web browsing data and personal financial information purchased from data brokers, and our agreement does not permit it." (7)

When asked for the clause in the contract that showed this, he said: "Our legal and policy teams have worked with the DoW and this interpretation is shared between both sides. They will provide more details on the issue of commercially acquired datasets in the coming days." (8)

One other thing worth noting here is that @natseckatrina has said the OpenAI-DoD deal does not allow NSA Title 50 work. But as far as I understand, the DIA's use of commercial data is Title 10 work. Regardless, it's unclear how the NSA is excluded from the contract, or whether the exclusion also applies to the DoD's other intelligence agencies (e.g. the DIA). (9)

Sources:
(1): nytimes.com/2026/03/01/tec… theatlantic.com/technology/202…
(2): axios.com/2026/03/01/ope…
(3): x.com/natseckatrina/…
(4): thehill.com/policy/nationa…
(5): x.com/natseckatrina/…
(6): x.com/boazbaraktcs/s…
(7): x.com/boazbaraktcs/s…
(8): x.com/boazbaraktcs/s…
(9): x.com/natseckatrina/…
7 replies · 75 reposts · 361 likes · 42.8K views
jeremy reposted
Leo Gao @nabla_theta
the contract snippet from the openai dow blog post is so obviously just "all lawful use" followed by a bunch of stuff that is not really operative except as window dressing. the referenced DoD Directive 3000.09 basically says the DoD gets to decide when autonomous weapons systems are deployable. as others have covered, there are a ton of mass domestic surveillance loopholes not covered by the 4A, national security act, FISA, etc.
13 replies · 58 reposts · 884 likes · 163.2K views
jeremy reposted
Mike Krieger @mikeyk
Claude is #1 in the App Store today — I want to say a huge thank you to all of our new (and existing!) users for the support. We’re working hard for you, please share your thoughts and feedback along the way.
267 replies · 498 reposts · 6.4K likes · 413.9K views
jeremy reposted
julia @mooncat_is
This is technically lawful under existing mass surveillance law, and OAI's statement as written allows for this. I'm glad we set a red line here. I wouldn't want my legacy to be enabling the end of freedom and privacy, and its replacement with a panopticon. I hope lawmakers act.
8 replies · 25 reposts · 256 likes · 14.1K views