
Who's really in control? THE AI DOC: OR HOW I BECAME AN APOCALOPTIMIST is only in theaters March 27.
Evan Hubinger

@EvanHub
Alignment Stress-Testing lead @AnthropicAI. Opinions my own. Previously: MIRI, OpenAI, Google, Yelp, Ripple. (he/him/his)


'But the rate of change is such that Anthropic co-founder and chief science officer Jared Kaplan, as well as some external experts, believes fully automated AI research could be as little as a year away.'


I'm not a doomer but it's still surreal to tell incredulous normies "yes, a significant number of prominent experts really do believe that superintelligent AI is on the verge of killing everyone."


The government can already legally buy your location data, browsing history, and social media activity without a warrant. The only thing that prevented mass surveillance from that data was the inability to process it all. LLMs fix that. “All lawful purposes” includes this.

@aidan_mclau @scrollvoid This isn't true. Anthropic hasn't offered a "helpful-only" model without safeguards for NatSec use. Claude Gov is a custom model with extra training, including technical safeguards. (We've also had FDEs and researchers implementing it, and we run our own classifier stack.)

NEW: When OpenAI announced its Pentagon deal Friday night, people immediately challenged Sam Altman's claims. Why, they asked, would the DoD suddenly agree to red lines when it had said it would never do so? The answer, sources told me, is that it didn't. theverge.com/ai-artificial-…

In 2021 the Pentagon's Defense Intelligence Agency told Senator Wyden it was purchasing geolocation data from commercial brokers harvested from cell phones and that it did not believe it needed a warrant to analyze Americans' data. This has to be part of what freaked Dario out.



You’re conflating two things. There are DoW restrictions, which are bound to lawful use. And there’s what we deliver technically, which is not required to cover every lawful use. There’s no “breach.” Also, whatever Anthropic gets in a contract, it still has to decide what matters beyond existing law, what to enforce, and when. That’s “trust me bro,” but in a way that sits outside the democratic process.

The real question is whether OpenAI is going to allow the use of AI on unclassified commercial bulk data on Americans, which is what the Pentagon wanted from Anthropic. Anthropic instead narrowed to classified FISA use only, and got kicked.



@natseckatrina @David_Kasten @sama on point two, they have in fact done this and claim they have the authority to do this. • vice.com/en/article/us-… • nytimes.com/2021/01/22/us/… • static01.nyt.com/newsgraphics/d…
