Jonathan Stray
@jonathanstray

24.8K posts

Knowing things is a solved problem. Getting along is not. Working on AI, media, and inter-group conflict @CHAI_Berkeley. Got here from computational journalism.

Berkeley · Joined May 2008
2K Following · 10.5K Followers

Pinned Tweet
Jonathan Stray@jonathanstray·
We proved that social media algorithms can be designed to reduce polarization -- and this might not even reduce engagement. We did a 10,000 person field experiment with five different algorithms on three different platforms. Results soon, here's a preview. rankingchallenge.substack.com/p/experiment-f…
Jonathan Stray@jonathanstray·
@MicahCarroll Of course, but that's a well-known problem with alignment-to-preferences too!
Jonathan Stray@jonathanstray·
Alignment without Preferences

I've never been happy with the concept of "preferences." Just not a very good model for how humans choose. Is there a way to define AI alignment without using preferences at all? I think there is: alignment is when we approve of what the machine did, retrospectively. Here's the talk I gave on this at @CHAI_Berkeley docs.google.com/presentation/d…
Jonathan Stray@jonathanstray·
Best analysis yet from @TheZvi: "Altman had a moment of huge leverage, and instead of standing with Anthropic, he caved on the key term in question, ‘all lawful use.’ ... I can only interpret OpenAI’s public statements ... as saying that OpenAI does not view legal surveillance and analysis activities (or legal use of autonomous weapons) as crossing their red lines." thezvi.substack.com/p/secretary-of…
Jonathan Stray@jonathanstray·
I appreciate the sentiment, but my problem is this: OpenAI executed poorly on a matter intricately intertwined with fundamental rights. This from a company which handles an insane amount of private data and is only expected to become more powerful. Regardless of the reason for their shoddy work, I must now consider them a serious risk to civil liberties. Compare this with, e.g., Apple or Google's relationship to the government.
Dean W. Ball@deanwball·
I do not share the cynicism of some with respect to OpenAI's actions in the DoW/Ant dispute. It basically seems to me as though OpenAI was attempting to deescalate last week; whether they executed well is a separate question, but in their defense good execution in such chaos was nearly impossible.

But from where I sit it seems OpenAI tried to reduce tensions and find a productive path forward, while allowing its employees considerable latitude to speak their minds. The easy thing would have been for management to stay quiet and let this happen; they did not do that, and they also stood firm in opposition to the supply-chain risk designation.

In general, OpenAI is unjustly maligned. This is the thing that bothers me the most about Dario's leaked memo; it spends so much time on OpenAI conspiracies and cynicism that I fear industry solidarity in the future will be harder than it needs to be. This is not the last time we will see state interference into frontier AI, and until we build formalized structures for such interference it will be important for the industry to hang tough together. I fear that will be less likely now.
Jonathan Stray@jonathanstray·
@IanBaer Are you suggesting that's me? Because my point here is precisely the opposite.
Ian@IanBaer·
There's a certain type of guy that doesn't believe the CIA has ever operated domestically, because that's illegal. I'm not a hater; if society were only that guy, it would be better. But still, surprising to see.
Jonathan Stray@jonathanstray

@tszzl I sincerely hope all the haters are wrong and DoW doesn't end up using OpenAI tech to spy on all of us, or maybe identify targets for autonomous weapons. Because the current contract prevents neither. Do you disagree?

Jonathan Stray@jonathanstray·
@tszzl This is precisely the problem. Past experience shows there is almost no way to language this meaningfully. I spent years of my life reporting on the NSA, and one thing I learned is that the word games they play are just insane. transformernews.ai/p/openai-penta…
roon@tszzl·
@jonathanstray I think the close reading of the contract language is a nerd trap when the counterparty is the Pentagon rather than, like, Goldman Sachs
roon@tszzl·
feeling sort of gullible today, maybe due to selection effects
Jonathan Stray@jonathanstray·
@tszzl I sincerely hope all the haters are wrong and DoW doesn't end up using OpenAI tech to spy on all of us, or maybe identify targets for autonomous weapons. Because the current contract prevents neither. Do you disagree?
roon@tszzl·
have to say I really enjoy these crashouts. it’s pretty kino to read communication that’s poorly calculated and wasn’t meant for your eyes. so few today are able to speak in these sweeping Shakespearean terms that they fucking hate their competitor and let it blind their calculus
Jonathan Stray@jonathanstray·
It's very unclear whether OpenAI's contract with the DoW allows bulk analysis of Americans' data. This is what DoW wanted and Anthropic refused. Even the revised contract language is ambiguous at best. transformernews.ai/p/openai-penta…
Jonathan Stray@jonathanstray·
OpenAI's renegotiated contract with DoW is an improvement. The problem is they initially signed a contract with no such protections, and then falsely bragged that it had better protections than Anthropic asked for. So leadership either did not know or did not care that the original contract was unacceptable. Either way, I now must imagine that OpenAI is a significant risk to human rights.
Jonathan Stray@jonathanstray·
It's an improvement. The problem with this is that OpenAI initially signed a contract which had no such protections. And then bragged that it had better protections than Anthropic was asking for. So leadership either did not know or did not care that the original contract was unacceptable. Either way, I now must imagine that OpenAI is a significant risk to human rights.
Noam Brown@polynoamial·
tl;dr: @OpenAI will not be deploying to the NSA or other DoW intelligence agencies for now, so that there's time to address potential surveillance loopholes through the democratic process.

Over the weekend it became clear that the original language in the OpenAI / DoW agreement left legitimate questions unanswered, especially around some novel ways that AI could potentially enable legal surveillance. The language is now updated to address this, but I also strongly believe that the world should not have to rely on trust in AI labs or intelligence agencies for their safety and security. Deployment to the NSA and all other DoW intelligence agencies will be withheld so that there is time to address these loopholes through the democratic process before deployment.

I know that legislation can sometimes be slow, but I'm afraid of a slippery slope where we become accustomed to circumventing the democratic process for important policy decisions. When there is bipartisan support and urgency, I have faith that government can act quickly. And as AI becomes more powerful, it's more important than ever that ultimate authority be vested in the public.

I am also planning to become more personally involved with policy at OpenAI. I think now more than ever it's important for researchers to be in the loop so that policy is informed of the extremely fast progress we are seeing.
Sam Altman@sama

Here is a re-post of an internal post:

We have been working with the DoW to make some additions in our agreement to make our principles very clear.

1. We are going to amend our deal to add this language, in addition to everything else: "• Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals. • For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." It's critical to protect the civil liberties of Americans, and there was so much focus on this that we wanted to make this point especially clear, including around commercially acquired information. Just like everything we do with iterative deployment, we will continue to learn and refine as we go. I think this is an important change; our team and the DoW team did a great job working on it.

2. The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA). Any services to those agencies would require a follow-on modification to our contract.

3. For extreme clarity: we want to work through democratic processes. It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty. But we are clear on how the system works (because a lot of people have asked: if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it).

4. There are many things the technology just isn't ready for, and many areas where we don't yet understand the tradeoffs required for safety. We will work through these, slowly, with the DoW, with technical safeguards and other methods.

5. One thing I think I did wrong: we shouldn't have rushed to get this out on Friday. The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future.

In my conversations over the weekend, I reiterated that Anthropic should not be designated as a SCR, and that we hope the DoW offers them the same terms we've agreed to. We will host an All Hands tomorrow morning to answer more questions.

Jonathan Stray@jonathanstray·
@bradrcarson New language excludes Title 50 "agencies." But Title 10 units (military intelligence) are not excluded. Nor are fully autonomous weapons (Directive 3000.09 allows them).
Brad Carson@bradrcarson·
So I'll go out on a limb here. The more I think about it, the more I say: No way DOW put in a contract that GPT will not be used by any DOW intel agency. Why? First, intel fusion (not surveillance) is probably the #1 use case of AI. To say the IC won't use GPT is stunning. 1/N
Jonathan Stray@jonathanstray·
I still have many concerns.
- Fully autonomous weapons are still on the table. Directive 3000.09 allows this.
- What about Title 10 intelligence? Or non-"targeted" surveillance? Lots of loopholes left.
- OpenAI's Friday press release saying that these uses were prohibited by contract was a blatant lie. That's the fundamental problem for your company now.
NatSecKatrina@natseckatrina·
Whatever else can be said about the past 72+ hours, the protections we've shared give other AI labs a better starting place on the issues we all care about (surveillance + autonomous weapons) than they had last week.
Jonathan Stray@jonathanstray·
@mckbrando He signed a contract that would allow the govt to do two of the most concerning possible things (mass surveillance, autonomous weapons) and then sent out a press release that blatantly lied about this fact. I feel for the difficult position he's in, but he blew it.
Brandon McKinzie@mckbrando·
It has been super reassuring to see all the open dialogue on Slack the last few days. I have a lot of respect for Sam's ability to be inundated with critical feedback, listen to that feedback, and work hard to make things right. Happy to work at OpenAI.
Sam Altman@sama

[Same internal-post re-post quoted above.]
corsaren@corsaren·
@MrJTroyer I think OAI’s position (from their blogpost) is that cloud deployment makes autonomous kill vehicles essentially nonviable due to latency, but I also heard that Anthropic disagreed with this assessment? There may be a basic technology capability disagreement there.
corsaren@corsaren·
Worth asking yourself if you would have predicted this outcome given how you were modeling Sam/OpenAI's motives and behaviors over the last couple of days. A lot of people seemed to think that the weak parts of the og contract language were intentional and duplicitous.
Sam Altman@sama

[Same internal-post re-post quoted above.]
Jonathan Stray@jonathanstray·
@UnderSecretaryF @sama @DeptofWar What about Title 10 intelligence? What about non-"targeted" surveillance, e.g. bulk analysis of commercially available data (purchasable browser history, location, etc.)? If domestic surveillance is "generally" not permitted, when is it permitted?
Senior Official Jeremy Lewin@UnderSecretaryF·
As @sama notes, the OpenAI - @DeptofWar contract now includes critical new language to accomplish two mutual and related goals (limits on domestic surveillance while upholding democratic and sovereign control over the use of integrated systems):

- Most importantly, it reflects DoW's commitment not to use GPT for domestic surveillance of Americans, and does so in a way that is serious and specific. As DoW has been saying, their authorities generally do not permit such activity and it has never been an object of these AI model contracts. That being said, OpenAI rightfully wanted to make sure that was captured clearly. The new language includes limits related both to the use of commercially available information for targeted surveillance and related to integration into Title 50 IC community components. To be clear: the government intends to honor the contract as written, including its limitations.

- At the same time, the contract delineates its limitations while continuing to reflect a respect for law and the democratic process. By defining prohibited surveillance practices in a very specific and discrete manner—and yes, through continued references to our legal and constitutional framework—the contract avoids the serious governance concerns associated with vaguer or more discretionary prohibitions. Vaguer provisions, unmoored from legal definitions and authorities, would both inappropriately vest too much interpretive discretion in a private counterparty and risk inadvertent abuse or violations which, if unaddressed, would undermine, rather than bolster, usage limitations.

- As many have recognized, it is simply intolerable for the government to integrate into its most sensitive operations a system which is subject to the subjective interpretation of broad and indefinite terms of service by a private company. The solution isn't more breadth but rather specificity—grounded both in legal authorities and technical capabilities. The revised contract, in its totality, sets a new industry standard for thoughtfulness in this regard.

- Ultimately, as Sam has said, there are myriad policy questions to be answered in the responsible use of AI, both inside the government and in the private and commercial domains. In a democratic society, these questions must be answered through legal processes—at the ballot box, in the courts, through regulatory rulemaking, and in Congress. This shouldn't have to be repeated as much as it has.

We remain grateful for @OpenAI's partnership, and that of @xAI, in building great tools to help protect our nation and its great warfighters. America's AI leadership and national security are inextricably linked. God bless America and our troops 🇺🇸

PS — yes, this is legalistic. But that's what a contract is. It's a legal agreement. Using less "legalistic" words doesn't clarify meaning—it leaves it open for greater dispute and misunderstanding later.
Sam Altman@sama

[Same internal-post re-post quoted above.]