Pinned Tweet
Aaron Scher
889 posts

Aaron Scher
@aaronscher
Technical AI Governance research @MIRIBerkeley Speaking only for myself
Joined November 2015
610 Following  369 Followers

@LibertarianLars I've thought about this question a lot! Well mainly the "how do you make sure they are actually stopped if they say so" question. arxiv.org/abs/2506.15867 arxiv.org/abs/2511.10783

@aaronscher How will you get Chinese companies to stop, so you will have us all working for China? Not too bright, are you?

I’m really excited about the Main Ask for this march: every CEO must publicly commit to pausing frontier AI development if every other lab does the same.
Michaël Trazzi@MichaelTrazzi
On our way to OpenAI!

@BogdanIonutCir2 @OptiMiserJoe @MIRIBerkeley @LisaThiergart Yeah, this seems plausible to me (though risky). I would love to live in a world where we had a pause that we were sure would last 10 years and the key policy question was how we wanted to allocate a trillion dollar investment pool between AI agents and WBEs.

@OptiMiserJoe @MIRIBerkeley @aaronscher @LisaThiergart my best guess for what to do with a short-ish pause is probably a mix of automating prosaic AI safety research first, probably with something like AI agents; and pushing much more for [including lo-fi] WBEs to automate the other, harder to verify x-risk-relevant research

If world leaders agree to halt or limit AI development, how do we verify that nations are actually keeping their commitments?
Joe Rogero writes about the three goals for verification mechanisms identified by Technical Governance researchers @aaronscher and @LisaThiergart👇


@BogdanIonutCir2 @OptiMiserJoe @MIRIBerkeley @LisaThiergart I am happy to hear plans you think are better!

@OptiMiserJoe @MIRIBerkeley @aaronscher @LisaThiergart doesn't seem that obvious to me, but in any case, it seems worth pointing out that even this relatively large ask would still be a relatively small respite during which something else must be done to make things better longer-term x.com/BogdanIonutCir…
Bogdan Ionut Cirstea@BogdanIonutCir2
@MIRIBerkeley @aaronscher @LisaThiergart (I also have doubts whether the benefits outweigh the risks - e.g. power concentration - but that's a separate broader topic)

@OlesSeymour It's not just CEOs! E.g., Turing award winners Yoshua Bengio and Geoffrey Hinton are worried about this. As are hundreds of other non-industry experts. Personally, I've thought a lot about this and think the arguments stand for themselves. aistatement.com

@aaronscher This is what happens when CEOs overhype the danger of AI.

@swombat I am not aware of such a march and I think that's a bad thing to march for. Cancer cures do not have the side effect of killing every human on earth. ifanyonebuildsit.com

There's also a march where they want every cancer researcher to stop researching cancer cures if every other cancer researcher does the same.
Aaron Scher@aaronscher
I’m really excited about the Main Ask for this march: every CEO must publicly commit to pausing frontier AI development if every other lab does the same.

@GalacticMindHQ I agree that the outcome is not already decided. But we’re on a trajectory where everybody will die. We need to get off that trajectory, that’s how we unseal our fate. If you want to engage with the long form version of these arguments I suggest reading ifanyonebuildsit.com

@aaronscher Calling AI development “anti human” assumes the outcome is already decided..
Every major technology or tool has carried risk.. even fire
What matters is how we guide and use it
We have the opportunity to transform humanity for the better .. past even survival economics

This is anti human and anti progress
Aaron Scher@aaronscher
I’m really excited about the Main Ask for this march: every CEO must publicly commit to pausing frontier AI development if every other lab does the same.

@CHUBBYdotAI @justjayvi Second, yes, there’s plenty of good things to build! Including with current AIs (which would be fine to keep using, maybe subject to some monitoring).

@CHUBBYdotAI @justjayvi First I want to object to the frame. If somebody was like “option a: nothing, option b: I give you a million dollars today and kill you tomorrow”, we would basically all choose option a! Economic growth doesn’t matter if we’re dead. 1/2

@kyleshannon The basic problem with going forward is that we would all die. We don’t understand the AIs we’re building and we don’t know how to get them to have good goals. Powerful systems with the wrong goals would wipe us out. If you haven’t read it, I recommend ifanyonebuildsit.com

And are you anticipating that China will join the pause?
In the history of humanity, the march of technological advancement has never paused.
The fear is the same each time, and I understand it may feel different with AI, but I can't see a scenario where anything slows down.
We didn't ask for it. It's here. The only way forward is through. No?

@kyleshannon Yes, for this to work, Chinese AI development also needs to pause.
I think we have some counterexamples. Bio and chemical weapons conventions. The world banned CFCs. Human cloning. Obviously there are some disanalogies, but it's wrong to say the march never paused.

@yatharthmaan On 1, the issue is that our current trajectory also involves the AI killing everybody, not exactly making things better. On 2, that’s why we’ll eventually need a verifiable international agreement in which we make sure everybody is following the rules. arxiv.org/abs/2511.10783

@aaronscher Yes. AI will make things better for everyone. And China won't listen to you guys.

@HowardVega14 Yes! They are part of “every CEO” and a pause would not last more than a few months (to a year or two) without Chinese participation.

@justjayvi I said it in the post. If you’re like “what would pausing entail” then my tentative answer is primarily compute caps but also probably restrictions on new algorithms.

@aaronscher Can someone tell me actually what are they protesting??
They don't want AI at all? They want us to freeze on 4o? Don't boil the oceans? No matrix multiplications?
genuinely asking for a definition, not a troll post

@DirkBruere Yep! “Every CEO” includes the top Chinese companies too!

@HKallioGoblin I agree that’s a concern! It’s why we will eventually need verifiable international agreements in which we check that everybody is following the rules. Figuring out the details to those plans is what I do in my day job! arxiv.org/abs/2511.10783

@aaronscher China won't do that. To make this safe it's good to remember the rule: either you do it yourself, or your opponent does it. Better that the whole world develops AI, and BCI.

@techpupparent @MIRIBerkeley @LisaThiergart I agree that compute tracking is very important! That’s the first big category of mechanism in the report!

@MIRIBerkeley @aaronscher @LisaThiergart Verification without compute tracking is theater. Hardware supply chains remain the only real leverage point.

