David Abecassis

255 posts

@Volty

Technical Governance Researcher at the Machine Intelligence Research Institute. Formerly a game designer (TFT, LoR, and LoL). Views expressed are my own.

Joined August 2012
120 Following · 2.8K Followers
David Abecassis reposted
Rob Wiblin@robertwiblin·
Every AI lab is working to make their AI helpful, harmless, and honest. Max Harms (@raelifin) thinks this is a complete wrong turn, and that 'aligning' AI to human values is actively dangerous.

In his view a safe AGI must have absolutely no opinion about how the world ought to be, be willingly modifiable, and be entirely indifferent to being shut down. The opposite of all commercial models today.

The key appeal is that so-called 'corrigibility' could be an attractor state – get close enough and the AI actively helps you make it more corrigible over time. That forgiveness would at least give us a shot.

It's a strategy that feels natural within the 'MIRI worldview', recently laid out by his colleagues @ESYudkowsky and @So8res in 'If Anyone Builds It, Everyone Dies'. But it risks causing a different AI catastrophe, because the resulting AI model would necessarily be willing to assist any human operator with a power grab, or indeed any crime at all.

I interviewed Max on the 80,000 Hours Podcast to debate the MIRI worldview, and what we should do to figure out whether corrigibility ought to be our one and only focus. Links below – enjoy!

00:01:56 If anyone builds it, will everyone die? The MIRI perspective on AGI risk
00:24:28 Evolution failed to 'align' us, just as we'll fail to align AI
00:42:56 We're training AIs to want to stay alive and value power for its own sake
00:52:24 Objections: Is the 'squiggle/paperclip problem' really real?
01:05:02 Can we get empirical evidence re: 'alignment by default'?
01:10:17 Why do few AI researchers share Max's perspective?
01:18:34 We're training AI to pursue goals relentlessly — and superintelligence will too
01:24:51 The case for a radical slowdown
01:27:53 Max's best hope: corrigibility as stepping stone to alignment
01:32:34 Corrigibility is both uniquely valuable, and practical, to train
01:45:06 What training could ever make models corrigible enough?
01:51:38 Corrigibility is also terribly risky due to misuse risk
01:58:57 A single researcher could make a corrigibility benchmark. Nobody has.
02:12:20 Red Heart & why Max writes hard science fiction
02:34:08 Should you homeschool? Depends how weird your kids are.
69 replies · 38 reposts · 469 likes · 294.2K views
David Abecassis reposted
Nate Soares ⏹️@So8res·
A tale of two warning shots, #1: COVID happened. Scientists are divided on whether it was a lab leak. The world did not rally against dangerous viral research in labs. The warning shot was squandered.
13 replies · 11 reposts · 293 likes · 39.8K views
David Abecassis reposted
The AI Doc@theaidocfilm·
"The most urgent film of our time." THE AI DOC: OR HOW I BECAME AN APOCALOPTIMIST is only in theaters March 27. Watch the trailer now.
420 replies · 2.2K reposts · 12.5K likes · 6.5M views
David Abecassis@Volty·
@AivokeArt @robbensinger and this could turn to hostility. The CCP is more interested in regulation and diffusion than in pushing frontier capabilities. So against that, we're going to say this is inevitable and it's out of our hands?
1 reply · 0 reposts · 1 like · 27 views
David Abecassis@Volty·
@AivokeArt @robbensinger solar-powered compute. He describes Starship as the most complex machine, which "really wants to explode". The point being that technically, this is a house of cards that just wants to be knocked over. Politically, there is widespread popular skepticism about the impacts of AI,
1 reply · 0 reposts · 1 like · 27 views
Rob Bensinger ⏹️@robbensinger·
Hundreds of scientists, including 3/4 of the most cited living AI scientists, have said that AI poses a very real chance of killing us all. We're in uncharted waters, which makes the risk level hard to assess; but a pretty normal estimate is Jan Leike's "10-90%" of extinction-level outcomes. Leike heads Anthropic's alignment research team, and previously headed OpenAI's.

This actually seems pretty straightforward. There's literally no reason for us to sleepwalk into disaster here. No normal engineering discipline, building a bridge or designing a house, would accept a 25% chance of killing a person; yet somehow AI's engineering culture has corroded enough that no one bats an eye when Anthropic's CEO talks about a 25% chance of research efforts killing every person.

A minority of leading labs are dismissive of the risk (mainly Meta), but even the fact that "will we kill everyone if we keep moving forward?" is hotly debated among researchers seems very obviously like more than enough grounds for governments to internationally halt the race to build superintelligent AI. Like, this would be beyond straightforward in any field other than AI.

Obvious question: How would that even work? Like, I get the argument in principle: "smarter-than-human AI is more dangerous than nukes, so we need to treat it similarly." But with nukes, we have a detailed understanding of what's required to build them, and it involves huge easily-detected infrastructure projects and rare materials.

Response: The same is true for AI, as it's built today. The most powerful AIs today rely on extremely specialized and costly hardware, cost hundreds of millions of dollars to build,¹ and rely on massive data centers² that are relatively easy to detect using satellite and drone imagery, including infrared imaging.³

Q: But wouldn't people just respond by building data centers in secret locations, like deep underground?

Response: Only a few firms can fabricate AI chips — primarily the Taiwanese company TSMC — and one of the key machines used in high-end chips is only produced by the Dutch company ASML. This is the extreme ultraviolet lithography machine, which is the size of a school bus, weighs 200 tons, and costs hundreds of millions of dollars.⁴ Many key components are similarly bottlenecked.⁵ This supply chain is the result of decades of innovation and investment, and replicating it is expected to be very difficult — likely taking over a decade, even for technologically advanced countries.⁶

This essential supply chain, largely located in countries allied to the US, provides a really clear point of leverage. If the international community wanted to, it could easily monitor where all the chips are going, build in kill switches, and put in place a monitoring regime to ensure chips aren't being used to build toward superintelligence. (Focusing more efforts on the chip supply chain is also a more robust long-term solution than focusing purely on data centers, since it can solve the problem of developers using distributed training to attempt to evade international regulations.⁷)

Q: But won't AI become cheaper to build in the future?

Response: Yes, but —

(a) It isn't likely to suddenly become dramatically cheaper overnight. If it becomes cheaper gradually, regulations can build in safety margin and adjust thresholds over time to match the technology. Efforts to bring preexisting chips under monitoring will progress over time, and chips have a limited lifespan, so the total quantity of unmonitored chips will decrease as well.

(b) If we actually treated superintelligent AI like nuclear weapons, we wouldn't be publishing random advances to arXiv, so the development of more efficient algorithms and more optimized compute would happen more slowly. Some amount of expected algorithmic progress would also be hampered by reduced access to chips.

(c) You don't need to ban superintelligence forever; you just need to ban it until it's clear that we can build it without destroying ourselves or doing something similarly terrible. A ban could buy the world many decades of time.

Q: But wouldn't this treaty devastate the economy?

A: It would mean forgoing some future economic gains, because the race to superintelligence comes with greater and greater profits until it kills you. But it's not as though those profits are worth anything if we're dead; this seems obvious enough.

There's the separate issue that lots of investments are currently flowing into building bigger and bigger data centers, in anticipation that the race to smarter-than-human AI will continue. A ban could cause a shock to the economy as that investment dries up. However, this is relatively easy to avoid via the Fed lowering its rates, so that a high volume of money continues to flow through the larger economy.⁸

Q: But wouldn't regulating chips have lots of spillover effects on other parts of the economy that use those chips?

A: NVIDIA's H100 chip costs around $30,000 per chip and, due to its cooling and power requirements, is designed to be run in a data center.⁹ Regulating AI-specialized chips like this would have very few spillover effects, particularly if regulations only apply to chips used for AI training and not for inference.¹⁰ But also, again, an economy isn't worth much if you're dead. This whole discussion seems to be severely missing the forest for the trees, if it's not just in outright denial about the situation we find ourselves in.

Some of the infrastructure used to produce AI chips is also used in making other advanced computer chips, such as cell phone chips; but there are notable differences between these chips. If advanced AI chip production is shut down, it wouldn't actually be difficult to monitor production and ensure that chip production is only creating non-AI-specialized chips. At the same time, existing AI chips could be monitored to ensure that they're used to run existing AIs, and aren't being used to train ever-more-capable models.¹¹

This wouldn't be trivial to do, but it's pretty easy relative to many of the tasks the world's superpowers have achieved when they faced a national security threat. The question is whether the US, China, and other key actors wake up in time, not whether they have good options for addressing the threat.

Q: Isn't this totalitarian?

A: Governments regulate thousands of technologies. Adding one more to the list won't suddenly tip the world over into a totalitarian dystopia, any more than banning chemical or biological weapons did. The typical consumer wouldn't even necessarily see any difference, since the typical consumer doesn't run a data center. They just wouldn't see dramatic improvements to the chatbots they use.

Q: But isn't this politically infeasible?

A: It will require science communicators to alert policymakers to the current situation, and it will require policymakers to come together to craft a solution. But it doesn't seem at all infeasible. Building superintelligence is unpopular with the voting public,¹² and hundreds of elected officials have already named this issue as a serious priority.

The UN Secretary-General and major heads of state are routinely talking about AI loss-of-control scenarios and human extinction. At that point, the cat has already firmly left the bag. (And it's not as though there's anything unusual about governments heavily regulating powerful new technologies.) What's left is to dial up the volume on that talk, translate that talk into planning and fast action, and recognize that "there's uncertainty how much time we have left" makes this a more urgent problem, not less.

Q: But if the US halts, isn't that just ceding the race to authoritarian regimes?

A: The US shouldn't halt unilaterally; that would just drive AI research to other countries. Rather, the US should broker an international agreement where everyone agrees to halt simultaneously. (Some templates of agreements that would do the job have already been drafted.¹³) Governments can create a deterrence regime by articulating clear limits and enforcement actions. It's in no country's interest to race to its own destruction, and a deterrence regime like this provides an alternative path.

Q: But surely there will be countries that end up defecting from such an agreement. Even if you're right that it's in no one's interest to race once they understand the situation, plenty of people won't understand the situation, and will just see superintelligent AI as a way to get rich quick.

A: It's very rare for countries (or companies!) to deliberately violate international law. It's rare for countries to take actions that are widely seen as serious threats to other nations' security. (If it weren't rare, it wouldn't be a big news story when it does happen!)

If the whole world is racing to build superintelligence as fast as possible, then we're very likely dead. Even if you think there's a chance that cautious devs could stay in control as AI starts to vastly exceed the intelligence of the human race (and no, I don't think this is realistic in the current landscape), that chance increasingly goes out the window as the race heats up, because prioritizing safety will mean sacrificing your competitive edge.

If instead a tiny fraction of the world is trying to find sneaky ways to build a small researcher-starved frontier AI project here and there, while dealing with enormous international pressure and censure, then that seems like a much more survivable situation.

By analogy, nuclear nonproliferation efforts haven't been perfectly successful. Over the past 75 years, the number of nuclear powers has grown from 2 to 9. But this is a much more survivable state of affairs than if we hadn't tried to limit proliferation at all, and were instead facing a world where dozens or hundreds of nations possess nuclear weapons.

When it comes to superintelligence, anyone building "god-like AI" is likely to get us all killed — whether the developer is a military or a company, and whether their intentions are good or ill. Going from "zero superintelligences" to "one superintelligence" is already lethally dangerous. The challenge is to block the construction of ASI while there's still time, not to limit proliferation after it already exists, when it's far too late to take the steering wheel. So the nuclear analogy is pretty limited in what it can tell us. But it can tell us that international law and norms have enormous power.

Q: But what about China? Surely they'd never agree to an arrangement like this.

A: The CCP has already expressed interest in international coordination and regulation on AI. E.g., Reuters reported that Chinese Premier Li Qiang said, "We should strengthen coordination to form a global AI governance framework that has broad consensus as soon as possible."¹⁴ And, quoting The Economist:¹⁵

"But the accelerationists are getting pushback from a clique of elite scientists with the Communist Party's ear. Most prominent among them is Andrew Chi-Chih Yao, the only Chinese person to have won the Turing award for advances in computer science. In July Mr Yao said AI poses a greater existential risk to humans than nuclear or biological weapons. Zhang Ya-Qin, the former president of Baidu, a Chinese tech giant, and Xue Lan, the chair of the state's expert committee on AI governance, also reckon that AI may threaten the human race. Yi Zeng of the Chinese Academy of Sciences believes that AGI models will eventually see humans as humans see ants.

"The influence of such arguments is increasingly on display. In March an international panel of experts meeting in Beijing called on researchers to kill models that appear to seek power or show signs of self-replication or deceit. A short time later the risks posed by AI, and how to control them, became a subject of study sessions for party leaders. A state body that funds scientific research has begun offering grants to researchers who study how to align AI with human values. [...]

"In July, at a meeting of the party's central committee called the 'third plenum', Mr Xi sent his clearest signal yet that he takes the doomers' concerns seriously. The official report from the plenum listed AI risks alongside other big concerns, such as biohazards and natural disasters. For the first time it called for monitoring AI safety, a reference to the technology's potential to endanger humans. The report may lead to new restrictions on AI-research activities.

"More clues to Mr Xi's thinking come from the study guide prepared for party cadres, which he is said to have personally edited. China should 'abandon uninhibited growth that comes at the cost of sacrificing safety', says the guide. Since AI will determine 'the fate of all mankind', it must always be controllable, it goes on. The document calls for regulation to be pre-emptive rather than reactive."

The CCP is a US adversary. That doesn't mean they're idiots who will destroy their own country in order to thumb their nose at the US. If a policy is Good, that doesn't mean that everyone Bad will automatically oppose it. Policies that prevent human extinction are good for liberal democracies and for authoritarian regimes, so clueful people on all sides will endorse those policies.

The question, again, is just whether people will clue in to what's happening soon enough to matter. My hope, in writing this, is to wake people up a bit faster. If you share that hope, maybe share this post, or join the conversation about it; or write your own, better version of a "wake-up" warning. Don't give up on the world so easily.
76 replies · 190 reposts · 655 likes · 85K views
David Abecassis@Volty·
@blaiseaguera In the example of play, where does "vorl" come from? It seems wild that it just hits on an actual word from apparently nowhere!
0 replies · 0 reposts · 0 likes · 60 views
David Abecassis reposted
MIRI@MIRIBerkeley·
Reminder: Donations to MIRI before Jan 1 are high-leverage. We’ve got ~$1.6M in 1:1 matching from SFF, over half of which has yet to be claimed! This is real counterfactual matching: whatever doesn’t get matched by the end of Dec 31, we don’t get. 🧵 intelligence.org/2025/12/01/mir…
3 replies · 5 reposts · 55 likes · 41.2K views
David Abecassis reposted
Peter Barnett@peterbarnett_·
the lightcone needs you to lock in
Apply to the 2026 MIRI Technical Governance Team Research Fellowship
2 replies · 5 reposts · 51 likes · 3.4K views
David Abecassis@Volty·
I recommend this work by Oscar Delaney, which revisits the idea that a superpower pursuing ASI could be deterred by (the threat of) sabotage.
Oscar Delaney@oscar__delaney

My new memo reformulates @hendrycks' Superintelligence Strategy argument as three main premises:
1. China will expect to be disempowered if the US develops ASI unilaterally.
2. China will launch cyber/kinetic strikes to prevent this.
3. The US will concede, rather than risk war.

0 replies · 0 reposts · 2 likes · 152 views
David Abecassis reposted
Peter Barnett@peterbarnett_·
We at the MIRI Technical Governance Team just put out a report describing an example international agreement to prevent the creation of superintelligence. 🧵
10 replies · 16 reposts · 122 likes · 29.6K views
David Abecassis@Volty·
I wrote a short review of Senators Hawley and Blumenthal's Artificial Intelligence Risk Evaluation Act of 2025, a recently introduced bill with the potential to dramatically advance the state of federal oversight of AI development. It's a good bill! Link below
1 reply · 1 repost · 6 likes · 323 views
David Abecassis reposted
Eliezer Yudkowsky ⏹️@ESYudkowsky·
If AI improves fast, that makes things worse, but it's not where the central ASI problem comes from.

If your city plans to enslave ultra-smart dragons to plow their fields and roast their coffee, some problems get *worse* if the dragons grow up very quickly. But the core problem is not: "Oh no! What if the huge fire-breathing monsters that could wipe out our city with one terrible breath, that are also each individually much smarter than our whole city put together, that when mature will think at speeds that make any human seem to them like a slow-moving statue, *grow up quickly*? Wouldn't that speed of maturation present a problem?"

If you imagine suddenly finding yourself in a city full of mature dragons, that nonequilibrium situation will then go pear-shaped very quickly. It will go pear-shaped even if you thought you had some clever scheme for controlling those dragons, like giving them a legal system which said that the humans have property rights, such that surely no dragon coalition would dare to suggest an alternate legal system for fear of their own rights being invalidated. (Actual non-straw proposal I hear often.)

Even if you plan to cleverly play off the dragons against each other, so that no dragon would dare to breathe fire for fear of other dragons -- when the dragons are fully mature and vastly smarter than you, they will all look at each other and nod and then roast you.

Really the dragon-raising project goes pear-shaped *earlier*. But that part is trajectory-dependent, and so harder to predict in detail in advance. That it goes grim at *some* point is visible from visualizing the final destination if the dragons *didn't* revolt earlier, and realizing it is not a good situation to be in.

To be sure, if dragons grow up very fast, that *is* even worse. It tacks an unsolvably hard problem onto an even more unsolvably hard problem. But the speed at which dragons mature is not the central problem with planning to raise n' enslave dragons to plow your fields and roast your coffee.

It's that, whether you raise up one dragon or many, you don't have a dragon; the dragons have you.
63 replies · 30 reposts · 348 likes · 49.1K views