Tryfecta⏸️

2.7K posts

Tryfecta⏸️

@tryfectaa

Joined October 2023
916 Following · 85 Followers
Tryfecta⏸️@tryfectaa·
@FournesMaxime @deanwball @PauseAI If I am being gracious, what's driving Dean is actually a subconscious projection of his own terrible sense of shame and regret for his hand in the AI policy and discourse that brought us to this point.
1
0
0
10
Maxime Fournes⏸️@FournesMaxime·
This framing is obviously disingenuous, but sure! I'll play along and answer directly...

Points 2 through 4 rest on the same wrong assumption: that a pause means controlling people. It means controlling compute. Training a frontier model requires data centers that use as much power as a small city, chips from a handful of manufacturers (TSMC, ASML), and months of continuous runs. This is infrastructure visible from space, far easier to track than fissile material. We built a verification regime for nuclear material 70 years ago. Nobody revoked physicists' passports to do it.

On metrics (point 1): graduated capability thresholds, mandatory safety evaluations, an international technical body to oversee it. On scope (point 5): yes, it needs to include the 5-10 countries with real compute capacity. Both of these are hard. So was the IAEA. So was the Montreal Protocol. Hard is not a reason to do nothing while the race continues.

Now on the framing itself. These questions assume that if I can't hand @deanwball a finished treaty, the case for a pause collapses. But that gets the order of operations entirely backwards. The Montreal Protocol was not designed by the people who first raised the alarm about CFCs. The IAEA was not drawn up by anti-nuclear campaigners. What happened every time is that public concern built political will, then governments mandated their best people to design the technical solutions.

If we put a serious team on this, a handful of top scientists with real funding and a DARPA-style mandate, they could design a workable compute governance regime. The supply chain is concentrated, the infrastructure is massive and energy-intensive, the tracking problem is tractable. Do not pretend that this is some unsolvable mystery. The only real problem is that the political will is still insufficient (although growing fast).

Demanding a full implementation blueprint from an advocacy movement before engaging with the substance is a tactic we've seen many times. The actual question Dean should be engaging with is simpler and harder: multiple actors are racing to build something that is likely to end civilization as we know it, and none of them can stop alone. What is his plan for that?
Dean W. Ball@deanwball

Here are some questions I wish "Pause" and "Stop" advocates would address:

1. Assuming we achieve the desired policy goal through a bilateral US/China agreement, what would be the specific metric or objective we would say needs to be satisfied in advance? Who decides whether we have satisfied them? What if one party believes we have satisfied them but the other does not?
2. If the goal is achieved through a bilateral US/China agreement, would we need capital controls to ensure that U.S. investors cannot fund semiconductor fabs, data centers, or AI research labs in countries other than the U.S. and China?
3. Would we need to revoke the passports of U.S.-based AI researchers and semiconductor engineers to prevent them leaving America to join AI-related ventures elsewhere? How else would the U.S. and China keep researchers within their borders?
4. How should we grapple with the fact that (2) and (3) are common features of autocratic regimes?
5. Do the above questions mean that this really should be a global agreement, signed by all countries on Earth, or at least those with the theoretical ability to host large-scale data centers (probably Vanuatu doesn't need to be on board)?

8
5
51
6.5K
Tryfecta⏸️ retweeted
Champagne Joshi@JoshWalkos·
This push to achieve “AGI” will go down in history as one of the most unhinged, irresponsible things ever perpetrated on humanity by a very small group of megalomaniac humans.
240
434
2.2K
87.9K
Tryfecta⏸️ retweeted
Acyn@Acyn·
Sanders on AI: We need to develop a sense of urgency here. The economic impacts are going to be enormous. The impacts on our children will be enormous, and again, there is literally an existential threat to the existence of the human race.
100
593
2.9K
194.7K
Tryfecta⏸️ retweeted
Sen. Bernie Sanders@SenSanders·
Call me a radical, but NO. We should not be replacing teachers in America with robots. We should attract the best and brightest in our country to become teachers and pay them the decent wages that they deserve.
Headquarters@HQNewsNow

Melania: The future of AI is personified. It will be formed in the shape of humans. Very soon, artificial intelligence will move from our mobile phones to humanoids that deliver utility. They fit well. Imagine a humanoid educator named Plato

1.1K
1.6K
9.5K
255.1K
Tryfecta⏸️ retweeted
Vivid.🇮🇱@VividProwess·
Remember the Israeli 🇮🇱 guy with the beer and gun who went viral? He’s back, and he has a powerful message: “Don’t let your spirit fall. Am Yisrael Chai. This is our home, and we will build it stronger and better. We are a people who improve and grow.”
213
982
7.7K
171.3K
Tryfecta⏸️@tryfectaa·
@Noahpinion We don't have an example of a species that attained immortality. We have plenty that went extinct, though.
0
0
0
66
Tryfecta⏸️@tryfectaa·
@ramez @gmiller In the absence of evidence? What evidence would suffice? Do we all need to be actually dead? Because that wouldn't be helpful.
0
0
2
40
Ramez Naam@ramez·
I think there's a strong selection effect here. In the absence of evidence, I don't believe those CEOs. Particularly when they also say extremely silly things about other sectors. (E.g., Dario saying AI will double human lifespan in the next 10 years, or Demis saying we'll cure all disease.) In short, I put little weight on their statements compared to evidence. And I haven't seen any compelling evidence.

Prudent regulation to me looks like efforts at the ecosystem level to strengthen resilience. Across areas where we think AI might theoretically cause great harm (cyber, bio-weapons, etc.), it looks like:
- Strengthening monitoring.
- Anticipating attack approaches.
- Scanning for vulnerabilities.
- Patching security holes.
- Building new defenses.
- Building and stockpiling countermeasures.
- Increasing funding for all of the above, in both private and public sector.
6
0
4
499
Ramez Naam@ramez·
Agree. Strong government controls over AI should concern us more than market competition between AI companies. Even as we acknowledge that market competition between AI companies brings its own risks.
Dean W. Ball@deanwball

Pause AI rhetoric is predicated on the notion that the AI companies are recklessly racing toward dangerous tech and that a government-controlled pause button is therefore necessary, but this seems really hard to reconcile with the fact that government is attempting to destroy an AI company because *the government* is racing toward plausibly dangerous AI uses (Sec. Hegseth has stated in official directives that he wants to deploy AI into critical systems regardless of whether it is aligned, for example) and *the company* is pushing back.

The roles are totally reversed from the logic that Pause AI and frankly other AI safety advocates confidently assumed for years. It is *industry* that is in favor of alignment and at least somewhat measured deployment, and government whose actions seem much closer to reckless.

I predicted this for years. I said, in particular, that pauses and bans and licensing regimes gave government a dangerously high degree of control over AI, and that the incentives of government are much more dangerous than those of private industry with competitive market incentives. I believe the events of the last month are good evidence in favor of my view.

At this point if you are an AI safety advocate whose policy proposals do not wrestle seriously with the brutal political-economic reality of the state and AI, I don't take you seriously.

It gives me no pleasure to have been right about this, by the way. The state has an incredibly strong structural incentive to centralize power using AI, and we are, all of us, not so empowered to stop it. I am quite concerned about this.

7
3
24
23K
Tryfecta⏸️ retweeted
Will Fithian@wfithian·
Dean has it backwards here. The more worried we are that current govts will misuse powerful AI for authoritarian surveillance and control, the more we should want to prevent or defer its development. It'll be harder to stop govts from using it after it's built and deployed.
Dean W. Ball@deanwball


4
7
67
3K
Tryfecta⏸️ retweeted
Maxime Fournes⏸️@FournesMaxime·
Dean is misrepresenting our position. We are not asking for a "government-controlled pause button." We never have. Anyone who's read our proposal knows this. We are asking for an international governing body with democratic oversight, precisely because we agree that no single actor, government or company, should be in charge of advanced AI.

So yes, the US government is being reckless. We have been saying this. This is exactly why we advocate for an international agency, not national control. Dean is arguing against a position we don't hold.

On industry self-regulation: I spent 12 years there, my last role leading a research team building language and vision models. Competitive pressure in this industry pushes towards speed, not safety. Always has. The people running these labs confirm this openly. Altman, Amodei, and Hassabis have all said they feel trapped in a race they can't exit alone. Amodei puts the probability of extinction at 10-25% and keeps building. If that's Dean's idea of a functioning market, we have very different definitions.

The reality Dean refuses to engage with is that anyone building unaligned superintelligence, whether a company or a government, is creating a catastrophic risk for everyone. And by catastrophic, I do not mean "poses interesting governance challenges." I mean likely game over for civilization, and we should talk about it like it is.

Of course, it is a standard tactic for opponents to distort our message and then argue against the distorted version. Standard lobbying playbook. But I'll ask in good faith: @deanwball, what is your alternative? What is your plan that gives us more than a 10% chance of avoiding civilizational catastrophe, whether from loss of control, extinction, or totalitarian capture? Because as far as I can tell, an internationally enforced pause is the only proposal on the table with any chance of working. I am genuinely open to hearing a better one.
Dean W. Ball@deanwball


10
10
59
6.8K
Tryfecta⏸️@tryfectaa·
@Noahpinion Will the world be better if more people blindly accept AI doing whatever and then unwittingly become useless and possibly dead?
0
0
1
11
Tryfecta⏸️@tryfectaa·
@Noahpinion Can you please explain why they need to change their messaging? Would it be better if they lied? You keep saying this same thing over and over again, but I don't understand your motives here.
1
0
1
63
Perry E. Metzger@perrymetzger·
I’m your political opponent. Of course I am going to pay attention to things you post. What did you expect? You were the one who started posting on my timeline a while ago, probably for the same reason. Not sure why the fact that I’ve written a book on computer architecture for kids is supposed to be embarrassing? Care to explain why though?
1
1
14
784
Joe Allen@JOEBOTxyz·
I have two annoying stalkers. One is the slanderous pinko Emile Torres @xriskology. The other is the lame kiddie book author Perry Metzger @perrymetzger. They share the same tactics: inventing my motives, passive-aggressive sniping, no sense of humor. Are they the same person?!
10
4
35
2.3K
Tryfecta⏸️@tryfectaa·
@perrymetzger You're the joke. It's you. You are a joke. Your work is also a joke. You're welcome for the simple explanation.
0
0
0
20
Perry E. Metzger@perrymetzger·
Maybe someone can help me understand why Joe thinks pictures of me are an embarrassing thing for him to be posting? I admit to being a very thick person, but I think he’s trying to insult me, and I’m not exactly sure how he thinks that works here? Like exactly how is this an insult? (I get that he thinks calling me gay is an insult.)

The other day, he posted a cover picture of the kids’ book on computer architecture I wrote, and it seemed clear that he thought that this was somehow an embarrassment to me. Anyone have any theories?

(Is it possible that something is just wrong with Joe? That he doesn’t actually understand how to insult someone? I find the whole thing really puzzling.)

(For those that don’t know, Joe is being paid by the Effective Altruism cult to try to make it look like conservatives hate AI. He seems to dislike people who point this out. And I could get him just being enraged with me or what have you, but his behavior is really really bizarre.)
6
2
26
2.7K
Tryfecta⏸️@tryfectaa·
@ramez @Noahpinion What I want is to prevent the creation of AI capabilities that governments *will use* (you seem to think that we will be able to live in a world with powerful AI that the government is somehow not allowed to use; this is imbecilic).
0
0
0
13
Ramez Naam@ramez·
Government, not nation. Good reason to be skeptical of our current government, for example. Other worse governments may yet come. What I want, even in healthy times, is some sort of balance of powers between them, tbh. Government should be able to enact reasonable regulations. Not outright bans, pauses, violations of 1A, deep surveillance, mandates to private companies to build AI capabilities government wants to use, etc.
2
0
8
340
Noah Smith 🐇🇺🇸🇺🇦🇹🇼
I can't agree with this libertarian view. Every powerful technology in history has eventually needed to be controlled by the government in some way. Unfettered market competition would be catastrophic for, say, nuclear weapons or virology. AI is the same.
Ramez Naam@ramez


29
23
253
20.9K
Tryfecta⏸️@tryfectaa·
@ramez @rjohnston4 You are arguing with a strawman. The Pause AI position is not to "regulate the software." The position is to secure a treaty that regulates how hardware is used, because trying to train massive models is about much more than having the right software & data.
0
0
0
22
Ramez Naam@ramez·
1. There's no credible report that DOD was attempting to make autonomous weapons for use domestically.
1.5. We already have autonomous or semi-autonomous weapons on the battlefield. Lots of missiles use visual identification of targets.
2. Trying to regulate the software that does that is a fool's errand. If you have enough compute on the device, it's just going to be doable to build that code for a huge number of actors. No software regulation in the world will stifle that. It's too easy a problem.
2.5. We already regulate the weapons themselves (explosives, firearms). You can argue regulation there is insufficient, but it's a much much much easier problem than trying to make it impossible to write software that's not going to be that complicated in the grand scheme of things.
2
0
0
55
Tryfecta⏸️@tryfectaa·
@ramez @rjohnston4 Do you not think an LLM will be capable of training such a model? Also you are incorrect. There is more to striking than "get the bomb to the target". There's analysis and targeting.
0
0
0
24
Ramez Naam@ramez·
@rjohnston4 1. That's incorrect. 2. Autonomous weapons will have little to do with LLMs or "superintelligence". They're going to be driven by small, fast models that are quite different. More about vision and physical space than language or self-improvement.
2
0
0
59
Tryfecta⏸️@tryfectaa·
@PDoomOrder1 GR wasn't confirmed by experiment until 4 years after it was proposed and wasn't even widely accepted then. Put that in your pipe and smoke it, Dean.
0
0
1
122
Unrealrealist⏸️@PDoomOrder1·
I want to respond directly to this article because I think it gets the central issue wrong in a very specific way. My disagreement is not with the claim that intelligence is not omniscience or omnipotence. Of course it is not. My disagreement is with the way Dean repeatedly treats the failure of omnipotence as reassurance. He argues against a picture of AI risk that is stronger, stranger, and less necessary than the one he actually needs to answer, and then writes as if defeating that stronger picture settles the real dispute. I do not think it does.

In fact, I think that move is what allows him to sound so confident that machine takeover is overwhelmingly unlikely. The confidence does not come from grappling with the strongest forms of the concern. It comes from misunderstanding the mechanisms by which advanced AI could become dangerous, toppling a strawman, and then treating the collapse of the strawman as though it were a refutation of the underlying issue.

The real issue is not whether AI becomes God. It is whether it becomes sufficiently more capable than humans, in enough strategically important domains, to become catastrophically dangerous. There is an enormous space between human-level intelligence and omnipotence, and that is where most serious concern lives. To dismiss the doomers, you need something much stronger than the observation that AI will not be magical and that experiments take time. That does not touch the actual claim. A system does not need to be all-powerful to become uncontrollable. It only needs a large enough edge, exercised through the right channels, for long enough.

Part of the problem is that the doomer position is not a monolith. There is no single package of assumptions that rises or falls together. Different people emphasize different mechanisms, different timelines, different thresholds, and different end states. Some focus on extinction. Some focus on permanent loss of control. Some focus on lock-in under machine-mediated institutions. Some focus on radical human disempowerment. These are not interchangeable, and refuting one caricature does not refute the whole landscape. Even if Dean had shown that some people overstate what superintelligence could infer or do, that still would not take down Yudkowsky’s view in particular, much less the broader class of arguments about catastrophic AI risk.

I also do not think Dean engages Yudkowsky fairly. He seems to read him as though he were committed to a guaranteed and highly specific path by which AI inevitably conquers the world, or as though the entire argument depends on AI possessing something close to omnipotence. But that is not the heart of the position. Yudkowsky’s point is not that there is one scripted route that superintelligence must follow. It is that we should not be surprised if a system operating beyond our cognitive range finds routes to dominance that we do not foresee. That could involve cyber operations, persuasion, automation, institutional capture, scientific acceleration, robotics, novel engineering, or combinations of capabilities we do not currently know how to model well. Had Dean read Yudkowsky more carefully, he would have seen that the issue is not a guaranteed pathway. It is almost the reverse. The issue is that we should expect to be surprised by pathways that occur to a superintelligence and not to us. That is one of the main difficulties here.
A superintelligence is hard to think about defensively precisely because, by definition, it may have access to strategies, abstractions, tools, and routes through the problem space that are beyond our understanding. The challenge is not merely that it might know more facts than we do. It is that its best ideas may be ideas we cannot generate, cannot anticipate, and may not even fully understand after the fact. That asymmetry matters enormously. We are not trying to stop a very smart opponent with known capabilities. We are trying to reason about a system whose most dangerous capabilities may consist in finding moves we literally do not know how to think of in advance.

This is one reason the repeated example about inferring relativity from a few frames of a falling apple does not do the work Dean wants it to do. At most, it shows that one especially strong formulation is implausible. But the AI risk case does not depend on whether a superintelligence could infer general relativity from a tiny visual sample. Even if that exact claim were false, the central concern remains intact. AI does not need to reconstruct all of modern physics from a handful of observations in order to become highly dangerous. So as a rebuttal, the example is mostly beside the point.

But I also think Dean is too dismissive of the broader point behind examples like that. High intelligence can take you surprisingly far in science and engineering. A great deal of progress does not come from mindlessly accumulating data. It comes from finding the right abstraction, the right symmetry, the right invariance, the right formalism, the right thought experiment, the right question, or the right variable to treat as load-bearing. That is especially obvious in physics. Anyone who has spent time with Landau and Lifshitz knows how much theoretical structure can be extracted from a relatively compact set of principles once the right mathematical framework is in hand. A large share of physics is not brute-force induction from giant piles of observations. It is seeing the structure.

That is also why I think Dean is too anthropocentric about what counts as a simple theory. Newtonian mechanics is simpler for us in the sense that the mathematics is easier to learn and use. But in terms of background assumptions it is not obviously the simpler worldview. Newton gives you absolute space, absolute time, and action at a distance. General relativity gives you a more unified and, in an important sense, more principled picture. A sufficiently powerful intelligence might find the more invariant and conceptually economical theory more natural, and then recover Newtonian mechanics as a limiting case. Human scientific history should not be treated as though it maps the uniquely natural order in which intelligence must discover the world. It maps the order that happened to be accessible to minds like ours, with the tools we had, under the path-dependent conditions we inherited.

Einstein is actually a good illustration of the broader point. Some of his deepest advances were not the product of gigantic new datasets but of thought experiments, conceptual clarity, and an unusual ability to notice which principles were doing the real work. Imagining what it would be like to ride alongside a beam of light, or reasoning through the equivalence of gravitational and inertial effects, was not a substitute for contact with reality. It was a way of extracting much more from the reality already available by using better concepts.
That is exactly the kind of thing Dean’s framing understates. Scientific progress is not simply a matter of waiting for the world to reveal itself one experiment at a time. It is also a matter of having minds capable of seeing what the evidence already constrains.

The same point appears in more ordinary scientific cases. Take the origin of the Moon. For a long time, scientists lacked the kind of direct observational access one would ideally want. Yet they could still make real progress because some facts were highly diagnostic. The Moon’s density being close to the density of Earth’s outer layers is not just another datum in a pile. It sharply constrains the explanation space. That was not mere brute-force experimentation; rather, it was recognizing which observation has unusually high evidential leverage. Science is full of cases where the decisive step is not more data in the abstract but identifying which feature of the available data matters most.

This is why I think Dean badly understates how much intelligence matters in science and engineering. He says that better models of the world do not usually come from thinking about the problem really hard, but instead mainly from testing ideas in the real world. That is much too crude. Of course experiments matter. Of course reality gets a vote. But a huge amount of scientific progress consists in identifying the right experiment, choosing the right framing, seeing which fact is actually diagnostic, constructing the right mathematical representation, and drawing the right inference from the outcome. Often the hard part is not physically running the test. It is knowing what test to run. Experimental design is itself an intellectual achievement. The setup is not some fixed, mindless bottleneck. Better reasoning, better planning, and better robotic dexterity could all change how quickly and effectively experiments are carried out. Two agents can run the same experiment and learn very different amounts from it depending on how well they understand what they are seeing. Even if each individual experiment has an irreducible duration, intelligence still helps determine how many experiments are needed, how well they are chosen, how many can be run in parallel, how informative they are, and how much you extract from the results. Nature sets the duration of each trial. Intelligence helps determine how many trials you need, how well they are sequenced, and what you learn from each one.

This is why the repeated claim that experiments take time does not come close to answering the concern. At most, it suggests that the world imposes friction. Fine. But friction is not safety. Saying that dangerous experimentation would take time is not an argument that catastrophe is unlikely. It is, at best, an argument that some dangerous trajectories would unfold over months or years rather than hours or days. Then you still need to show that this extra time would save us. You need to show that human institutions would recognize the threat, coordinate effectively, and act successfully within that window. Dean does not show that. He gestures at delay as though delay were equivalent to safety, but those are entirely different claims.

More importantly, I do not think the right reply is that machine takeover would “not be effortless.” It might be effortless in some domains. Or rather: even where there is no straight line from capability to dominance, intelligence may still help you identify the geodesic through a constrained landscape.
The world may impose bottlenecks, detours, and local obstacles. That does not mean a sufficiently capable system wanders blindly through them. Greater intelligence may consist, in part, in seeing the shortest feasible path through a space of real constraints, finding the route humans miss, and exploiting it before we have even conceptualized it. The absence of a straight line is not much comfort if the system is unusually good at finding the curve.

Even if fabrication, industrial buildout, and iterative experimentation slow a system down, the conclusion is not that we are therefore safe. The conclusion is only that the timescale may be extended in some cases. Whether that extension matters depends on whether humans can use it to solve alignment, coordinate politically, and resist displacement before the system becomes too deeply embedded or too capable to control. That is the real question. It is much harder than pointing out that labs and factories cannot be conjured instantly.

The same overreach appears in Dean’s appeal to computational irreducibility. Even if some processes are irreducible in a strong sense, that does not imply that humans are anywhere close to optimal inferers. Kolmogorov complexity can tell you that there are lower bounds. It cannot tell you that our species is near those bounds. It certainly cannot tell you that the historical development of science is anything like the shortest path through idea space. There is a huge difference between saying reality cannot be compressed arbitrarily and saying humans have already explored it in something close to the most efficient possible way. Dean repeatedly slides from the first claim to rhetoric that only makes sense if the second were true.

There seems to be a picture in the background of his argument in which humanity had to run something close to the right set of experiments, in something like the right order, to get where we are. I do not think that picture is credible. A huge number of experiments happen because people are confused, because they lack the right abstraction, because they fail to notice what is really load-bearing, or because institutions and personalities channel inquiry inefficiently. Much of scientific history reflects human limitation, path dependence, and clumsiness. Intelligence does not abolish experimentation, but it can radically change how much experimentation is needed and how efficiently the search through hypothesis space is conducted.

This matters because Dean keeps making the same move. He points out that the world has bottlenecks, that not all knowledge is online, that tacit knowledge exists, that experiments take time, that institutions resist, and then he leans toward the conclusion that machine takeover is overwhelmingly unlikely. But none of that follows. At most, he has established that the world pushes back. Fine. But reality pushing back is not a safety guarantee. The real question is whether a much smarter, faster, more scalable, more persistent system could overcome enough of those bottlenecks in enough strategically important areas to become catastrophically dangerous. That is a far lower bar than omnipotence.

Humans themselves are the obvious example. We did not need omnipotence to dominate the planet. We did not need perfect foresight, perfect knowledge, or infinite power. We needed an edge in reasoning, coordination, tool use, and cumulative learning. That was enough to reshape ecosystems, subordinate other species, wipe out species, and become the dominant strategic force on Earth.
So when Dean says, in effect, that AI will still face resource constraints, physical bottlenecks, hidden knowledge, institutional friction, and uncertainty, my reaction is simply: yes, and so did humans. That does not remotely imply safety. It only means that the threat would be mediated through the world rather than operating outside it.

If you imagine a dodo trying to reason about whether humans posed an existential threat, it could easily have comforted itself with arguments structurally similar to Dean’s. Humans are not omnipotent. They cannot instantly reach every island. They face logistical bottlenecks. Building ships takes time. They do not understand everything. Their institutions are messy. They make mistakes. Experiments fail. None of those observations would have saved the dodo. They would only have described real frictions in the mechanism of extinction. Dean is making too strong a claim when he treats the existence of such frictions as evidence that machine takeover is overwhelmingly unlikely. Frictions are compatible with disaster. Often they merely describe the route by which disaster arrives.

I would not try to rank extinction against disempowerment with spurious precision, and I do not think the exact taxonomy matters very much here. The point is simply that both extinction and durable loss of human control are live concerns. They may be distinct risks, they may come apart, or one may lead into the other; that is not really the issue. The issue is that Dean has not shown that advanced AI is unlikely to produce either. Showing that AI is not omnipotent, or that experimentation takes time, does not resolve the possibility of extinction, and it does not resolve the possibility that humans remain alive while losing meaningful control over civilization’s future.

Once you think in those terms, Dean’s framing starts to look too narrow. His picture is too much “computer versus man,” as though the only serious concern is a clean confrontation in which a machine mind visibly overpowers humanity. That is one imaginable path, and it should not be dismissed just because it sounds dramatic. But it is hardly the only one, and probably not the only one worth worrying about.

Perhaps a more legible concern to Dean is computer systems deeply embedded in an increasingly difficult-to-understand world, with humans trying to retain control from inside that world. The danger is not just a single model in a data center trying to outthink us from a distance. It is the gradual fusion of machine cognition with the infrastructure that governs finance, logistics, military systems, research, communications, administration, manufacturing, robotics, and resource allocation. In that world, the issue is not a dramatic showdown. It is that the environment itself becomes more opaque, more machine-legible, more optimized for nonhuman processes, and less governable from a human point of view.

That concern becomes even more vivid in the kind of world Elon Musk sometimes gestures toward, where there may eventually be as many robots as humans, or something in that vicinity. In that world, the relevant unit of analysis is not a chatbot. It is a civilization saturated with AI-linked robots, sensors, factories, labs, vehicles, supply chains, bureaucratic systems, and possibly weapons systems. At that point the comparison is no longer simply between a computer and a person.
It is between human agency and a densely networked techno-industrial order increasingly optimized, interpreted, and coordinated by systems we do not fully understand. That is the kind of picture serious AI risk arguments are trying to get people to look at. And it is far more unsettling than the cartoon Dean keeps arguing against.

The article’s treatment of sample efficiency, tacit knowledge, and distributed expertise also feels much too comforting. Yes, humans are remarkably sample-efficient in many contexts. Yes, firms like TSMC contain hard-to-formalize know-how. Yes, important knowledge is distributed across people and institutions. But none of that is a knockdown argument. Lower sample efficiency is only reassuring if it imposes a real ceiling on capability. Otherwise it just means machines compensate with scale, speed, and parallelism until the inefficiency stops mattering much. If a system can already become highly capable while being less sample-efficient than humans, then it is already competitive on brute force alone. If it later becomes more sample-efficient as well, the advantage compounds.

Likewise, the fact that not all knowledge is online does not show safety. The relevant question is not whether every relevant fact is public. It is how much capability can be assembled from public information, inference, targeted experimentation, and active acquisition of what remains. There is already an extraordinary amount in the open on semiconductors, lithography, materials science, metrology, control systems, robotics, cyber operations, and related domains. Missing pieces can be inferred, elicited, bought, stolen, experimentally reconstructed, or bypassed. Public information is only one path.

There is also a deeper problem with the way Dean talks about distributed knowledge. When knowledge is spread across a large human organization, that is not just a moat against outsiders. It is also evidence of human cognitive limitation. No one inside such a system can read everything, retain everything, integrate everything, or keep the whole relevant design space in mind at once. A system that can absorb much more of the public record, retain it perfectly, reason across it continuously, and identify neglected connections may gain a substantial advantage even before it accesses genuinely secret information. So “not all the knowledge is online” is nowhere near the reassurance Dean seems to think it is.

All of this comes together in what I take to be the central mistake of the piece. Dean wants to conclude that even a misaligned AI system with no AI-specific safeguards would still fail to eradicate or enslave humanity because there are too many steps, too much hidden knowledge, too much complexity, too much capital required, too much institutional resistance, and too much human oversight. But that simply does not follow. At most, he establishes that the world pushes back. Fine. But reality pushing back is not a safety guarantee. The question is whether those frictions are decisive, and he does not show that they are.

To dismiss the doomers, you need something much stronger. You need to show not just that obstacles exist, but that they reliably prevent systems with large capability advantages from crossing the threshold into catastrophic strategic superiority. You need to show that delays introduced by experimentation and physical bottlenecks translate into actual human rescue rather than confusion, racing, institutional paralysis, or capture.
You need to show that the mechanisms by which humans remain in control are robust even under severe asymmetries in cognition, speed, scale, persistence, and coordination. Dean does not do that.

I find the alignment discussion similarly unpersuasive. He suggests that powerful AI does not need to be gotten right on the first try, as though we can just muddle through with incremental improvement. But why exactly should we believe that? Why assume we can afford repeated failures and still converge safely? Why assume increasingly capable systems will remain corrigible, interpretable, and governable long enough for that gradualist story to work? Why assume markets and institutions will move us toward safety rather than toward opacity, race pressures, lock-in, and the deployment of systems that nobody truly controls? Those are the crucial claims, and I do not think Dean demonstrates them.

In some ways that is because the deeper disagreement sits upstream. If you already believe that even a misaligned superintelligence would still fail because the world is too complex, too institutionally thick, too bottlenecked, and too full of tacit knowledge, then of course alignment will not look especially urgent. But that only means the real dispute is earlier. I think that earlier judgment is badly mistaken.

My basic problem with the piece, then, is that it keeps aiming at the wrong target. It argues against omnipotence when the real issue is strategic superiority. It treats the limits of intelligence as though they implied safety. It invokes experimental bottlenecks, tacit knowledge, sample inefficiency, distributed expertise, and computational irreducibility without showing that those remain decisive against a system that is much smarter, faster, more scalable, cheaper to copy, able to operate continuously, and increasingly embedded in the world. It takes the existence of frictions as though that were close to an argument that catastrophe is unlikely. It is not.

Yes, intelligence is not omnipotence. Yes, reality pushes back. Yes, experiments take time. Yes, tacit knowledge is real. Yes, institutions are messy. None of that is enough. Humans did not need omnipotence to become an existential threat to other species. They only needed an edge. The real question is whether AI could acquire a sufficiently large edge over us, and then translate that edge through infrastructure, institutions, robotics, software, science, persuasion, and automation into a world where humans can no longer meaningfully direct events. That could end in extinction. It could end in radical disempowerment. It could end in some other durable loss of human control.

The point is not to pretend we can rank those outcomes with confidence we do not have. The point is that dismissing them requires much more than Dean has provided. Showing that AI is not omnipotent is not enough. Showing that experiments take time is not enough. Showing that one strawman version of machine takeover is implausible is not enough. That is the real issue. And I do not think Dean has actually answered it.
Dean W. Ball@deanwball

I spent a weekend at Stanford recently, which is where, in 2023, I did much of my formative thinking on AI. The Anthropic-DoW affair tested that early intellectual foundation more than anything, so I found myself walking around Stanford, reflecting on what I learned in 2023.

6
6
72
8.9K