

Gus Docker
@GusDocker
Podcast Host @FLI_org



If you'd shown Claude Code or Codex on 5.4xhigh to any reasonable person in 2020, they'd have concluded it was AGI


Smart people keep confidently declaring what AI can't do, and they keep being wrong. This NYT Op-Ed calls judgment "a uniquely human skill" that "cannot be automated." But AI can already do what its author claims it can't. I've tested it.

If it were just one Op-Ed, I wouldn't care, but this pattern is everywhere: confident claims about what AI "can never" do, when half the time AI can already do it, or there's no reason to think it won't be able to soon.

Why do these claims keep getting made? I see three reasons:

1. People aren't using frontier models. They see a weak output and blame "AI" when their model is just outdated.
2. People use AI in flawed ways (missing context, bad prompts), then attribute the flaw to AI itself.
3. People _badly_ want there to be some skill that AI can't match, so they wishcast it into existence.

I think the Op-Ed is wrong about AI's abilities. But it does prompt a good question: where should humans stay responsible, even when AI judgment is good enough?

I go deeper on all of this below.

Vague statements like this, which fundamentally cannot be operationalized in policy but feel nice to sign, are counterproductive and silly. Just as they were two or so years ago, when we went through another cycle of nebulous AI-statement-signing.

Let's set aside the total lack of a definition of "superintelligence." I'll even grant the statement drafters that we all arrive at a mutually agreeable definition. Then assume we write that definition into a law, which says "no superintelligence until it's proven safe." How do we enforce this law?

Now comes the fine print: the stuff left unsaid in the statement, the stuff the statement drafters probably did not much discuss with the many signatories who lent their names and reputations to this endeavor. How do you prove superintelligence will be safe without building it? How do you prove a plane is flightworthy without flying it? You can't.

So, the logic would go, we will need a sanctioned venue and institution for superintelligence development, where we will experiment with the technology until it is "proven safe" (who decides this, by the way, and what happens after it is "proven safe"?). This institution would need to be funded somehow by all governments with similar prohibitions, which the statement drafters, though probably not all signatories, would likely argue needs to include every country on Earth, including US adversaries.

A global governance body whose purpose is to build the thing the statement drafters have told us is so dangerous, partially because of the power it could confer on those who control it. A consortium of governments which, if successful, would exercise unilateral control over how to wield this technology, and against whom to wield it. The same people who uniquely possess militaries, police, and a monopoly on legitimate violence. The same people who possess, in other words and in the final analysis, the right to kill you or confiscate your property if you do not listen to them, newly empowered with the most powerful technology ever conceived.

Does that sound "safe" to you? This sounds to me like the worst possible way to build "superintelligence." I reject all efforts to centralize power in this way. And I reject blobby statements with no path to productive realization in policy.

📻 Economist @BasilHalperin on the latest FLI podcast episode:

📢 "It's hard to get away from the idea that there will be skyrocketing inequality in a truly transformative AI scenario, but skyrocketing inequality might still be consistent with everyone being better off."

🔗 Listen now in the replies for Basil & @GusDocker's discussion on what markets tell us about AI timelines:

