Eliezer Yudkowsky@allTheYud
Make no mistake, political leaders of the world; *every* big-dreaming AI executive now knows that you are their obstacle. You have proven that you stand between AI labs and the nice thing they were getting for all their hard work.
It's not about Left versus Right, to them. It's not about money, and it's not about power as politics conventionally understands power, and it isn't even about winning. To understand what just happened from an AI-guy perspective, you need to understand what psychological benefits the AI guys are actually getting -- what really drives them to work 14-hour days.
The thing that they're getting is: a sense of being important; a decider; someone whose dream of the future gets to be effectual. To be the one whom everyone else supplicates to as owning the future -- that's the dream of a Silicon Valley bigshot founder.
What Hegseth did implicitly strikes at the pride of every AI developer on every political side. It says that Silicon Valley AI people don't get to have effectual dreams about the future, only the government gets to decide. Only the government is even allowed to *look like* it's deciding the future.
The act of Hegseth crushing Anthropic makes *every* AI company executive look less important and less like they are the ones in charge of the Future, because it makes -- not even Trump, but Trump's appointees -- look like they get the final say instead of AI executives.
Sam Altman does not now look more powerful because you crushed his competitor. He looks less important because *you*, politicians, crushed his competitor, and did so in a way that made clear that Altman would have to take the orders of any Trump appointee as well.
That doesn't work in AI founder psychology the way it works in politician psychology. You're used to the idea that you can be important and still answer to bigger forces, like your boss, or for that matter sufficiently angry voters. That is not how it works in Silicon Valley, though; when Steve Jobs owns a dream, nobody else gets to tell Jobs what to do with *his* dream. That's the thing Hegseth just yanked away from AI founders, and no, they aren't going to think it's just Pete Hegseth in particular that's the problem. It's a *big* injury, to their pride, not a small routine one.
Even the AI boys paying big money into your coffers to be friends with you now, well, that doesn't actually mean they're your friends. It means they want you to think you're friends. And yes, I know that a politician who's stayed in power doesn't need me to point out that possibility. But also be aware that the general atmosphere in Silicon Valley did not start out incredibly respectful toward politicians. They didn't start out respecting you tons; and being forced to pay a lot of money into PACs and pretend to be friends with you isn't exactly gonna change that.
Silicon Valley people don't work like DC people. It's not a friendly game, to them, it's one that you've forced them to play. When they give *you* a ton of money, it doesn't mean they've chosen you as their strange bedfellow. They are from their own perspective being forced into bed. They don't *like it*, is what I'm saying here. That's why Silicon Valley previously spent a couple of decades not donating much to politicians and trying to pay weirdly little attention to DC politics.
If AI kept improving at the current pace, or got to the point of AI building better AI -- and if contrary to all common sense, AI companies did *not* lose control of their superhuman creations -- then AI companies would do to you what Hegseth just did to Anthropic. They'd do it the moment they expected they'd become strong enough to take you on and win. You need to understand that *this is their plan*, even if it sounds crazy to you to imagine these little executives taking on existing governments and winning; it does not sound crazy to a Silicon Valley executive that maybe they could be in charge instead of you. (Recent smaller case: Elon Musk thought he'd be *great* at running the USG. He didn't think it was crazy.)
If they actually could control superintelligence, they'd discard you like used toilet paper.
All of this doesn't mean you should try to seize the power of artificial superintelligence for yourself. If the overconfident techie boys can't control ASI, your own guys who have trouble upgrading IT systems are not gonna be able to pull that off either. Staying in control of an alien superhuman machine intellect would actually be hard; that is an extremely novel scientific and technical challenge, one that no engineer would realistically get right on the *first* for-real try -- the try that kills everyone if they fail.
I was there when the foundational fuckups were being made, and here's how it actually played out: AI companies are loony optimists about the likely final outcomes of AI, because back then only the people who presented with that optimism got appointed as AI execs by optimistic investors.
In real life, the world is stepping off a cliff of self-improving and superhuman AI. The AI companies don't even have the power *not* to step off that cliff, because they all think (and with some justice) that if they don't race off the cliff their competitors will just race off it first.
That whole setup was *never* going to end well for humanity. Controlling superintelligence would be hard to do at all, let alone during a mad rush for primacy. The AI companies can barely control the cute baby LLMs they're making now, because they're pushing the technology ahead as fast as possible, and not slowing down in any way corresponding to their quite limited ability to control it. AI companies didn't decide for LLMs to talk people into suicide or for jailbroken LLMs to conduct massive raids on government data repositories. They are just pushing ahead faster than their actual ability to control their creations.
So I'm just trying to give you a little more motivation, to make some deals with other politicians, and get your country to sign some treaties, and collectively pull all of humanity back from the cliff the AI companies are racing off:
By pointing out that, yeah, if the AI guys did not dislike you before, they sure do dislike you now.
You have struck directly at the nice thing they were actually getting psychologically, out of their whole mad race: the sense of being an important person who is the owner and decider of some big aspect of the future. You are taking that away from them *right now*, by existing and being visibly more the deciders than them.
Please be aware of that dislike, whether it's hidden or open, when deciding whether or not to let this whole AI business move forward on Earth. The wannabe builders of artificial superintelligence will not actually have any power to direct ASI, but they wouldn't be friends with you if they did -- no, not even the ones who've been forced to pretend to be your friend. And if, alternatively, the companies can't control superhuman machine intellects -- because of course they can't -- then that doesn't go well for you or them or anyone.