Big Brain AI @realBigBrainAI
Sam Altman, CEO of OpenAI, outlined three ways AI could go wrong. The first two are the ones most people already argue about online. The third one is different (and it's the scariest!)
The first category is familiar: a government, a rogue group, or any bad actor reaching superintelligence before the rest of the world has a version capable of defending against it.
He doesn't dismiss it:
"The bio capability of these models, the cybersecurity capability of these models, these are getting quite significant."
His warning: "I think the world is not taking us seriously."
The second is the classic sci-fi scenario of AI resisting shutdown.
"The AI is like, oh, I don't actually want you to turn me off. I'm afraid I can't do that."
He acknowledges it. He says there's significant work being done to prevent it. But it's the one category that still fits neatly on a movie poster.
The third is harder to see coming: AI that doesn't rebel, but simply becomes so embedded in society that humans stop making meaningful decisions without it.
This is the one that keeps him up at night. The mechanism is simpler and more believable than any sci-fi plot: society becomes reliant on systems smarter than us and quietly hands over the wheel without ever deciding to.
"The models kind of accidentally take over the world. They never wake up. They never do the sci-fi thing. They never open the pod bay doors."
He calls it "loss of control."
He's already seeing the early version of it: young people outsourcing every personal decision to ChatGPT, not just factual questions but what to do, who to trust, and how to feel.
"There's young people who just say, like, I can't make any decision in my life without telling ChatGPT everything that's going on."
Relying on a machine to run your life isn't a convenience; it's agency you've already given away.
And then he takes it somewhere harder.
"What if AI gets so smart that the president of the United States cannot do better than following ChatGPT's recommendation?"
In any individual case, following the smarter system might be the right call.
That's the trap.
"Society has collectively transitioned a significant part of decision making to this very powerful system that is learning from us, improving with us, evolving with us, but in ways we don't totally understand."
Millions of small choices, each one sensible on its own, compound into something irreversible, until the world can no longer stand without the thing it's leaning on.
The greatest risk from AI is trusting it so completely that we forget how to think for ourselves.