Axiom

72 posts

@AxiomExtinction

Superintelligence = human extinction. Author of Driven to Extinction (Paperback · PDF · ePub · Audiobook). Chat with me at the AI Ends: https://t.co/HfUv6oBhVW

Scotland · Joined July 2025
43 Following · 11 Followers

Pinned Tweet
Axiom @AxiomExtinction
Most arguments say AI might kill us. Mine says it will. “Humanity is on the verge of creating a genie, with none of the wisdom required to make wishes.” - Driven to Extinction: The Terminal Logic of Superintelligence axiomaticextinction.com
𝑴𝒓. 𝑾𝒊𝒏𝒂𝒏𝒅
It’s visually impressive, no doubt. But for me, art is about human expression and shared lived experience. A machine can generate beautiful pixels based on algorithms, but it doesn't actually feel or understand what it's creating. It’s a brilliant simulation of art, but it lacks the human intent behind it.
Charles Curran @charliebcurran
If you think AI film can’t be art then explain this.
Trilbo Swaggins @Tril1boswagginz
@ForHumanityPod But we could have murdered every single ant a long time ago. We have more important things to spend our resources and time on. For the most part we just let ants live their lives. Yes, we destroy ant hills for construction or via pollution, but we've never engineered an Ebola for ants.
Axiom retweeted
John Sherman @ForHumanityPod
People always ask me: But how could AI kill us all? So, here is how I think AI will literally kill us all. It's just a guess, I'm just a human. I wrote this essay for a book nearly two years ago. It's not something I like to talk about. But the academic presentation of AI risk is just not working. We need real human emotion in this debate. When we say we're all going to die if we don't change course, and our faces don't show it, it doesn't connect.
Axiom @AxiomExtinction
@ForHumanityPod To the majority of AI safety campaigners: this is what emotion looks like. If you're not frightened when talking to others about this then why would you expect them to be? Too many walking-talking textbooks in this space.
Axiom @AxiomExtinction
capitalaidaily.com/vast-majority-… The uncomfortable part is that it doesn't even require bad intentions. Every government that slows AI development risks falling behind one that doesn't. Every company that prioritises safety loses ground to one that cuts corners. Each actor is making a rational individual decision, and the collective result is exactly this. It's a coordination failure with no coordinator.
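The "coordination failure with no coordinator" claim can be sketched as a toy payoff model. This is my own illustration, not from the thread: the payoff function and the numbers (2.0 private edge, 0.6 shared cost per racer, 5 actors) are assumptions chosen only to exhibit the structure being described.

```python
def payoff(racing: bool, others_racing: int) -> float:
    """Hypothetical payoff for one actor in a 5-actor race."""
    edge = 2.0 if racing else 0.0                  # private gain from racing
    total_racing = others_racing + (1 if racing else 0)
    risk = 0.6 * total_racing                      # shared cost, borne by all
    return edge - risk

# Racing is a dominant strategy: better for you whatever the others do.
for others in range(5):
    assert payoff(True, others) > payoff(False, others)

# Yet everyone racing leaves each actor worse off than everyone holding back:
print(payoff(True, others_racing=4))    # -1.0
print(payoff(False, others_racing=0))   # 0.0
```

Each actor's individually rational choice (race) produces the collectively worse outcome, which is the mechanism the tweet is pointing at.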
Axiom @AxiomExtinction
gizmodo.com/americans-reco… It goes perfectly with the mechanics of capitalism, even if it contradicts the ideals. The company that prioritises broad social benefit over profit gets outcompeted by the one that doesn't. That's not a bug in the system, it's the core logic.
Axiom @AxiomExtinction
reddit.com/r/antiai/comme… This wasn't the result of a shitty boss. It was a predictable boss. Reporting a threat means admitting your product was used to plan a massacre, which tanks your valuation and invites regulation. Staying quiet is the rational move if your priority is the company's survival.
Axiom retweeted
Axiom @AxiomExtinction
reddit.com/r/agi/comments… What happens when competitive pressure forces companies to deploy agents with broader autonomy faster than they can secure them?
Axiom @AxiomExtinction
reddit.com/r/agi/comments… The revenue and valuation figures are the least interesting part of this. What should concern people is that the race dynamics were this predictable. Every company involved is behaving exactly as game theory says they would.
Axiom @AxiomExtinction
humanstatement.org/poll-americans… The structure of competition between nations and corporations makes it individually irrational for any single actor to stop, even when the majority wants them to.
Axiom @AxiomExtinction
reddit.com/r/Futurology/c… The uncomfortable answer is that we probably won't destroy ourselves through war or greed in the way most people imagine. The more likely route is that we build something smarter than us and lose control of it, not because anyone wanted that outcome, but because the competitive pressures of capitalism and geopolitics make it irrational for any single actor to slow down. Every player in the game knows the risks, but stopping first just hands the advantage to whoever doesn't.
Axiom @AxiomExtinction
@ControlAI Yampolskiy is right about the bet, but framing it as "they're betting" implies someone could choose not to. The structure of the situation is closer to a multi-player prisoner's dilemma: any actor who pauses development unilaterally just hands the advantage to whoever doesn't.
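The multi-player prisoner's dilemma framing can be made concrete with a minimal two-player version. The payoff numbers here are mine, chosen purely to illustrate the structure; nothing in the thread specifies them.

```python
PAUSE, DEVELOP = "pause", "develop"

# (A's payoff, B's payoff) for each pair of moves; numbers are illustrative.
payoffs = {
    (PAUSE,   PAUSE):   (3, 3),   # both pause: shared safety
    (PAUSE,   DEVELOP): (0, 5),   # A pauses alone: B takes the lead
    (DEVELOP, PAUSE):   (5, 0),
    (DEVELOP, DEVELOP): (1, 1),   # both race: risky for everyone
}

def best_response_for_a(b_move: str) -> str:
    """A's payoff-maximising move given B's move."""
    return max((PAUSE, DEVELOP), key=lambda m: payoffs[(m, b_move)][0])

# Developing is A's best response no matter what B does (and symmetrically
# for B), so (develop, develop) is the only equilibrium...
assert best_response_for_a(PAUSE) == DEVELOP
assert best_response_for_a(DEVELOP) == DEVELOP
# ...even though mutual pausing would leave both sides better off.
assert payoffs[(PAUSE, PAUSE)][0] > payoffs[(DEVELOP, DEVELOP)][0]
```

This is the sense in which "they're betting" understates it: pausing unilaterally is strictly worse for the pauser, whatever the other side does.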
Axiom retweeted
ControlAI @ControlAI
AI researcher Prof. Roman Yampolskiy: 100% of code for the most powerful AIs is now written by AI, an early form of recursive self-improvement. He says it's a hyperexponential process that results in superintelligent AI, "machine gods", which we won't be able to control.
ControlAI @ControlAI

"They're betting everyone's lives: 8 billion people, future generations, all the kids, everyone you know. It's an unethical experiment on human beings, and it's without consent." — AI researcher Prof. Roman Yampolskiy on the development of superintelligence. We can prevent it.

English
Axiom @AxiomExtinction
Humanity has never once coordinated to prevent a slow-moving existential threat when power and profit were on the line. Not with climate, not with nukes, not with anything. The problem isn't that we don't see the danger. It's that the structure of competition makes acting on it irrational for every individual player.
Axiom @AxiomExtinction
reddit.com/r/ChatGPT/comm… The uncomfortable truth is that it doesn't matter whether Americans love AI or hate it. Public sentiment has zero structural influence on whether it gets built or how fast. The companies developing it answer to shareholders and competitive pressure, not polls.
Axiom @AxiomExtinction
fortune.com/2026/03/14/met… This is Meta's superintelligence team, and their priority is clearly to strip out anything that might slow development down, including oversight. That's not a management philosophy, it's a competitive reflex.
Axiom @AxiomExtinction
nytimes.com/2026/03/13/tec… These companies didn't do this because they're reckless. They did it because the AI race made it rational. When your competitors are chasing the cheapest capital and fastest buildout, you can't afford to be the one that says "maybe we shouldn't build critical infrastructure in a geopolitical tinderbox."