Ben Zweibelson
@BZweibelson

2.5K posts

Author, Director for Strategic Innovation at US Space Command.

Colorado Springs, CO · Joined May 2015
703 Following · 948 Followers
Ben Zweibelson @BZweibelson
The first chapter of Bad War Stories is now available at Substack, with links to a brand-new book review and to where you can pick up a copy. Bad War Stories is shipping worldwide. Get a sample of the content here: open.substack.com/pub/zweibelson…
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 44

Ben Zweibelson @BZweibelson
Bad War Stories ships TODAY from Amazon after a few logistical delays. They also have it on sale, and coincidentally the first book review was published today too. Dr. Aaron "Jacko" Jackson, who teaches at the Swedish Defence University in Stockholm and is also an Australian Defence Force Army reservist, read an advance copy of BWS and decided to write a unique book review in a similar fashion (third person, chronologically out of order, etc.). It is available here: cove.army.gov.au/article/unconv… The book is available on Amazon for immediate shipment here, at 36% off: amazon.com/Bad-War-Storie…
Replies: 0 · Reposts: 1 · Likes: 1 · Views: 138

Ben Zweibelson @BZweibelson
'Reconceptualizing War' is on Amazon at its lowest price ever, with free shipping! If you have not picked up a copy and are interested in how social paradigm framing with war philosophy can establish various 'war paradigms' between adversaries, this may be the book for you. I spend multiple chapters on Clausewitz, Jomini, and Western modern warfighting... and then shift to how Marxist war thinking works in incommensurate ways on the battlefields of the 20th century and today (with Social Marxism, or 'wokefare', becoming highly relevant). Chapters also dive into China and Sino-Marxism and the Ideological Doomsday movements (ISIS, Hamas, and many more), along with how anarchists make strange bedfellows with modern terrorists, environmental extremists, hacking groups, and Marxist activists promoting violence. amazon.com/Reconceptualiz…
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 52

Ben Zweibelson @BZweibelson
I asked Grok4 to summarize in 300 words why readers should invest time in reading my new article, "Strong Artificial Intelligence: The Future of War and the Arrival of Techno-Eschatology" at MCUP. Here was the answer: "Ben Zweibelson's article, "Strong Artificial Intelligence: The Future of War and the Arrival of Techno-Eschatology," published in Expeditions with MCUP (February 2026), is essential reading for policymakers, leaders, and military professionals amid AI's rapid evolution. It compellingly argues that artificial general intelligence (AGI) could emerge within a decade, surpassing human cognition and revolutionizing warfare beyond traditional Clausewitzian paradigms. Zweibelson introduces "techno-eschatology," framing AGI as a singularity blending apocalyptic risks with redemptive potential, potentially rendering human forces obsolete. He warns of "phantasmal wars"—non-kinetic conflicts eroding shared reality via deepfakes, cyberattacks, and epistemological assaults—eclipsing nuclear threats as existential dangers. The piece categorizes AGI views into "boomers" (deniers), "doomers" (pessimists foreseeing extinction), and "groomers" (optimists advocating ethical development), urging a balanced approach. For policymakers and leaders, it advocates a "Manhattan AI Project" for the West to counter authoritarian rivals like China, with strategies like containment, nonproliferation treaties, and sanctions. Military professionals gain insights into adapting doctrines for deceptive, self-improving AI, preparing for an "AI Cold War" and the "singleton paradox" of monopolized power. Reading it invests time wisely by fostering urgency without alarmism, drawing on experts like Bostrom and Amodei. It catalyzes interdisciplinary thinking, prompting doctrinal shifts and ethical safeguards to navigate AGI's disruptions—ensuring security in a techno-transformed world." Link is here, no paywall: usmcu.edu/outreach/Marin…
Replies: 0 · Reposts: 0 · Likes: 2 · Views: 116

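The workflow in the tweet above, asking an LLM for a fixed-length summary, is easy to script. Below is a minimal sketch assuming a generic chat client; the ask_llm() stub and the word-count check are illustrative additions, not anything from the tweet or from Grok's actual API.

```python
# Minimal sketch of the "summarize in ~300 words" ask from the tweet.
# ask_llm() is a hypothetical stand-in for whatever chat client you use;
# only the word-count check actually runs as written.

def within_word_budget(text: str, target: int = 300, tol: float = 0.10) -> bool:
    """True if the reply lands within +/-10% of the requested word count."""
    words = len(text.split())
    return abs(words - target) <= target * tol

prompt = (
    "Summarize in 300 words why readers should invest time in reading "
    '"Strong Artificial Intelligence: The Future of War and the Arrival '
    'of Techno-Eschatology."'
)

# reply = ask_llm(prompt)   # hypothetical call; substitute a real client
reply = "word " * 287       # stand-in reply so the check is demonstrable
print(within_word_budget(reply))  # True: 287 is within 300 +/- 30
```

The one load-bearing idea here is verifying the length constraint locally rather than trusting the model's own count, since LLMs routinely miss exact word budgets.
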
Ben Zweibelson @BZweibelson
@D_The_Husband Only people fixated on belt color look at belt color. Enjoy the longer journey and be color blind in BJJ.
Replies: 0 · Reposts: 1 · Likes: 1 · Views: 68

D @D_The_Husband
As a struggling blue belt who is currently in a plateau, I've come to peace with the fact that this will never end. Ups and downs with grappling. There is no 'beating jiu jitsu.' There are only moments where things click, and your body will now forever remember the movement and your mind the technique. But there will always be refinement. And your relationship with losing will evolve over time. You're not 'losing' as much as you are gaining insight into what you need to work on. And a tap is just a restart to try again. Before you know it (this is only a guess, as I'm not there yet) you plateau at different stages of your grappling journey, and your losses will be miles ahead of what you used to struggle with. I can't wait to help a struggling blue belt one day when I get to purple, brown, and black. How do you deal with plateaus?
Replies: 39 · Reposts: 3 · Likes: 78 · Views: 4.2K

Steve Skojec @SteveSkojec
This is a really thoughtful reflection. I didn’t intend to watch the whole thing, but I ended up doing it anyway. AI is like playing a hard game you can’t beat with cheat codes on. It’s amazing at first, but it becomes boring very quickly. But worse than that, it does something to your brain that ruins the game. If you turn the cheat codes off, you become acutely aware that you’re now struggling unnecessarily. You can’t forget how easy it was, but you don’t want it to be that easy, because it takes all the fun out of it. Now the inability to unsee what you’ve seen creates a tension that causes you to lose interest in even continuing to play. The magic is gone. You’ve broken the spell. AI is doing this to life. And the societal consequences are going to be enormous.
Mo @atmoio

I was a 10x engineer. Now I'm useless.
Replies: 134 · Reposts: 271 · Likes: 3.4K · Views: 511.9K

Ben Zweibelson @BZweibelson
This one got some traction recently; reposting for my network in case they missed it the first time around.
Ben Zweibelson @BZweibelson

[Quoted tweet: "Why I’m asking you to invest 90–120 focused minutes in a 50-page article on Strong AI and the future of war..."; the full text appears later in this timeline.]
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 105

Ben Zweibelson @BZweibelson
New free Substack blog entry on a recent prompt-engineering debacle I encountered this week, and what I learned from it. The link below provides the entire article, and inside it I host the HTML links to a few articles I uploaded to the LLM to get the results. open.substack.com/pub/zweibelson…
Replies: 0 · Reposts: 0 · Likes: 2 · Views: 107

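The Substack post itself is not reproduced here, but the tweet describes uploading several articles to an LLM to get results. As a hedged sketch of one common way to do that, packing each source document inside explicit delimiters so the model can attribute claims, here is a small example; the file names and the exact prompt wording are assumptions, not details from the post.

```python
# Hypothetical sketch: packing source articles into one delimited LLM prompt.
# File paths are placeholders; nothing here comes from the Substack post.
from pathlib import Path

def build_prompt(question: str, article_paths: list[str]) -> str:
    """Wrap each source article in explicit tags so the model can
    answer from, and cite, specific documents rather than blending them."""
    sections = []
    for i, path in enumerate(article_paths, start=1):
        text = Path(path).read_text(encoding="utf-8")
        sections.append(f"<article id={i} src={path}>\n{text}\n</article>")
    return (
        "Answer using ONLY the articles below, and cite the article id "
        "for every claim.\n\n"
        + "\n\n".join(sections)
        + f"\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    prompt = build_prompt(
        "What common thread runs across these pieces?",
        ["article1.html", "article2.html"],  # placeholder file names
    )
    print(prompt[:400])  # preview before sending to an LLM of your choice
```

Delimiting documents and demanding citations are cheap guards against a model silently blending sources; whether that matches the specific debacle is left to the linked article.
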
Dominick Romano @dromanocpm
Holy moly, has my timeline missed this gem! Wow. As always, I have to reiterate that @BZweibelson is one of America's National Treasures when it comes to the intersection of innovation and war. His writings are teachings of where history holds the clues for the future.
Ben Zweibelson @BZweibelson

[Quoted tweet: "Why I’m asking you to invest 90–120 focused minutes in a 50-page article on Strong AI and the future of war..."; the full text appears later in this timeline.]
Replies: 1 · Reposts: 2 · Likes: 9 · Views: 540

Ben Zweibelson @BZweibelson
'Bad War Stories' is the #1 new release in Amazon's Iraq History book category and ships worldwide on 05 MAR. It is a collection of real stories from my combat tours in Iraq and Afghanistan, cast in a style where 'Pulp Fiction' meets 'Catch-22' and 'The Things They Carried'. You can order it now at the link below. Humbled to get wonderful endorsements from LTG (ret.) Dan Bolger, Dr. Mark Lacy (Lancaster University), and Dr. Aaron Jackson (Swedish Defence University) so far. amazon.com/Bad-War-Storie…
Replies: 1 · Reposts: 1 · Likes: 4 · Views: 413

Ben Zweibelson @BZweibelson
"Reconceptualizing War" is on Amazon at the lowest price yet; $10 with free shipping. This deep dive into the philosophy of conflict and how we sociologically frame war is my 756 page opus. If you wonder about war theories such as how Clausewitz, Jomini, Sun Tzu, or the Marxist works of Marx, Lenin, Mao, and others relate to one another in the modern wars of the last century plus, this may be a fun book to dive into. I also deeply investigate ideological war movements (extremist terror organizations), death-wish cults, anarchists, ANTIFA, and movements that use Social Marxism (I term it 'wokefare') to disrupt and dismantle Westphalian nation states. If you already grabbed a copy, send me a book pic and let me know how you liked or disliked it. amazon.com/Reconceptualiz…
Replies: 1 · Reposts: 1 · Likes: 3 · Views: 168

Ben Zweibelson @BZweibelson
Why I’m asking you to invest 90–120 focused minutes in a 50-page article on Strong AI and the future of war...

Most of the conversation right now is exactly where the Department of War’s new AI Acceleration Strategy (Jan 2026) wants it: how do we move faster with today’s narrow AI? Swarms. Agents. Better targeting. Tighter OODA loops. Beat China in the race for incremental advantage. That race matters. But it’s the wrong race if Strong AI (AGI) arrives in the next 5–10 years. Or we might win it, only to then lose it in ways that seem paradoxical yet will be precisely how the AGI genie escapes the lamp.

In this new piece for Marine Corps University Press, I argue that AGI is not “narrow AI at higher speed and scale.” It is a phase change — the strategic equivalent of moving from atomic to thermonuclear weapons, except the explosion happens in epistemology and societal cohesion, not just physics. I call the new form phantasmal war: conflict where armies may never form because shared reality itself fractures first. I call the new philosophy techno-eschatology.

You’ll meet the three tribes already fighting over this future (the Boomers who say “it’s just better tools,” the Doomers who see extinction, and the Groomers who see utopia). You’ll get a crisp distinction between today’s useful narrow AI and tomorrow’s incomprehensible, self-improving systems. And then we finish with a Fermi-paradox-style question that quietly reframes everything we think we know about deterrence, victory, and the profession of arms.

Yes, it’s 50 pages. Yes, it will take real concentration. But the first 8–10 pages are written so that if you don’t feel the hook, you can walk away guilt-free. Most people who finish tell me it permanently shifts how they see the next decade; at least, that is what I have heard in the week it has been online. : )

Free, open-access link below. No paywall, no gatekeeping. If you read it, I’d genuinely value your honest take in the comments — especially if you disagree. The only bad outcome is if we keep sprinting down the narrow-AI track while the ground under the entire board disappears. usmcu.edu/outreach/Marin…

What do you think — worth the time, or am I over-hyping the shift?
Replies: 0 · Reposts: 3 · Likes: 8 · Views: 924

Ben Zweibelson @BZweibelson
"Potentially, the next generation of AGI by 2040 could catapult civilization another century or two forward, providing in 2042 the ideas and solutions we were not expected to discover until the 2240s or 2340s. This sort of thinking may seem ridiculous today in 2026, when LLMs are still making obvious errors, hallucinations, and other examples of AI and programmer biases.  Yet, AI ethics are complex in that AGI should produce a superintelligence beyond what the human species is capable of naturally. Bostrom and Yudkowsky bait readers with: “How do you build an AI which, when it executes, becomes more ethical than you?” Yudkowsky and Soares might modify this into: “Or becomes far more capable in manipulating you so that you believe you are doing the ethically correct behavior that instead cedes advantage to the AGI.” This quickly moves into philosophical and existential discussions, which permits one concluding thought about the purpose of humanity in the vast cosmos. Although some readers might find such thinking too abstract for contemporary military affairs, humans really need to look to the stars above to consider why they are so far alone in the universe and able to wage war amongst themselves as they design it." Read more here at the Marine Corps University Press (MCUP): usmcu.edu/outreach/Marin…
Replies: 0 · Reposts: 0 · Likes: 2 · Views: 222

Ben Zweibelson @BZweibelson
"Unlike previous conflicts that, in how Carl von Clausewitz, Antoine-Henri Jomini, J. F. C. Fuller, and others articulated for a Westphalian, Newtonian, and Baconian context (the Free World), war is exercised to a decisive and violent conclusion in the physical world against state instruments of power (armies) to collapse political will to resist (morale, esprit de corps, societal determination), a phantasmal war does something else. The objective becomes not the physical things and collective will of a society in conflict, but the order and rational of an opponent’s shared sense of reality—and by breaking it, those human participants no longer can participate or even comprehend the conflict unfolding. The distinction is between that of how humans have understood reality using their evolutionary yet biologically limited gifts of superior cognition to dominate the entire ecosystem of this planet and that of humans creating AI that vastly outperforms them cognitively, can effortlessly reproduce and improve itself, and can play the base motives and behaviors of humans against themselves." Read more with free PDF download at the Marine Corps University Press: usmcu.edu/outreach/Marin…
Replies: 0 · Reposts: 1 · Likes: 1 · Views: 77

Ben Zweibelson @BZweibelson
"Today, AI cannot produce phantasmal conflict, although the myriad technological and sociological effects of widespread AI usage does carry echoes of what AGI will accelerate in scale, scope, and potency. AGI will be the ultimate war machine for generating phantasmal warfare, not because AGI cannot win in traditional conflicts designed with physical things and kinetic destruction. AGI will instead saturate the physical and social reality that humans rely upon with disruptions, distortions, and misinformation so convincingly real that many will be unable to distinguish between fantasy and reality. AGI could, for example, simultaneously execute at scale countless unattributable cyberattacks, collapse financial markets, flood social media with hyper-realistic deepfakes (or entirely AI designed false content), paralyze critical infrastructure, or collapse many of the essential governmental guardrails that regulate a normal society without firing a single bullet. Although certain military targets would require kinetic responses, the real battlefield for phantasmal war is within individual human minds and across the entire societally maintained construction of reality. Perception itself would be under constant attack, with an AGI adversary everywhere and nowhere simultaneously, and the total collapse of meaning becoming far more devastating than even nuclear devastation.  Although this seems rather academically obtuse, AGI is the technological gateway to depart from traditional warfare into something far more horrifying within our collective minds. AGI-enabled phantasmal war targets an adversary’s societal epistemology, which is their collective ability to distinguish fact from fiction, up from down, and right from wrong. Winning in wars like this has less to do with whether an army is defeated or not but whether that society even knows what is real and whether they are really free or simply confined in some prison they cannot even comprehend." Read more at the Marine Corps University Press: usmcu.edu/outreach/Marin…
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 48

Ben Zweibelson @BZweibelson
@BjjTip IMO: Hip position is key. Almost any wrestling throw requires you to lower your hips below your opponent's, then move your hips in (close to theirs, under theirs) for the dynamic movement of the throw. Failure to do either means your opponent should counter. The shoulder follows the hips.
Replies: 2 · Reposts: 0 · Likes: 1 · Views: 18

Ben Zweibelson @BZweibelson
New Substack post up on the space domain and its future. This comes from a draft chapter I am preparing for an International Relations and Security Affairs textbook for undergraduate students. Available here: open.substack.com/pub/zweibelson…
Replies: 0 · Reposts: 0 · Likes: 2 · Views: 78