Kira (Hindsight Capital) @Klaudnin3
The 2028 Global Intelligence Crisis That Wasn't
A Macro Memo from the Actual June 2028, Not the Fanfic Version
The unemployment rate printed 3.8% this morning, roughly where it's been all year. The market yawned. The S&P 500 is at 7,400, which is somehow both a record high and a disappointment to people who were promised 10,000 by every DCF model with an "AI Upside Case" tab.
We are writing this memo because in February 2026, a widely circulated Substack piece (hereafter, the Citrini memo) predicted that by this exact date, the S&P would be down 38%, unemployment would be 10.2%, and the mortgage market would be in free fall. It was beautifully written, rigorously structured, and wrong about nearly everything. We feel it is our duty — nay, our privilege — to conduct the post-mortem.
In the authors' defense, it was explicitly labeled a "scenario, not a prediction." In our defense, 2,321 people liked it and several macro Twitter accounts made it their entire personality for six months.
How It Actually Started
In late 2025, agentic coding tools did indeed take a step-function jump in capability. The Citrini memo predicted that a competent developer could now "replicate the core functionality of a mid-market SaaS product in weeks."
This was true! What the memo failed to mention was that a competent developer could also replicate the core functionality of a mid-market SaaS product in weeks in 2019. The difference was that back then, nobody did it because maintaining software is horrible, and in 2026, nobody did it because maintaining software is still horrible.
The procurement manager at the Fortune 500 who told the vendor he'd been "in conversations with OpenAI about replacing them entirely"? He got his 30% discount, then spent the next eighteen months trying to get his internal AI prototype to handle SSO correctly. It could write a Shakespearean sonnet about SAML authentication but could not, for the life of it, actually implement SAML authentication without hallucinating an endpoint that didn't exist. He renewed the vendor contract at full price the following year.
The memo predicted ServiceNow's $NOW net new ACV growth would decelerate to 14% as customers cut seats. In reality, ServiceNow reported accelerating growth in 2027 because — and this is the part the doom thesis always misses — the AI agents that companies deployed generated more workflow tickets, not fewer. Every autonomous agent needed monitoring, logging, exception handling, and escalation paths. ServiceNow didn't sell fewer seats. They sold seats to robots.
SERVICENOW Q3 2027: "AI AGENT MANAGEMENT" BECOMES FASTEST-GROWING MODULE; CEO JOKES "OUR BEST CUSTOMERS ARE NOW NON-HUMAN" | Bloomberg, October 2027
The Friction That Refused to Die
The Citrini memo's most elegant argument was that AI agents would eliminate friction, and that trillions in enterprise value depended on friction persisting. Subscriptions that passively renewed, insurance policies nobody re-shopped, delivery apps that exploited laziness — all would be ruthlessly optimized away.
Here's what actually happened with subscriptions: AI agents did start cancelling unused subscriptions on behalf of users. Subscription companies responded by making cancellation flows so Byzantine that the AI agents needed other AI agents to navigate them. An arms race ensued. By Q2 2027, the average subscription cancellation flow involved a 47-step conversational gauntlet with an AI retention specialist. The median consumer's agent spent more tokens trying to cancel a $9.99/month meditation app than the consumer had spent meditating in the entire previous year.
Net result on subscription revenue: approximately zero.
The memo predicted agents would disintermediate travel booking platforms. In practice, when agents assembled "optimal" itineraries, they produced trips that were technically cheaper but involved three layovers, a 4am bus transfer in Ljubljana, and a hotel 45 minutes from the city center with a 4.1-star rating that turned out to be an Airbnb above a nightclub. Consumers used the agent, looked at its itinerary, said "absolutely not," and went back to $BKNG.
It turns out that what humans call "preferences" and what a cost-optimization function calls "irrational friction" are the same thing. People don't want the cheapest flight. They want the one that doesn't leave at 5am. We knew this. We have always known this. We briefly forgot because a Substack told us machines would make us rational.
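For the quants in the audience, the travel point reduces to a misspecified objective function: the agent minimized price, while the human minimizes price plus the friction terms it left out. A toy sketch of that disagreement (the flights, prices, and penalty weights are all our own inventions for illustration, not anything from the memo):

```python
# Two candidate itineraries: the "cheap" one with the 5am departure and
# layovers, and the expensive nonstop. All numbers invented.
flights = [
    {"name": "3 layovers, 5am depart", "price": 240, "layovers": 3, "depart_hour": 5},
    {"name": "nonstop, 10am depart",   "price": 410, "layovers": 0, "depart_hour": 10},
]

def cost_only(f):
    # What the naive booking agent optimizes: the sticker price.
    return f["price"]

def with_preferences(f):
    # Price the friction humans actually care about: say each layover
    # "costs" $120 of goodwill, and a pre-7am departure costs $150.
    penalty = 120 * f["layovers"] + (150 if f["depart_hour"] < 7 else 0)
    return f["price"] + penalty

print(min(flights, key=cost_only)["name"])         # the agent's pick
print(min(flights, key=with_preferences)["name"])  # the human's pick
```

Same data, different objective, opposite answer; which is the whole story of the $BKNG non-disruption in two lambdas.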
The DoorDash $DASH Thesis, or "You Underestimate How Lazy People Are"
The memo called DoorDash the "poster child" of habitual intermediation destruction. Agents would compare twenty delivery apps and pick the cheapest. Vibe-coded competitors would flood the market. DoorDash's moat of "you're hungry, you're lazy, this is the app on your home screen" would evaporate.
Counterpoint: have you met people?
The vibe-coded delivery competitors did indeed launch. Dozens of them. They had names like Fetchr, GrubAgent, NomNom AI, and — we are not making this up — "Deliver.sol." They offered lower fees by passing 90-95% through to drivers.
They also had no customer service, no restaurant onboarding team, no logistics optimization, no insurance, and no way to handle the moment when a driver ate half your order and marked it "delivered." The apps worked flawlessly in demo videos and catastrophically in the rain on a Friday night in Brooklyn. By Q3 2027, the subreddit r/VibecodeDeliveryHorror had 400,000 subscribers and a pinned post titled "My agent ordered me sushi from a restaurant that closed in 2019."
DoorDash stock is up 35% from the date of the Citrini memo.
The Payments Armageddon That Wasn't
Perhaps the most creative prediction was that AI agents would route around card interchange using stablecoins, destroying Visa $V, Mastercard $MA, and American Express $AXP.
What actually happened: agents tried to pay with stablecoins. Merchants said no. Not because they couldn't accept them, but because the fraud liability framework for stablecoin payments did not exist, and no CFO in America was going to accept payment in magic internet money to save 2% on interchange when the chargeback protections that interchange funded were the only thing standing between them and an army of AI agents submitting fraudulent refund claims.
That's the thing nobody modeled. AI didn't just empower consumers. It empowered fraud. The same agents that could price-optimize your protein bars could also generate synthetic identities, file fake chargebacks, and exploit return policies at scale. Visa and Mastercard's moat turned out not to be friction — it was trust infrastructure. When fraud exploded in early 2027, merchants practically begged to keep paying interchange.
MASTERCARD Q1 2028: NET REVENUES +11% Y/Y; CEO CITES "UNPRECEDENTED DEMAND FOR AI-POWERED FRAUD DETECTION SUITE" AND "RETURN TO CARD RAILS FROM ALTERNATIVE PAYMENT EXPERIMENTS" | Bloomberg, April 2028
Mastercard didn't die. It sold the antidote.
The Mortgage Crisis That Was Actually Just San Francisco Being San Francisco
The memo's most alarming prediction was that the $13 trillion mortgage market would crack because white-collar workers would lose their income and default on their loans.
What actually happened in housing: San Francisco home prices did decline, approximately 8% peak-to-trough. This was treated as a national emergency by San Francisco homeowners and as "Tuesday" by everyone who'd watched San Francisco home prices fall 8% roughly every four years since the city was founded.
The national housing market was fine, because the national housing market has a problem that is far more powerful than AI displacement: there aren't enough houses. The US has been underbuilding for fifteen years. A structural housing shortage does not resolve because some product managers in SOMA lost their jobs. If anything, the modest cooling in tech-heavy metros made housing more affordable for the nurses, teachers, and tradespeople who'd been priced out — people whose jobs, it should be noted, AI has not disrupted in any meaningful way.
The 780-FICO borrowers the memo flagged? Most of them had two-income households, 30-year fixed mortgages locked at 3-4% in 2020-2021, and six months of savings. The ones who lost their jobs found new ones — not always at the same pay, but enough to make a mortgage payment that was locked in at 2021 rates. Turns out a $2,400/month mortgage is pretty easy to service even at $120k instead of $180k, especially when your rate is 3.25% and the alternative is paying $3,500/month in rent.
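The mortgage arithmetic above is easy to check. A quick sketch, where the roughly $550k balance is our own back-solved assumption (only the ~$2,400 payment, the 3.25% rate, and the $120k/$180k incomes appear above):

```python
def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Standard amortized payment on a fixed-rate mortgage."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Assumed balance of $550k at the locked-in 3.25% -> roughly $2,394/month.
payment = monthly_payment(550_000, 0.0325)

for income in (180_000, 120_000):
    share = payment * 12 / income
    print(f"${income:,} income -> {share:.0%} of gross goes to the mortgage")
```

Even after the pay cut, the payment eats about 24% of gross income, comfortably inside the classic 28% front-end guideline, which is why the default wave never showed up.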
FANNIE MAE: SERIOUS DELINQUENCY RATE REMAINS AT 0.6%, NEAR ALL-TIME LOWS; "AI DISPLACEMENT CONCERNS HAVE NOT MATERIALIZED IN CREDIT PERFORMANCE" | Fannie Mae Q2 2028 Credit Supplement
The Job Market: Disrupted, Not Destroyed
We are not going to pretend that AI has had zero impact on employment. It has. The labor market is different. Some categories of work have genuinely contracted — particularly rote analytical work, first-draft content generation, and basic code production.
But the Citrini memo made the classic futurist error: it modeled job destruction in high resolution and job creation in zero resolution. It said AI "created new jobs" but "for every new role AI created, it rendered dozens obsolete." This sounded profound and was completely made up.
Here's what they missed:
1. AI made existing jobs bigger, not extinct. The product manager at Salesforce didn't get replaced by Claude. She used Claude to do the work of three product managers, got promoted, and now manages a portfolio twice the size. Companies didn't fire 60% of their PMs. They gave the surviving PMs AI tools and expanded their scope. Headcount was flat. Output tripled.
2. The "build it yourself" thesis created more jobs than it destroyed. All those companies that tried to replace their SaaS vendors with internal AI-built tools? They needed people to manage those tools. A new class of "AI operations" roles emerged — not the fake "prompt engineer" jobs from 2023, but genuine systems integration, agent orchestration, and reliability engineering roles. The BLS hasn't even finished categorizing them yet.
3. Humans got weird. The fastest-growing job categories of 2027-2028 were things nobody predicted: AI output auditors, "authenticity consultants" for brands that wanted to prove their content was human-made, in-person experience designers (turns out when everything digital gets commoditized, people pay more for analog), and — our personal favorite — professional "vibe curators" for corporate events, which is just party planning with a $300/hour rate and a LinkedIn title.
The unemployment rate is 3.8%. It was 3.7% when the memo was written. The composition has shifted, but the apocalypse has not arrived.
The Real Feedback Loop They Missed
The Citrini memo described a "negative feedback loop with no natural brake." AI gets better → companies cut workers → workers spend less → economy weakens → companies buy more AI → repeat until civilization collapses.
The natural brake they missed was called "shareholders."
When companies cut too aggressively, quality collapsed. The first wave of AI-driven layoffs in 2026 did boost margins. The second wave, in early 2027, started producing disasters. AI-generated customer communications that were subtly unhinged. Product launches with no human gut-check that flopped spectacularly. Legal filings with hallucinated case citations (again). A major airline's AI-managed pricing engine that accidentally sold 40,000 business class tickets from New York to London for $12 each before a human noticed.
UNITED AIRLINES Q2 2027: $380M CHARGE RELATED TO "AUTONOMOUS PRICING SYSTEM ERROR"; CEO ANNOUNCES "HUMAN-IN-THE-LOOP" MANDATE FOR ALL REVENUE MANAGEMENT SYSTEMS | Bloomberg, July 2027
Companies re-hired. Not to the same levels, and not the same roles. But the "fire everyone, let the robots handle it" thesis ran directly into the wall of "the robots are confidently wrong 3% of the time and that 3% is extremely expensive."
The negative feedback loop had a natural brake, and its name was liability.
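If you want the brake in one line of arithmetic: automation pays only when the per-decision savings exceed the expected cost of the confidently-wrong tail. A toy expected-value check (every dollar figure here is invented; only the 3% error rate comes from the argument above):

```python
# Back-of-the-envelope for the "fire everyone" thesis. Numbers invented.
savings_per_decision = 2.00   # $ saved by removing the human check
error_rate = 0.03             # "confidently wrong 3% of the time"
cost_per_error = 500.00       # average blast radius of one bad call

expected_value = savings_per_decision - error_rate * cost_per_error
print(f"EV per automated decision: ${expected_value:+.2f}")
```

With those (made-up but not crazy) numbers the expected value is deeply negative, which is a fancy way of saying what United's CFO said with a $380M charge.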
India, Actually
The memo predicted India's IT services sector would collapse, the rupee would crash 18%, and the IMF would come knocking.
What actually happened: TCS, Infosys, and Wipro did see growth slow in traditional staff augmentation. They responded by — and stop us if you've heard this before — selling AI services. It turns out that the same cost arbitrage that made Indian developers attractive for manual coding also makes Indian firms attractive for AI implementation, training, and management. They pivoted from "we'll give you 500 developers" to "we'll give you 50 developers and 450 AI agents managed by our platform."
The rupee is roughly where it was in February 2026. The IMF has not called.
What We Actually Got Right and Wrong
The bears got right: AI is transforming the economy. Wage growth for certain white-collar categories has stagnated. Inequality has widened. The political tensions around AI are real and growing. Some business models — particularly those built purely on information asymmetry — are under genuine pressure.
The bears got wrong: The speed, the severity, and the linearity. The Citrini memo extrapolated every trend at its maximum velocity for 28 months and assumed no adaptation, no friction, no regulatory response, no human irrationality, no corporate incompetence, and no second-order effects that cut the other way.
In short, they modeled the economy as a physics problem and forgot it's a biological one. Systems adapt. Humans are stubborn. Institutions are slow but not dead. And the most powerful force in the American economy is not artificial intelligence.
It's inertia.
Closing
We say this with genuine respect for the original authors: it was a good piece. Thoughtful, well-structured, and asking the right questions. The scenario was worth gaming out. But the scenario assumed a frictionless spherical economy in a vacuum, and we live in a world where a Fortune 500 company once took nine months to change its font.
The canary is still alive. It just learned to use ChatGPT and is now posting on LinkedIn about its "AI-augmented singing journey."
The S&P is at 7,400. The mortgage market is fine. DoorDash still has a 28% take rate. And somewhere, a procurement manager is telling a SaaS vendor he could replace them with AI, while secretly praying they don't call his bluff.
Disclaimer: This is a rebuttal, not a prediction. If the 2028 Global Intelligence Crisis actually happens, please don't forward this back to us.