Milind Mehere
3.4K posts

Milind Mehere
@MilindMehere
Founder @yieldstreet, co-founder @yodle. Entrepreneur, idea generator, Boston sports fan, Scotch aficionado, change agent, loves disruptive models
New York City · Joined June 2009
764 Following · 1.2K Followers
Milind Mehere retweeted

Second mover advantage has always been powerful. AltaVista, Google. Myspace, Facebook. AppDynamics, Datadog. OpenAI, Anthropic. We're seeing this at GC with @WeAreLegora in legal and @parloa_ai in CX.
Every tech era has a different source of second-mover advantage. Internet era = distribution and reaching users at scale. Cloud era = design and building workflows that got people to use the tech.
Today we are in the AI era. Intelligence is an ever-improving input. Do you have the right models, infra, and capability to build products that deliver what AI unlocks for customers?
Founders shouldn't be afraid if someone is currently the leader. Most industries have not yet seen the dominant stack.
Milind Mehere retweeted

>be Travis Kalanick
>born 1976 in Los Angeles
>middle class, San Fernando Valley
1998:
>co-found Scour
>peer-to-peer file sharing
>like Napster but for everything
>Hollywood notices
>they don't like it
2000:
>33 entertainment companies sue
>$250 billion lawsuit
>company goes bankrupt
>you're 24
>first company: destroyed by lawyers
2001:
>start Red Swoosh
>same technology, legal this time
>grind for 6 years
>company almost dies multiple times
>sleep on floors
>keep going
2007:
>sell Red Swoosh to Akamai
>$19 million
>your cut: about $2 million
>not rich, but free
2008:
>vacation in Paris with Garrett Camp
>can't get a cab
>Garrett: "what if you could request a car from your phone?"
>the idea sticks
2009:
>co-found UberCab
>start in San Francisco
>push a button, get a ride
>simple
>revolutionary
2010-2014:
>regulators threaten shutdown
>you launch anyway
>create demand, let users scream at politicians
>politicians back down
>repeat in every city
>growth at all costs
the tactics:
>Greyball: identify regulators, show them fake cars
>Hell: track Lyft drivers, poach them
>move fast and break things
>break a lot of things
2016:
>Uber valued at $70 billion
>fastest-growing startup in history
>you own 10%
>on paper, worth $7 billion
2017:
>everything collapses
>Susan Fowler's blog post
>sexual harassment, toxic culture exposed
>your mom dies in a boating accident
>dad seriously injured
>five days later, investors hand you a letter
>demand you resign
>you're getting coup'd
>by people you made rich
the manifesto (years later):
>"I left Uber in 2017 heartbroken"
>"I had been torn away from an idea and a movement I had poured my life into"
>"an investor decided to exploit this vulnerable moment"
>"I bled, but I did not perish"
2017-2018:
>sell most of your Uber stake
>$2.7 billion total
>start something new
>but go dark
>full stealth
City Storage Systems:
>the name nobody recognizes
>purposely obscure
>employees not allowed to put it on LinkedIn
>thousands of employees
>invisible company
>"we've been in stealth mode for eight years"
CloudKitchens:
>ghost kitchens for delivery apps
>no dining rooms, just cooking
>raise $400 million from Saudi Arabia
>"can you get prepared food delivered so efficiently it approaches grocery store cost?"
>"if you do, you do to the kitchen what Uber did to the car"
March 2026:
>emerge from stealth
>rename everything to Atoms
>not just food anymore
>robotics for food, mining, transportation
>"gainfully employed robots"
>specialized machines, not humanoids
>"the industrial thing is probably our main jam"
the portfolio:
>Atoms Food: "infrastructure for better food"
>Atoms Mining: "more productive mines"
>Atoms Transport: "wheelbase for robots"
>acquiring Pronto, Anthony Levandowski's autonomous mining startup
>yes, the guy from the Waymo lawsuit
>the band is getting back together
the backing:
>reportedly has major support from Uber
>the company that kicked him out
>now funding his return
>irony is not dead
the manifesto:
>"I got back up and fought my way back into the arena"
>"back to my calling"
>"back to building"
>"digitizing the physical world is my life's work"
from $250 billion lawsuit at 24
>to building the biggest startup in history
>to getting coup'd at 40
>to eight years in stealth
>to emerging with thousands of employees and robots
Travis Kalanick.
the most aggressive founder of his generation.


Milind Mehere retweeted

I'm joining SpaceX and xAI, working closely with Elon and team to build superintelligence.
Together SpaceX and xAI combine physical and digital intelligence under a leader who understands hardware at the deepest level. Add a high-agency culture with frontier-scale resources, and you get the possibility to achieve something truly unique.
I’m excited to advance the fields I’ve obsessed over for years, from robotics research to building AI models on the founding teams of Mistral and TML. Both were extraordinary journeys with extraordinary people that shaped how I think about building intelligence from the ground up.
Grateful for everything that brought me here and can’t wait to get started.

Milind Mehere retweeted

Sequoia (@sequoia) Partner @JulienBek tells us why the next $1 trillion company will be a software company masquerading as a services firm:
"Ultimately, if you look at the TAM today, for every dollar that you spend on software, $6 are spent on services".
"If you sell the tools, the models are getting better and better and so you're at risk... whereas, if you sell the services, you're actually delivering outcomes."
"Until now, we could really just go after the $1, but now with services first and human at the centre, we think you can capture the six".
Milind Mehere retweeted

The 2028 Global Intelligence Crisis That Wasn't
A Macro Memo from the Actual June 2028, Not the Fanfic Version
The unemployment rate printed 3.8% this morning, roughly where it's been all year. The market yawned. The S&P 500 is at 7,400, which is somehow both a record high and a disappointment to people who were promised 10,000 by every DCF model with an "AI Upside Case" tab.
We are writing this memo because in February 2026, a widely circulated Substack piece predicted that by this exact date, the S&P would be down 38%, unemployment would be 10.2%, and the mortgage market would be in free fall. It was beautifully written, rigorously structured, and wrong about nearly everything. We feel it is our duty — nay, our privilege — to conduct the post-mortem.
In the authors' defense, it was explicitly labeled a "scenario, not a prediction." In our defense, 2,321 people liked it and several macro Twitter accounts made it their entire personality for six months.
How It Actually Started
In late 2025, agentic coding tools did indeed take a step function jump in capability. The Citrini memo predicted that a competent developer could now "replicate the core functionality of a mid-market SaaS product in weeks."
This was true! What the memo failed to mention was that a competent developer could also replicate the core functionality of a mid-market SaaS product in weeks in 2019. The difference was that back then, nobody did it because maintaining software is horrible, and in 2026, nobody did it because maintaining software is still horrible.
The procurement manager at the Fortune 500 who told the vendor he'd been "in conversations with OpenAI about replacing them entirely"? He got his 30% discount, then spent the next eighteen months trying to get his internal AI prototype to handle SSO correctly. It could write a Shakespearean sonnet about SAML authentication but could not, for the life of it, actually implement SAML authentication without hallucinating an endpoint that didn't exist. He renewed the vendor contract at full price the following year.
The memo predicted ServiceNow's $NOW net new ACV growth would decelerate to 14% as customers cut seats. In reality, ServiceNow reported accelerating growth in 2027 because — and this is the part the doom thesis always misses — the AI agents that companies deployed generated more workflow tickets, not fewer. Every autonomous agent needed monitoring, logging, exception handling, and escalation paths. ServiceNow didn't sell fewer seats. They sold seats to robots.
SERVICENOW Q3 2027: "AI AGENT MANAGEMENT" BECOMES FASTEST-GROWING MODULE; CEO JOKES "OUR BEST CUSTOMERS ARE NOW NON-HUMAN" | Bloomberg, October 2027
The Friction That Refused to Die
The Citrini memo's most elegant argument was that AI agents would eliminate friction, and that trillions in enterprise value depended on friction persisting. Subscriptions that passively renewed, insurance policies nobody re-shopped, delivery apps that exploited laziness — all would be ruthlessly optimized away.
Here's what actually happened with subscriptions: AI agents did start cancelling unused subscriptions on behalf of users. Subscription companies responded by making cancellation flows so Byzantine that the AI agents needed other AI agents to navigate them. An arms race ensued. By Q2 2027, the average subscription cancellation flow involved a 47-step conversational gauntlet with an AI retention specialist. The median consumer's agent spent more tokens trying to cancel a $9.99/month meditation app than the consumer had spent meditating in the entire previous year.
Net result on subscription revenue: approximately zero.
The memo predicted agents would disintermediate travel booking platforms. In practice, when agents assembled "optimal" itineraries, they produced trips that were technically cheaper but involved three layovers, a 4am bus transfer in Ljubljana, and a hotel 45 minutes from the city center with a 4.1-star rating that turned out to be an Airbnb above a nightclub. Consumers used the agent, looked at its itinerary, said "absolutely not," and went back to $BKNG.
It turns out that what humans call "preferences" and what a cost-optimization function calls "irrational friction" are the same thing. People don't want the cheapest flight. They want the one that doesn't leave at 5am. We knew this. We have always known this. We briefly forgot because a Substack told us machines would make us rational.
The DoorDash $DASH Thesis, or "You Underestimate How Lazy People Are"
The memo called DoorDash the "poster child" of habitual intermediation destruction. Agents would compare twenty delivery apps and pick the cheapest. Vibe-coded competitors would flood the market. DoorDash's moat of "you're hungry, you're lazy, this is the app on your home screen" would evaporate.
Counterpoint: have you met people?
The vibe-coded delivery competitors did indeed launch. Dozens of them. They had names like Fetchr, GrubAgent, NomNom AI, and — we are not making this up — "Deliver.sol." They offered lower fees by passing 90-95% through to drivers.
They also had no customer service, no restaurant onboarding team, no logistics optimization, no insurance, and no way to handle the moment when a driver ate half your order and marked it "delivered." The apps worked flawlessly in demo videos and catastrophically in the rain on a Friday night in Brooklyn. By Q3 2027, the subreddit r/VibecodeDeliveryHorror had 400,000 subscribers and a pinned post titled "My agent ordered me sushi from a restaurant that closed in 2019."
DoorDash stock is up 35% from the date of the Citrini memo.
The Payments Armageddon That Wasn't
Perhaps the most creative prediction was that AI agents would route around card interchange using stablecoins, destroying Visa $V, Mastercard $MA, and American Express $AXP.
What actually happened: agents tried to pay with stablecoins. Merchants said no. Not because they couldn't accept them, but because the fraud liability framework for stablecoin payments did not exist, and no CFO in America was going to accept payment in magic internet money to save 2% on interchange when the chargeback protections that interchange funded were the only thing standing between them and an army of AI agents submitting fraudulent refund claims.
That's the thing nobody modeled. AI didn't just empower consumers. It empowered fraud. The same agents that could price-optimize your protein bars could also generate synthetic identities, file fake chargebacks, and exploit return policies at scale. Visa and Mastercard's moat turned out not to be friction — it was trust infrastructure. When fraud exploded in early 2027, merchants practically begged to keep paying interchange.
MASTERCARD Q1 2028: NET REVENUES +11% Y/Y; CEO CITES "UNPRECEDENTED DEMAND FOR AI-POWERED FRAUD DETECTION SUITE" AND "RETURN TO CARD RAILS FROM ALTERNATIVE PAYMENT EXPERIMENTS" | Bloomberg, April 2028
Mastercard didn't die. It sold the antidote.
The Mortgage Crisis That Was Actually Just San Francisco Being San Francisco
The memo's most alarming prediction was that the $13 trillion mortgage market would crack because white-collar workers would lose their income and default on their loans.
What actually happened in housing: San Francisco home prices did decline, approximately 8% peak-to-trough. This was treated as a national emergency by San Francisco homeowners and as "Tuesday" by everyone who'd watched San Francisco home prices fall 8% roughly every four years since the city was founded.
The national housing market was fine, because the national housing market has a problem that is far more powerful than AI displacement: there aren't enough houses. The US has been underbuilding for fifteen years. A structural housing shortage does not resolve because some product managers in SOMA lost their jobs. If anything, the modest cooling in tech-heavy metros made housing more affordable for the nurses, teachers, and tradespeople who'd been priced out — people whose jobs, it should be noted, AI has not disrupted in any meaningful way.
The 780-FICO borrowers the memo flagged? Most of them had two-income households, 30-year fixed mortgages locked at 3-4% in 2020-2021, and six months of savings. The ones who lost their jobs found new ones — not always at the same pay, but enough to make a mortgage payment that was locked in at 2021 rates. Turns out a $2,400/month mortgage is pretty easy to service even at $120k instead of $180k, especially when your rate is 3.25% and the alternative is paying $3,500/month in rent.
FANNIE MAE: SERIOUS DELINQUENCY RATE REMAINS AT 0.6%, NEAR ALL-TIME LOWS; "AI DISPLACEMENT CONCERNS HAVE NOT MATERIALIZED IN CREDIT PERFORMANCE" | Fannie Mae Q2 2028 Credit Supplement
The Job Market: Disrupted, Not Destroyed
We are not going to pretend that AI has had zero impact on employment. It has. The labor market is different. Some categories of work have genuinely contracted — particularly rote analytical work, first-draft content generation, and basic code production.
But the Citrini memo made the classic futurist error: it modeled job destruction in high resolution and job creation in zero resolution. It said AI "created new jobs" but "for every new role AI created, it rendered dozens obsolete." This sounded profound and was completely made up.
Here's what they missed:
1. AI made existing jobs bigger, not extinct. The product manager at Salesforce didn't get replaced by Claude. She used Claude to do the work of three product managers, got promoted, and now manages a portfolio twice the size. Companies didn't fire 60% of their PMs. They gave the surviving PMs AI tools and expanded their scope. Headcount was flat. Output tripled.
2. The "build it yourself" thesis created more jobs than it destroyed. All those companies that tried to replace their SaaS vendors with internal AI-built tools? They needed people to manage those tools. A new class of "AI operations" roles emerged — not the fake "prompt engineer" jobs from 2023, but genuine systems integration, agent orchestration, and reliability engineering roles. The BLS hasn't even finished categorizing them yet.
3. Humans got weird. The fastest-growing job categories of 2027-2028 were things nobody predicted: AI output auditors, "authenticity consultants" for brands that wanted to prove their content was human-made, in-person experience designers (turns out when everything digital gets commoditized, people pay more for analog), and — our personal favorite — professional "vibe curators" for corporate events, which is just party planning with a $300/hour rate and a LinkedIn title.
The unemployment rate is 3.8%. It was 3.7% when the memo was written. The composition has shifted, but the apocalypse has not arrived.
The Real Feedback Loop They Missed
The Citrini memo described a "negative feedback loop with no natural brake." AI gets better → companies cut workers → workers spend less → economy weakens → companies buy more AI → repeat until civilization collapses.
The natural brake they missed was called "shareholders."
When companies cut too aggressively, quality collapsed. The first wave of AI-driven layoffs in 2026 did boost margins. The second wave, in early 2027, started producing disasters. AI-generated customer communications that were subtly unhinged. Product launches with no human gut-check that flopped spectacularly. Legal filings with hallucinated case citations (again). A major airline's AI-managed pricing engine that accidentally sold 40,000 business class tickets from New York to London for $12 each before a human noticed.
UNITED AIRLINES Q2 2027: $380M CHARGE RELATED TO "AUTONOMOUS PRICING SYSTEM ERROR"; CEO ANNOUNCES "HUMAN-IN-THE-LOOP" MANDATE FOR ALL REVENUE MANAGEMENT SYSTEMS | Bloomberg, July 2027
Companies re-hired. Not to the same levels, and not the same roles. But the "fire everyone, let the robots handle it" thesis ran directly into the wall of "the robots are confidently wrong 3% of the time and that 3% is extremely expensive."
The negative feedback loop had a natural brake, and its name was liability.
India, Actually
The memo predicted India's IT services sector would collapse, the rupee would crash 18%, and the IMF would come knocking.
What actually happened: TCS, Infosys, and Wipro did see growth slow in traditional staff augmentation. They responded by — and stop us if you've heard this before — selling AI services. It turns out that the same cost arbitrage that made Indian developers attractive for manual coding also makes Indian firms attractive for AI implementation, training, and management. They pivoted from "we'll give you 500 developers" to "we'll give you 50 developers and 450 AI agents managed by our platform."
The rupee is roughly where it was in February 2026. The IMF has not called.
What We Actually Got Right and Wrong
The bears got right: AI is transforming the economy. Wage growth for certain white-collar categories has stagnated. Inequality has widened. The political tensions around AI are real and growing. Some business models — particularly those built purely on information asymmetry — are under genuine pressure.
The bears got wrong: The speed, the severity, and the linearity. The Citrini memo extrapolated every trend at its maximum velocity for 28 months and assumed no adaptation, no friction, no regulatory response, no human irrationality, no corporate incompetence, and no second-order effects that cut the other way.
In short, they modeled the economy as a physics problem and forgot it's a biological one. Systems adapt. Humans are stubborn. Institutions are slow but not dead. And the most powerful force in the American economy is not artificial intelligence.
It's inertia.
Closing
We say this with genuine respect for the original authors: it was a good piece. Thoughtful, well-structured, and asking the right questions. The scenario was worth gaming out. But the scenario assumed a frictionless spherical economy in a vacuum, and we live in a world where a Fortune 500 company once took nine months to change its font.
The canary is still alive. It just learned to use ChatGPT and is now posting on LinkedIn about its "AI-augmented singing journey."
The S&P is at 7,400. The mortgage market is fine. DoorDash still has a 28% take rate. And somewhere, a procurement manager is telling a SaaS vendor he could replace them with AI, while secretly praying they don't call his bluff.
Disclaimer: This is a rebuttal, not a prediction. If the 2028 Global Intelligence Crisis actually happens, please don't forward this back to us.
Milind Mehere retweeted

Bayes’ theorem is probably the single most important thing any rational person can learn.
So many of our debates and disagreements that we shout about are because we don’t understand Bayes’ theorem or how human rationality often works.
Bayes’ theorem is named after the 18th-century English mathematician Thomas Bayes, and essentially it’s a formula that asks: given all of the evidence presented for something, how much should you believe it?
Bayes’ theorem teaches us that our beliefs are not fixed; they are probabilities. Our beliefs change as we weigh new evidence against our assumptions, or our priors. In other words, we all carry certain ideas about how the world works, and new evidence can challenge them.
For example, somebody might believe that smoking is safe, that stress causes mouth ulcers, or that human activity is unrelated to climate change. These are their priors, their starting points. They can be formed by our culture, our biases, or even incomplete information.
Now imagine a new study comes along that challenges one of your priors. A single study might not carry enough weight to overturn your existing beliefs. But as studies accumulate, eventually the scales may tip. At some point, your prior will become less and less plausible.
Bayes’ theorem argues that being rational is not about black and white. It’s not even about true or false. It’s about what is most reasonable based on the best available evidence. But for this to work, we need to be presented with as much high-quality data as possible. Without evidence—without belief-forming data—we are left only with our priors and biases. And those aren’t all that rational.
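The "studies accumulate until the scales tip" process described above can be sketched numerically. Here is a minimal Python illustration of repeated Bayesian updating; the specific numbers (a 90% prior in "smoking is safe", and studies reporting harm being four times as likely when smoking is actually harmful) are invented for illustration, not drawn from any real data:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """One application of Bayes' theorem:
    P(H|E) = P(E|H) * P(H) / P(E),
    where P(E) = P(E|H) * P(H) + P(E|not H) * (1 - P(H))."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Hypothetical prior: 90% confident the hypothesis "smoking is safe" is true.
belief = 0.90

# Each new study reporting harm is evidence E. Assume such a study is
# 4x more likely if smoking is harmful (H false) than if it is safe (H true):
# P(E|H) = 0.2 versus P(E|not H) = 0.8.
for study in range(10):
    belief = bayes_update(belief, p_evidence_given_h=0.2,
                          p_evidence_given_not_h=0.8)
    print(f"after study {study + 1}: P(smoking is safe) = {belief:.4f}")
```

A single study only moves the 90% prior down to about 69%, which matches the intuition that one result rarely overturns a strong belief; after ten consistent studies, the belief has collapsed to a fraction of a percent. The scales tip gradually, then decisively.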











