
Doubling down: Do you know how many creator tooling companies have had an exit > $50m? There have to be at least a few, right? Nope. The answer is zero. Building for creators is a great way to lose money.


Perplexity Computer is more reliable than a CPA for filing taxes.

The real story here is why Anthropic agreed to train on a chip that's objectively slower than Nvidia's best. Trainium2 loses to GB200 on raw TFLOPS. Sounds disqualifying until you understand what matters for training modern reasoning models: memory bandwidth per dollar. Reinforcement learning is memory-bound, not compute-bound. Amazon wins that comparison.

Anthropic's engineers didn't just accept the chip. They co-designed it. Wrote low-level kernels interfacing directly with the silicon. Helped shape the Neuron software stack. In exchange, Amazon built 1.3 gigawatts of dedicated capacity and committed $8 billion.

Then Anthropic went to Google and got 1 million TPUs. Then kept running Nvidia GPUs too. Three chip ecosystems. Three hyperscalers competing for the same workloads. Each one spending billions to be one of three training partners.

Anthropic's run-rate revenue just crossed $30 billion, up from $9 billion six months ago. Over 1,000 enterprise customers spending $1M+ annually. The company that chose the "slower" chip is now the fastest-growing AI lab on the planet.

Garman wants this to be a Trainium victory lap. It is one. But the deeper read is that Anthropic turned three competing silicon roadmaps into leverage against each other, and each hyperscaler is spending billions for the privilege of being one of three options.
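The memory-bound claim is the crux, and a quick roofline-style calculation shows why a chip can lose on TFLOPS and still win the workload. A minimal sketch; every number below is an illustrative placeholder, not a real Trainium2 or GB200 spec:

```python
# Roofline-style back-of-envelope: for a memory-bound workload, step time is
# limited by bytes moved, not FLOPs, so bandwidth-per-dollar decides the cost.
# All numbers are illustrative assumptions, not real chip specs.

def step_time_s(flops, bytes_moved, peak_flops, peak_bw):
    """Time for one training step = max(compute time, memory time)."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Hypothetical RL update: moves far more bytes (weights, KV caches,
# optimizer state) than it crunches FLOPs.
flops = 1e15          # 1 PFLOP of work per step (assumed)
bytes_moved = 2e13    # 20 TB of memory traffic per step (assumed)

chips = {
    # name: (peak FLOP/s, peak bytes/s, $/hour) -- all assumed
    "fast-compute chip": (2.5e15, 8e12, 12.0),
    "fast-memory chip":  (1.3e15, 1.0e13, 6.0),
}

for name, (pf, bw, dollars_per_hr) in chips.items():
    t = step_time_s(flops, bytes_moved, pf, bw)
    cost = dollars_per_hr / 3600 * t
    bound = "memory" if bytes_moved / bw > flops / pf else "compute"
    print(f"{name}: {t:.2f}s/step ({bound}-bound), ${cost:.4f}/step")
```

Under these assumed numbers both chips land on the memory side of the roofline, so the chip with roughly half the TFLOPS but better bandwidth per dollar finishes each step sooner and at less than half the cost per step. That is the trade the post describes.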

Steven Sinofsky on why it's hard for AI to diffuse through firms:

"Algorithmic thinking is really, really, really hard for the vast majority of people who have jobs… If you were to go into any person and ask them to create a flow chart for a particular thing that they have to go do, they would probably fail at producing that flow chart."

"So within any organization, say doing a marketing plan… one person probably understands and could document the flow chart. So if you put one of these agents or this coworking tool in front of people… their ability to explain to it what to do is really, really limited."

"You're basically just developing the next abstraction layer for how people interact… at each level of the abstraction layer, [it's] been a highly skilled, very specific individual within an organization… and then the little parts they build become little toollets… and some people can stitch together and some can't."

@stevesi

Introducing Claude Managed Agents: everything you need to build and deploy agents at scale. It pairs an agent harness tuned for performance with production infrastructure, so you can go from prototype to launch in days. Now in public beta on the Claude Platform.

This morning, I asked President Trump whether he's okay with the Iranians charging a toll for all ships that go through the Strait of Hormuz. He told me there may be a joint US-Iran venture to charge tolls: "We're thinking of doing it as a joint venture. It's a way of securing it — also securing it from lots of other people." "It's a beautiful thing"

The era of American hyper-impotence has definitively begun.

My biggest takeaways from @AnthropicAI's Head of Growth Amol Avasare:

1. Engineering is getting the most AI leverage—and it's squeezing PMs and designers. With Claude Code, a five-engineer team now produces the output of 15 to 20 engineers. But PM and design productivity haven't scaled proportionally. The result is a compressed ratio where one PM is effectively managing the output of a much larger engineering team. Anthropic's growth team is responding in two ways: hiring even more PMs (!), and formally deputizing product-minded engineers to act as mini-PMs for any project with less than two weeks of engineering time.

2. Anthropic is using Claude to automate its own growth. The internal initiative is called CASH (Claude Accelerates Sustainable Hypergrowth). It works across four stages: identifying opportunities, building features, testing quality, and analyzing results. Right now it handles copy changes and minor UI tweaks. The win rate is comparable to a junior PM with two to three years of experience, and improving rapidly.

3. The one part of PM work that AI can't automate yet: getting six people in a room to agree. Amol and his head of design joke that even with AGI, it'll still be impossible to align six stakeholders. Cross-functional coordination—managing opinions, navigating politics, mediating tradeoffs—remains the bottleneck that AI doesn't touch for larger projects. This is why Amol believes PM roles aren't going away, and may actually grow.

4. 60-80% of Anthropic's growth team's projects have no PRD. For smaller work, kickoffs happen on Slack—messages back and forth with product-minded engineers who can push back and ask the right questions. For larger projects, Amol believes in a proper 30-minute cross-functional kickoff (legal, safeguards, stakeholders) to surface concerns early.

5. Adding friction to onboarding drives growth—if the friction helps users understand why the product is for them. In his work at Mercury, MasterClass, Calm, and now Anthropic, adding steps to onboarding flows consistently improved conversion. The key: cut annoying friction that doesn't add value, but add friction that helps users understand why the product is for them.

6. AI companies need to focus on bigger bets, not better A/B tests. Amol's argument: if your core product value is driven by AI, then the future value is orders of magnitude higher than today's value, because model capabilities grow exponentially. In that world, micro-optimizations capture a shrinking share of a growing pie. Traditional growth teams do 60% to 70% small optimizations and 20% to 30% big swings. At Anthropic, they flip this ratio.

7. Amol built a weekly AI agent that scans Slack for cross-functional misalignment. Using Cowork with the Slack MCP, he has a scheduled task that looks across his projects and conversations and surfaces areas where teams are about to do overlapping work or pull in different directions. A colleague on the enterprise team already caught major misalignment that would have caused weeks of wasted effort. (A sketch of this pattern follows after this list.)

8. A traumatic brain injury taught Amol the principle that now drives his work: freedom through constraints. In early 2022, a kick to the head during a Muay Thai sparring session caused a traumatic brain injury. Amol spent nine months off work and months relearning to walk, unable to look at screens or listen to music for more than 20 seconds. He was re-injured a month after joining Mercury and had to take two more months off. He's still not fully healed. But the constraints—no alcohol, no caffeine, mandatory breaks, daily meditation—have become the habits that let him operate at the intensity Anthropic demands. "The true freedom in life is learning how to be content when you don't get what you want."
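Item 7 reduces to a simple pattern: pull recent messages, ask a model to flag overlap, post a digest on a schedule. A minimal sketch using the Anthropic Python SDK; the `fetch_recent_messages` helper is hypothetical (standing in for whatever the Slack MCP server returns), and the model id is a placeholder:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def fetch_recent_messages(channels: list[str], days: int = 7) -> str:
    """Hypothetical helper: in the real setup this text comes from the
    Slack MCP server. Wire it to Slack / an MCP client."""
    raise NotImplementedError

def misalignment_digest(channels: list[str]) -> str:
    """Ask the model to flag overlapping or diverging work in a week of chat."""
    transcript = fetch_recent_messages(channels)
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model id
        max_tokens=1024,
        system=(
            "You review a week of Slack discussion across teams. "
            "Flag places where two teams are planning overlapping work "
            "or pulling in different directions. Cite channels and people."
        ),
        messages=[{"role": "user", "content": transcript}],
    )
    return response.content[0].text

# Run weekly (cron, a scheduled task, etc.) and post the digest back to Slack.
```

The scheduling and MCP wiring in the original setup live in Cowork; the sketch only shows the review step.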

In an industry built on determinism, I feel we might be underestimating the work we'll all need to do with LLMs precisely because they are nondeterministic. For so much of automation and workflow tooling, determinism (i.e., "make sure it doesn't make a mistake") is a baseline expectation.
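One common way to bolt deterministic expectations onto a nondeterministic model is to validate every output against a schema and retry on failure, so the downstream workflow only ever sees well-formed results. A minimal sketch; `call_llm` is a hypothetical stand-in for any model call, and the two-key schema is an assumption:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any LLM call; returns raw text."""
    raise NotImplementedError

REQUIRED_KEYS = {"action", "target"}  # assumed schema for this workflow step

def run_step(prompt: str, max_retries: int = 3) -> dict:
    """Call the model until the output parses and matches the schema.
    The model stays nondeterministic; the workflow's contract does not."""
    last_error = None
    for attempt in range(max_retries):
        raw = call_llm(
            prompt if attempt == 0 else
            f"{prompt}\n\nPrevious output was invalid ({last_error}). "
            "Return ONLY a JSON object with keys: action, target."
        )
        try:
            data = json.loads(raw)
        except json.JSONDecodeError as e:
            last_error = f"not valid JSON: {e}"
            continue
        if not isinstance(data, dict):
            last_error = "top-level value is not an object"
            continue
        missing = REQUIRED_KEYS - data.keys()
        if missing:
            last_error = f"missing keys: {missing}"
            continue
        return data  # well-formed: downstream automation can rely on it
    raise RuntimeError(f"no valid output after {max_retries} tries: {last_error}")
```

This doesn't make the model deterministic; it bounds the blast radius so the rest of the pipeline can keep its deterministic assumptions.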

Committing sedition: a project for the left ➡️ youtu.be/mJWgSI7UWQI In this video, @gdelagasnerie questions what might seem the most obviously positive part of the democratic ideal: the notions of the city, of citizenship, of deliberation meant to arbitrate conflicts internal to a community. He asks the question: why do we live together at all?

I cheated on @cursor_ai with @OpenAI Codex and I liked it.

I'm a poor guy hungry for tokens, so when I don't get what I need from Cursor, I'm sometimes tempted to mess around with OpenCode or Windsurf (they've got Opus, which I like, and Gemini, which is useful for frontend). Never Claude Code — it's high-end shit, really expensive, way out of my league.

So when I met Codex, they had me with a free gift offer: double your usage for the first month. And it was exactly when I had to ship new stuff. Codex — right place, right time — we hooked up.

One day in, I lost my user data after two hours. To be honest, it felt like catching an STD while fucking with a rubber — it feels wrong. But since I'm really hungry for tokens and I really have to ship, I gave it another try. And when GPT-5.4 dropped… wow, it was another level of coding agent experience (CAE). I still had Cursor for terminal, browser, and file work, but it's been 3 days and I feel great with Codex.

Luckily for Cursor, my credits refill today, and I'm out of tokens in Codex — I burned through my weekly tokens in 3 intense days. I don't regret it.

Why did I like Codex over Cursor? I think it's the way they handle chat and messages — almost frictionless. It feels smooth to use. Cursor seems to use a lot of resources, and my computer is always at its max to make it work, like it's doing a Barry's HIIT session in the middle of a marathon.


