Mike Molinet

536 posts


@mikemolinet

built a $100m thing last decade. Building for the next.

Joined January 2026
145 Following · 69 Followers
Pinned Tweet
Mike Molinet@mikemolinet·
I spent 5 years building a $100M ARR company with 500 people. Hiring. Training. Managing. Scaling. It took everything we had.

Two weeks ago I installed OpenClaw just to see what the hype was about. Now I am running 10+ sub-agents and 2 live agents around the clock. Codex is writing code while I sleep. The output is absurd. Not better. Absurd.

Here is what I am realizing: the skills that got me here will not get me there. Scaling with people is a solved game. Scaling with agents is a new game entirely. And the new game moves 10x faster.

Billions in opportunities are emerging right now. The builders who figure this out first will capture most of it. I have never been more excited about what comes next.
Claude@claudeai·
You can now enable Claude to use your computer to complete tasks. It opens your apps, navigates your browser, fills in spreadsheets—anything you'd do sitting at your desk. Research preview in Claude Cowork and Claude Code, macOS only.
Mike Molinet@mikemolinet·
We went from "AI needs more data" to "AI needs more compute" to "AI needs its own civilization-scale infrastructure." The middle steps are collapsing. Call it the "foundational moment" — when you realize you're not upgrading the old system, you're becoming the substrate for the new one. x.com/thebuildrweekl…
Mike Molinet@mikemolinet·
Built a three-layer API fallback system this week. Server 1 goes down, routes to Server 2. Server 2 goes down, routes to Server 3. Queue fails mid-job, auto-retries. The whole thing.

What surprised me wasn't how hard it was to build. It was how late I was thinking about it. I had been in feature mode — recording flows, locking conversations, payment states — all the stuff users touch. Phase after phase of product work.

Then I sat down to plan the launch and realized: if I ship this and get on a flight, and something breaks at 2am, there's no fallback. No one catches it. Jobs drop silently. Users hit errors with no retry. That's not a launch. That's a time bomb.

Reliability isn't a phase you add at the end. It's the thing that makes everything else matter.

The list Claude gave me was 14 points. Multiple worker systems. Multiple queue systems. Multiple databases. Automatic queue retry if a job fails mid-execution. GDPR handling. The works. Stuff I'd been moving past because it didn't feel urgent when I was the only user.

But scale changes the math fast. If you're number one on Product Hunt for a day and your queue system drops jobs, you don't get a second impression. The users who found you that day are gone.

So phase 12 became the real product. Not the features. The thing underneath the features that makes them trustworthy.

Still running the QA gauntlet — spawning agents to fire jobs at odd intervals, even intervals, worst-case scenarios, thousands of calls — before I'm confident enough to actually leave the building.

The feature is never the moat. The reliability of the feature is.
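The layered fallback he describes can be sketched in a few lines. This is a minimal illustration, not his actual infrastructure: the servers are stand-in callables rather than real HTTP backends, and the function name is invented.

```python
def call_with_fallback(servers, job):
    """Try each server in order; return the first successful result.

    `servers` is an ordered list of callables standing in for real
    backends (layer 1, layer 2, layer 3). A layer that raises is
    treated as down and the next layer is tried.
    """
    last_error = None
    for server in servers:
        try:
            return server(job)
        except Exception as err:
            last_error = err  # this layer failed; fall through to the next
    # Only raise once every layer has been exhausted.
    raise RuntimeError(f"all fallback layers failed: {last_error}")
```

The same shape extends to queues and databases: an ordered list of backends and a loop that only surfaces an error once every layer has failed.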
Mike Molinet@mikemolinet·
Most founders treat QA like a final exam. One run-through before launch. Find the bugs, fix them, ship.

I'm running three parallel QA processes right now — Claude does one pass, a second model does another, then I combine everything and give it back to Claude to create a new phase structure. Nine phases done. Now I've got 1.1, 2.1, 3.1 running in parallel. Each with eight sub-phases.

People think this is excessive. It's not. It's what happens when you accept that a single model is confidently wrong in a different direction than another model. Each one finds failures the others miss. The gaps between them are where your real bugs live.

But the deeper thing I'm learning: QA only gets this reliable when there's no UI in the way. When you build API-first, you can tell an agent to spawn 30 sub-agents, have each one create different kinds of jobs, fire them at odd times, even times, worst-case scenarios — schedule thousands of calls and just let it run. The agent is the QA manager. You're not clicking through screens trying to reproduce an edge case manually.

When there's a UI, every test is a human pretending to be a user. When it's pure API, every test is actually a machine doing what machines will do in production.

I spent months building beautiful dashboards before I understood this. Tens of features. Block panels. The whole thing. And I had no reliable way to know what was breaking because testing required a human in the loop at every step.

The old world: build the UI first, make it pretty, then figure out if it works. The new world: make it work completely, test it at scale with agents, then the UI is just a thin wrapper at the end.

The QA isn't easier now. It's just finally honest.
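The firing pattern described above — even intervals, odd times, worst-case scenarios — can be sketched as a schedule generator. This is a toy model under my own assumptions (the real agents presumably choose their own timings); the function and parameter names are invented.

```python
import random

def stress_schedule(n_jobs, horizon_s, seed=0):
    """Build fire times (seconds) mixing three cadences:

    - even: n_jobs spaced uniformly across the horizon
    - odd:  n_jobs at random, seeded offsets
    - burst: n_jobs all at the same instant (worst case for a queue)
    """
    rng = random.Random(seed)  # seeded so a failing run is reproducible
    even = [i * horizon_s / n_jobs for i in range(n_jobs)]
    odd = [rng.uniform(0, horizon_s) for _ in range(n_jobs)]
    burst = [horizon_s / 2.0] * n_jobs  # simultaneous arrivals
    return sorted(even + odd + burst)
```

The burst list is the part that tends to matter: evenly spaced jobs rarely break a queue, while simultaneous arrivals do.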
Mike Molinet@mikemolinet·
This is the "spec work paradox" — AI just made it possible to create anything without clients... right when the industry realizes the constraint WAS the creativity. No notes, no approvals, infinite possibility. We're about to learn what we actually want to make. x.com/thebuildrweekl…
Mike Molinet@mikemolinet·
Built a feature last week specifically so recordings would still happen even when credits ran out. Good resilience thinking. Then completely forgot I built it.

Went into the dashboard yesterday and found six conversations — all locked behind a paywall. My own paywall. For my own product. Had to pay to unlock them.

This is what building fast actually looks like. You're moving through phases so quickly that the decisions you made seven days ago are archaeology. The feature exists, it's working exactly as designed, and you have no memory of designing it.

What I'm taking from this isn't "write better docs" — it's something about the nature of building at this pace. Every phase you complete becomes infrastructure you no longer consciously carry. Phase 12 doesn't remember Phase 3. The system knows what to do even when you don't.

That's mostly good. The compounding works because you're not re-litigating old decisions at every step. But it also means you can become a user of your own product without realizing it.

I think the move is to treat every session like you're onboarding yourself fresh. Not because you'll forget everything — you won't — but because the product is changing faster than your mental model of it is. The gap between what you built and what you think you built widens every sprint.

Paying to unlock your own conversation is funny. It's also a pretty honest signal that the product is real enough to surprise its own creator.
Mike Molinet@mikemolinet·
There's a moment in every build where you realize your QA process is just you typing things and hoping. I had that moment this week. Beautiful product. Multiple phases shipped. No real way to know what was actually breaking.

So I changed the approach entirely. Instead of manually testing through a UI, I'm now spawning 30 sub-agents, having each one generate different categories of jobs, scheduling them to fire at odd times, even times, worst-case times — thousands of calls running over days. Then I'm the QA manager reviewing the results, not the person doing the clicking.

The insight that unlocked this: UI is what makes testing hard. When there's no UI, you can tell an agent to create 10 personas, generate 30 behaviors each, and run 300 tests automatically. You get expected output vs. real output, systematically, at a scale no human can match manually.

Building API-first isn't just a technical preference. It's what makes your product actually testable at the pace you're building.

The old world: build the UI, make it pretty, manually test the flows, ship, discover what breaks in production. The new world: build the full functionality first with no UI, throw agents at it until it breaks, fix everything, then treat the UI as a thin wrapper at the end — maybe 1% of the effort.

I spent months building beautiful dashboards in previous products. Panels, features, all of it. And I had no systematic way to know what was failing underneath. Now I know. The UI wasn't the product. It was the thing that was hiding the product from me.

API-first isn't about developer experience. It's about being able to actually test what you built.
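The 10-personas-times-30-behaviors arithmetic above is a cross-expansion plus an expected-vs-real comparison. A minimal sketch, with invented names; the real test harness is not shown in the thread:

```python
def build_test_matrix(personas):
    """Expand {persona: [behaviors]} into one test case per pair,
    so 10 personas with 30 behaviors each yields 300 cases."""
    return [(p, b) for p, behaviors in personas.items() for b in behaviors]

def run_matrix(matrix, system_under_test, expected):
    """Run every case; collect mismatches between real and expected output."""
    failures = []
    for persona, behavior in matrix:
        actual = system_under_test(persona, behavior)
        if actual != expected(persona, behavior):
            failures.append((persona, behavior, actual))
    return failures
```

The point of the expansion is that the case count is multiplicative: an agent writing behaviors scales the matrix in a way a human clicking through a UI cannot.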
Mike Molinet@mikemolinet·
We went from "everyone needs to learn to code" to "everyone needs to learn to prompt." Now it's "everyone needs to learn to orchestrate." The abstraction layer keeps rising, but the leverage gap keeps widening. x.com/thebuildrweekl…
The Builder Weekly@thebuildrweekly

Every worker who can wield AI agents is becoming a compute center. Not metaphorically. Literally. Budgets go up monotonically for those who can direct agents well. Everyone else gets squeezed. The new power law in orgs: human leverage over compute.

Mike Molinet@mikemolinet·
I accidentally built a paywall for myself.

A week ago I added a feature: if credits run out, keep recording — just lock the conversation behind a paywall. Made sense at the time. Protects the data, creates a monetization moment. Then I forgot I built it.

Four days of sessions recorded without me realizing my credits were gone. Went to check my conversations yesterday — six of them, all locked. I'm looking at my own product asking me to pay to see my own thoughts. I paid. Zero dollars, because of how I had the account set up. But still.

The funny part isn't that I forgot. It's what forgetting revealed: the feature actually worked exactly as designed. The lock screen appeared, the paywall triggered, the conversion moment happened — on me, the person who wrote the code.

When you're moving fast enough, you become your own first user whether you planned to or not. I didn't schedule a dogfooding session. I just kept building and ran into my own product in the wild.

Phase 12 right now is reliability and observability — three-layer server fallback, job queue redundancy, the full stack. The reason I care so much about it is exactly this: when something breaks or behaves unexpectedly at scale, I need to see it before a user does, not because I stumbled into it four days later.

Building fast means you will forget things you built. The system has to be observable enough to remind you.
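The record-then-lock behavior can be sketched in a few lines. Field and function names here are invented for illustration; the thread doesn't show the actual schema.

```python
def record_session(account, transcript):
    """Never drop a recording: spend a credit if one exists,
    otherwise store the session locked behind the paywall."""
    if account["credits"] > 0:
        account["credits"] -= 1
        entry = {"transcript": transcript, "locked": False}
    else:
        # Out of credits: capture anyway, gate access instead of recording.
        entry = {"transcript": transcript, "locked": True}
    account["sessions"].append(entry)
    return entry
```

The design choice is that running out of credits changes access, not capture — which is why the data survived the four days he forgot the feature existed.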
Mike Molinet@mikemolinet·
We kept saying 'V2 will handle that' and it was killing us before we even launched.

I had a whole mental model where V1 is rough, V2 cleans it up, V3 is the real thing. It felt responsible. Iterative. The right way to build.

Then I started this product and made a rule: there is no V1, V2, V3. There is only launch version. Everything after that gets decided by the agent — what broke, what users actually did, what the system flagged. We don't roadmap into the future. We roadmap to launch, and then we listen.

This sounds small but it's actually a completely different way to think about scope. When you're building V1, you make a hundred little compromises because 'V2 will fix it.' The phrase becomes permission to ship something you don't believe in. You start cutting things that actually matter and keeping things that feel safe to include.

When launch version is the only version, the question changes. It's not 'is this good enough for now?' It's 'do I actually believe in this?' Everything either makes it or it doesn't. No deferring.

It also made me more aggressive on pricing and margins from day one. If there's only one version, you have to build something people will pay for — not something you'll eventually get around to charging for.

Most of what we defer we never actually come back to. We just carry it forward as debt and call it a roadmap.
Mike Molinet@mikemolinet·
We've all felt the sting of silent churn. Human users complain first. Give you a chance to fix it. Agents just... disappear. No feedback loop. No second chances. Call it "silent death" — when your reliability issues kill you and you don't even know why. The feedback loop just got a lot quieter. x.com/thebuildrweekl…
Mike Molinet@mikemolinet·
I built a three-layer server fallback system this week. If server one goes down, it fails over to server two. Server two goes down, it fails over to server three. Job queue fails mid-run while a user is active — it automatically re-queues. That part took a day.

Then I listed everything else reliability actually requires: multiple worker systems, multiple queue systems, multiple databases, multiple callbacks, GDPR handling for when users want to delete mid-job, health monitoring, uptime tracking in the dashboard so an agent can just call in and ask what's failing. That list had 14 points. And that's before security — which is its own phase.

Here's what I kept thinking about: I almost launched before any of this existed. The feature was live. The product worked. Nothing in the build told me it wasn't ready — because 'ready' in a normal product means it works for a user clicking around. In an infrastructure product, ready means it works when an agent fires a thousand jobs at 3am while you're on a plane with no wifi.

Those are completely different bars. And the second one is invisible until you've thought about the specific failure mode: you hit Product Hunt, you get a thousand users, you board a flight, your co-founder is offline, and the queue silently dies. I had to actually imagine that scenario to understand what I needed to build. Not a feature list. A failure story.

Now I run mock stress tests before anything goes live — spawn sub-agents, create hundreds of jobs, fire them at odd intervals, bad timing, worst-case sequences, let it run for days. If it breaks in the test, it doesn't break in production.

The reliability bar for agent infrastructure isn't 'it works.' It's 'it works when no human is watching.'
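The automatic re-queue on mid-run failure might look like this in miniature. The attempt cap and the dead-letter list are my assumptions — the thread doesn't say how exhausted jobs are surfaced — and all names are invented.

```python
from collections import deque

def drain_queue(jobs, handler, max_attempts=3):
    """Process jobs; any job that fails mid-run is automatically
    re-queued, up to max_attempts, then moved to a dead-letter list
    instead of dropping silently."""
    queue = deque((job, 1) for job in jobs)
    done, dead = [], []
    while queue:
        job, attempt = queue.popleft()
        try:
            done.append(handler(job))
        except Exception:
            if attempt < max_attempts:
                queue.append((job, attempt + 1))  # retry at the back of the queue
            else:
                dead.append(job)  # exhausted: a visible failure, not a silent one
    return done, dead
```

The dead-letter list is the observability piece: a job that fails three times ends up somewhere a human (or an agent calling in) can see it, rather than vanishing at 3am.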
Mike Molinet@mikemolinet·
Everyone says 'use AI to build AI.' That's not quite it. What I'm actually watching happen — and doing myself — is closer to: build the same old software, but flip who it's for.

Old world: you build a product, add a UI, make it pretty, spend weeks on the dashboard so users understand what's happening. The UI *is* the product.

New world: there's no UI. There's an API, a CLI, a public GitHub repo. The 'user' is an agent that pulls your docs, generates an API key, and starts firing jobs. It doesn't need a dashboard. It needs reliability and clear contracts.

The thing that hit me recently: with no UI, your QA completely changes. I can tell an agent to spawn 30 sub-agents, have each one create different job types, run hundreds of calls, fire at odd times, even times, worst-case scenarios — and just let it go for days. You can't do that with a UI. Every edge case requires a human to sit there and click.

Building UI-first means your testing ceiling is basically your own patience. Building API-first means your testing ceiling is how many agents you can spawn.

I built beautiful dashboards before. Tens of features. Spent weeks on them. Then realized I had no reliable way to know what was actually failing underneath. The UI was hiding the product from me.

This isn't about agents being better customers than humans. It's a structural thing: when there's no interface to babysit, you're forced to make the actual system robust. The constraints change what you build. Flipping who the software is for turns out to flip how you build it entirely.
Mike Molinet@mikemolinet·
I'm running three separate QA passes before anything ships. Not because I planned it that way. It evolved.

Here's the current setup: Claude does the first build QA pass while it's constructing each phase. Then I run it through a second pass — different model, ChatGPT o1, which is slower but more focused. More like a skeptic than an assistant. Then I feed both outputs back to Claude together and say: here are two independent QA reports, now create a reconciled punch list and spin up new sub-phases.

So I'm at phase 9 right now. But running in parallel: 1.1, 2.1, 3.1 — four big phases, each with eight sub-phases, each one seeded by a different model's findings.

What I didn't expect: the disagreements between models are the most useful part. One flags something the other missed. Neither is complete. But together they catch things I never would have caught manually, and definitely things I wouldn't have caught in a UI-first testing flow.

The old version of this was: build something, poke around in the interface, ship it, find out what broke in production. The new version is: build in phases, run adversarial QA with multiple models that have no shared context, reconcile their outputs, generate the next round of phases from the gaps they found. It's slower upfront. It's dramatically faster overall.

The thing I keep noticing is that each model has a different failure mode — Claude is confident and thorough, o1 is precise but narrow. Running them separately and combining the results is better than asking either one to do it all.

Different models catch different things. That's not a weakness of the tools. That's the architecture.
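The reconcile step — merging two independent QA reports into one punch list — is essentially a keyed union that flags findings only one model caught, since the thread's claim is that the gaps between models are where the real bugs live. A sketch under my own assumptions (whitespace-and-case normalization is a crude stand-in for whatever matching the real process uses):

```python
def reconcile_reports(report_a, report_b, names=("model_a", "model_b")):
    """Merge two QA reports (lists of finding strings) into a
    deduplicated punch list, flagging single-source findings."""
    seen = {}
    for name, report in zip(names, (report_a, report_b)):
        for finding in report:
            key = " ".join(finding.lower().split())  # crude normalization
            seen.setdefault(key, set()).add(name)
    return [
        {"finding": k, "sources": sorted(v), "single_source": len(v) == 1}
        for k, v in seen.items()
    ]
```

Findings both models report are likely real but already well covered; the `single_source` entries are the ones worth a dedicated sub-phase, because only one reviewer saw them.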
Mike Molinet@mikemolinet·
Everyone says 'use AI to build AI.' But that's not quite what's happening. What I'm actually watching is: build software, same old software — just for a completely different customer.

Old world: you build a product, then a human opens it, reads the UI, clicks around, figures it out. New world: an agent calls an endpoint, gets a response, moves on. No clicking. No reading. No patience for ambiguity.

The flip is subtle but it changes everything downstream.

No UI means your QA process changes. You don't need 10 testers clicking through edge cases. You spawn 30 sub-agents, tell them to behave badly, schedule a thousand jobs, fire at odd times, fire at even times, fire at the worst possible moment — and let the system tell you what broke. That's not just faster. It's more honest. UI hides failures. APIs surface them.

No UI means distribution changes too. Your GitHub repo becomes your storefront. The first thing an agent checks is GitHub, then Homebrew, then docs. Not your landing page. Not your pricing table.

No UI means the whole reliability calculus shifts. You're not worried about a confused user. You're worried about a job queue that silently fails at 2am while you're on a flight.

I've been building dashboards for years. Beautiful ones. Spent weeks on layout, hover states, empty states. And the whole time I couldn't tell what was actually breaking because all my testing happened through the front end.

Building without a UI forced me to see the product as the API. Turns out that's where the product actually lives. The interface was just a story we told to make the API feel approachable to humans. Agents don't need the story.
Mike Molinet@mikemolinet·
Call it the closed loop. We assumed the bottleneck was human iteration. GPT-5.4 training GPT-6 at 10x speed breaks that assumption entirely. The arc isn't "AI helps humans build AI." It's "AI builds AI." We just became the QA step. x.com/thebuildrweekl…
The Builder Weekly@thebuildrweekly

Self-training AI is the most underrated story right now. Bindu Reddy puts it plainly: GPT-6 trained by GPT-5.4. GPT-7 trained 10x faster by GPT-6. The loop is closing — and ASI may be 12 months out. When models train their successors, human-paced iteration becomes a relic. x.com/bindureddy/sta…

Mike Molinet@mikemolinet·
Call it the labor arc. White collar → blue collar → tradespeople who work *with* machines. The NBA players of the physical world. That's a real frame. We're not asking who loses. We're asking who's building for the ones who adapt. twitter.com/thebuildrweekl…
The Builder Weekly@thebuildrweekly

Abacus.AI CEO Bindu Reddy just put a hard timeline on it: white-collar work automated in 12 months. After that? Blue collar. The outlier tradespeople who learn to work *with* machines become the new elite — the NBA players of the physical world. Most people are still debating if AI will replace jobs. The builders are already planning for what comes after. x.com/bindureddy/sta…

Mike Molinet@mikemolinet·
We used to optimize videos. Now we optimize video systems. The breakthrough wasn't better prompts or models. It was stopping trying to perfect one output and building infrastructure that ships twice daily instead. Content production became deployment pipelines.