
You just raised $5M to build someone else's moat.

Here's what I mean. Most AI startups I meet are competing on the wrong axis. They're obsessing over model choice — Claude vs. GPT vs. Gemini. They're fine-tuning on domain data. They're building slick interfaces on top of state-of-the-art APIs. None of that is a moat. All of it can be replicated in weeks.

The founders I'm most excited about are competing on a completely different dimension: time. Every session a user spends inside a well-architected AI system is a deposit. The system learns their editing patterns, their risk tolerance, their preferences — implicitly, without being told. After six months of daily use, that system knows how you work in ways you couldn't fully articulate yourself. That's not a product feature. That's a compounding asset.

The architectural decision that separates these two worlds is simpler than most founders think: stateful vs. stateless agents. A stateless agent resets after every session — all that signal, discarded. A long-running agent retains it, learns from it, gets harder to replace every single week. The switching cost of a great stateless AI product is zero. The switching cost of a great stateful one, after two years, is enormous — not because of contracts, but because leaving means starting over.

I've written a full framework on this — covering the four depths of personalisation, the three RL signals that drive compounding, and where the research frontier is heading. Link in the comments.

One question for founders building in this space: are you designing for state accumulation from day one — or is that an afterthought?
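The stateful-vs-stateless split can be made concrete with a minimal sketch. Everything here is hypothetical (the class names, the `run_session` method, and the in-memory dict standing in for a persistent per-user state store); it only illustrates the structural difference, not any particular product's architecture:

```python
from dataclasses import dataclass, field


@dataclass
class StatelessAgent:
    """Resets after every session: learned signal is discarded."""

    def run_session(self, user_id: str, feedback: dict) -> dict:
        prefs: dict = {}        # fresh, empty state every session
        prefs.update(feedback)  # signal captured during the session...
        return prefs            # ...then lost when the session ends


@dataclass
class StatefulAgent:
    """Retains per-user signal across sessions (the compounding asset)."""

    # Hypothetical stand-in for a durable user-state store (DB, vector index, etc.)
    store: dict = field(default_factory=dict)

    def run_session(self, user_id: str, feedback: dict) -> dict:
        prefs = self.store.setdefault(user_id, {})  # resume accumulated state
        prefs.update(feedback)                      # each session is a deposit
        return prefs


agent = StatefulAgent()
agent.run_session("u1", {"tone": "terse"})
state = agent.run_session("u1", {"risk": "low"})
# The second session still knows what the first one learned.
```

The switching-cost argument falls out of the `store`: with the stateless version, a competitor starts level with you; with the stateful one, a user who leaves abandons every accumulated preference.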






















