
Larry Ellison just exposed a fundamental truth about AI: models like Claude, ChatGPT, Gemini, and Grok are all trained on the same public internet data. When the input is the same, the output converges.

The moat isn’t the model. It’s the data. Companies training on proprietary datasets have an advantage competitors cannot replicate.

That’s why the “software apocalypse” narrative is wrong. And why $HIMS, $DUOL, $LMND, $META — companies that own unique, real-world data — will keep pulling ahead. Not because their models are better. But because their data is.

















