
Today, we’re launching Parsed. We are incredibly lucky to live in a world where we stand on the shoulders of giants, first in science and now in AI. Our heroes have gotten us to this point, where we have brilliant general intelligence in our pocket. But this is a local minimum. We now have a burgeoning ecosystem of tasks where each requires a different kind of intelligence, a different context, a whole host of implicit assumptions, latent knowledge and domain expertise that is very difficult to cram into a system prompt.

The big labs want you renting their $50k/month amnesiac interns that forget everything between conversations: generic behemoths that get quantised, versioned and deprecated behind the scenes, where the only element of control you have is your messy monolithic user prompt.

We want people who need their own intelligence to be able to not only access it, but also control it. And whilst the big general models are unbelievably good chatbots, coding agents and purveyors of world knowledge, specialisation of intelligence is required: clinical scribes, marketing compliance agents, legal red-lining models, insurance policy recommenders, the list goes on.

And so that’s what Parsed does: deploy your own frontier model that actually learns. We eval your specific task, build a custom evaluation harness, optimise a model just for you, and host it with continual learning. We bake all the context and knowledge of your task into the model itself, from your engineers to your domain experts to customer feedback, all in a tight SFT → RL loop, with useful interpretability made possible by the open-source ecosystem we build on top of.

No more 2000-word prompts with seventeen "IMPORTANT: NEVER DO X" clauses. Your model gets better at YOUR job every single day; the amnesiac pseudo-gods have had their run. Your model, your data, your moat. Let's build 🫡
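To make the shape of that loop concrete, here is a deliberately toy sketch of the cycle described above: fine-tune on new feedback, score against a task-specific evaluation harness, then reinforce against that score. Every name here (`TaskModel`, `eval_harness`, `continual_learning_loop`) is an illustrative invention for this post, not Parsed's actual API, and the "training" is a scalar stand-in rather than real gradient updates.

```python
# Hypothetical sketch of a tight SFT -> RL continual-learning loop.
# All names are illustrative inventions; the "model" is a single scalar
# 'skill' standing in for real weights.
from dataclasses import dataclass, field


@dataclass
class TaskModel:
    """Toy stand-in for a task-specialised model."""
    skill: float = 0.0
    history: list = field(default_factory=list)

    def sft_step(self, examples):
        # Supervised fine-tuning: bake labelled domain examples into the model.
        self.skill += 0.1 * len(examples)
        self.history.append(("sft", len(examples)))

    def rl_step(self, reward):
        # RL against the harness score: reinforce behaviour it rates well.
        self.skill += 0.05 * reward
        self.history.append(("rl", reward))


def eval_harness(model):
    # Custom evaluation harness: scores the model on the specific task.
    return min(model.skill, 1.0)


def continual_learning_loop(model, feedback_batches):
    """One SFT -> eval -> RL cycle per incoming batch of feedback."""
    scores = []
    for batch in feedback_batches:
        model.sft_step(batch)                # fold new knowledge in
        scores.append(eval_harness(model))   # re-evaluate on the task
        model.rl_step(scores[-1])            # reinforce against the score
    return scores


model = TaskModel()
scores = continual_learning_loop(model, [["ex1", "ex2"], ["ex3"]])
# With this toy update rule the harness score never decreases across cycles.
assert scores == sorted(scores)
```

The point of the sketch is only the shape of the feedback cycle: each batch of engineer, domain-expert or customer feedback passes through the same SFT → eval → RL path, so the model's task score compounds rather than resetting between conversations.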















