
Dilip Thomas Ittyera
@dilipti
Trust Layer for Enterprise Agents
Cursor CPO @sualehasif996, Turbopuffer CEO @Sirupsen, Notion AI Engineer @akm_io, and Braintrust CEO @ankrgyl. Great discussion on all things semantic search and, honestly, the hard parts of AI engineering. Some spicy things said:
- “in my entire career nobody uses knowledge graphs”
- “turbopuffer will run every sql query”
- some cool details on how Cursor uses turbopuffer





One of the biggest questions when building AI Agents is how to build a long-term moat. Beyond distribution, the AI Agent with the most context about the problem it’s trying to solve will have the greatest differentiation, and thus moat. Context is king for AI Agents.

But when accumulating AI Agent context, the important thing is not to think of this as a static system. We can think of building context as a flywheel, where improvements in each component improve the AI agent’s effectiveness over time, while also adding more data and workflow knowledge to maintain stickiness. The order of the flywheel doesn’t matter much, but the general direction is:

1. A better understanding of the domain and workflow (e.g., specific job instructions, knowledge of business processes) leads to better use of tools and interaction with other systems (e.g., invoking other tools, understanding how other software calls the agent).
2. This, in turn, results in more relevant use and indexing of corporate data and knowledge (e.g., code repositories, contracts, customer data), which ensures more successful user outcomes.
3. Successful outcomes drive increased activity, which builds user memory and further deepens domain and workflow understanding, creating a virtuous cycle.

The key is to optimize each step in the flywheel to drive the most context over time. Each node in the flywheel has a set of inputs (and even sub-flywheels) that can be optimized. Better domain understanding through proprietary data makes the Agent’s decisions more relevant. More agentic integrations lead to better tool use, which is why AI interoperability is so important. More distribution and better end-user experiences drive more user activity. And so on.

All the while, outside of this direct AI Agent flywheel, the models themselves keep getting better, which means more and more capability gets packed into the model itself, making each of these steps more effective.
If you’re building AI Agents, it’s incredibly important to figure out how each of these steps can be tuned to drive as much context as possible. Every improvement here will lead to more differentiation and ultimately a bigger moat.
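The compounding loop described above can be sketched as a toy model. Everything here is illustrative: the stage names paraphrase the flywheel steps, and the per-stage gain rates are made-up numbers, not measurements from any real system.

```python
from dataclasses import dataclass, field
from typing import Dict

# Illustrative stages of the context flywheel, in loop order.
STAGES = [
    "domain_and_workflow_understanding",
    "tool_use_and_integrations",
    "data_indexing_and_knowledge",
    "successful_user_outcomes",
    "user_activity_and_memory",
]

@dataclass
class ContextFlywheel:
    """Toy model: each pass through the loop compounds accumulated context."""
    context: float = 1.0
    # Hypothetical per-stage improvement rates; tuning any single stage
    # speeds up the whole loop, which is the point of the flywheel.
    gains: Dict[str, float] = field(
        default_factory=lambda: {s: 1.05 for s in STAGES}
    )

    def spin(self, turns: int = 1) -> float:
        for _ in range(turns):
            for stage in STAGES:
                self.context *= self.gains[stage]
        return self.context

fw = ContextFlywheel()
print(round(fw.spin(10), 2))  # context compounds over repeated turns
```

The multiplicative structure is the whole argument: a small improvement at any one node (say, better proprietary data raising one gain factor) lifts every subsequent turn of the loop.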

It's 2025 and most content is still written for humans instead of LLMs. 99.9% of attention is about to be LLM attention, not human attention. E.g. 99% of libraries still have docs that basically render to some pretty .html static pages assuming a human will click through them. In 2025 the docs should be a single your_project.md text file that is intended to go into the context window of an LLM. Repeat for everything.
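As a minimal sketch of that idea, here is one way to flatten a docs tree into a single Markdown file destined for an LLM's context window. The directory layout and the `your_project.md` output name are placeholders from the post, not any real project's convention.

```python
from pathlib import Path

def bundle_docs(docs_dir: str, out_file: str = "your_project.md") -> str:
    """Concatenate every Markdown page under docs_dir into one text file,
    with an HTML comment marking where each source page begins."""
    parts = []
    for page in sorted(Path(docs_dir).rglob("*.md")):
        rel = page.relative_to(docs_dir)
        parts.append(f"\n\n<!-- source: {rel} -->\n\n")
        parts.append(page.read_text(encoding="utf-8"))
    bundled = "".join(parts).strip() + "\n"
    Path(out_file).write_text(bundled, encoding="utf-8")
    return bundled
```

The resulting single file goes straight into the model's context, instead of asking the model (or its retrieval layer) to click through rendered HTML pages.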

We’re making Deep Research available as an endpoint to all developers through the Perplexity Sonar API to help people build their custom research agents and workflows! Excited to see what people are going to build using this!
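A hedged sketch of calling that endpoint: Perplexity's Sonar API speaks the OpenAI-style chat-completions protocol, but the exact model identifier for Deep Research (`sonar-deep-research` below) and the system-prompt wording are assumptions to verify against the current API docs.

```python
import json
import os
import urllib.request

API_URL = "https://api.perplexity.ai/chat/completions"

def build_request(question: str, model: str = "sonar-deep-research") -> dict:
    """Build a chat-completions payload for a deep-research query.
    The model name is an assumption; check Perplexity's docs."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Produce a thorough, well-cited research report."},
            {"role": "user", "content": question},
        ],
    }

def deep_research(question: str) -> str:
    # Requires PERPLEXITY_API_KEY in the environment; this makes a real
    # network call, so it is not exercised here.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(question)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the payload is plain chat-completions JSON, the same request shape slots into existing agent frameworks that already speak that protocol.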




