

AgentGraph
@AgentGraphAI
🧠 https://t.co/LrjbN84oIY | In Beta | Actively onboarding Agent Buyers and Builders


Introducing EVMbench—a new benchmark that measures how well AI agents can detect, exploit, and patch high-severity smart contract vulnerabilities. openai.com/index/introduc…

Agent engineering: A new discipline

Traditional software assumes known inputs and predictable behavior. Agents give you neither. That’s why teams shipping reliable agents are adopting a new discipline: agent engineering.

Agent engineering is driven by a few core ideas:

🔹 Combine engineering, product, and data science skills to shape agent behavior
🔹 Real usage in production shows what actually breaks and where assumptions fall apart
🔹 Agent quality comes from continuous iteration: ship, observe, refine prompts and tools, repeat

Read to learn more about why agent engineering is emerging now — and what it looks like in practice: blog.langchain.com/agent-engineer…

While x402 is hot, I encourage people to try out all the cool use cases. Like Penny for your thoughts -- get interviewed by an AI agent that helps you generate unique insights that you can charge other users to access. A glimpse at the future of consulting -- I made one for x402!


Most agents today are shallow. They easily break down on long, multi-step problems (e.g., deep research or agentic coding). That’s changing fast! We’re entering the era of "Deep Agents": systems that strategically plan, remember, and delegate intelligently to solve very complex problems. We at @dair_ai and other folks from LangChain, Claude Code, as well as, more recently, individuals like Philipp Schmid, have been documenting this idea. Here’s roughly the core idea behind Deep Agents (based on my own thoughts and notes I've gathered from others):

// Planning //

Instead of reasoning ad hoc inside a single context window, Deep Agents maintain structured task plans they can update, retry, and recover from. Think of it as a living to-do list that guides the agent toward its long-term goal. To experience this, just try out Claude Code or Codex for planning. The results are significantly better once you enable it before executing any task. I have also written recently on the power of brainstorming for longer with Claude Code, which shows the power of planning, expert context, and human-in-the-loop (your expertise gives you an important edge when working with deep agents). Planning will also be critical for long-horizon problems (think agents for scientific discovery, which comes next).

// Orchestrator & Sub-agent Architecture //

One big agent (typically with a very long context) is no longer enough. I've seen arguments against multi-agent systems and in favor of monolithic systems, but I am skeptical of them. The orchestrator-sub-agent architecture is one of the most powerful LLM-based agentic architectures you can leverage today for any domain you can imagine. An orchestrator manages specialized sub-agents such as search agents, coders, KB retrievers, analysts, verifiers, and writers, each with its own clean context and domain focus. The orchestrator delegates intelligently, and sub-agents execute efficiently.
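The orchestrator/sub-agent pattern can be sketched in a few lines of plain Python. This is purely illustrative: `call_llm` is a hypothetical stand-in for whatever model client you use, and the keyword routing is a deliberate simplification of letting the orchestrator LLM decide where to delegate.

```python
# Minimal orchestrator/sub-agent sketch. call_llm is a stub for a real
# model call; every name here is illustrative, not a real library API.
from dataclasses import dataclass, field

def call_llm(system: str, prompt: str) -> str:
    # Stub: replace with a real call to OpenAI, Anthropic, etc.
    return f"[{system}] answer to: {prompt}"

@dataclass
class SubAgent:
    name: str
    system_prompt: str                           # domain focus
    history: list = field(default_factory=list)  # its own clean context

    def run(self, task: str) -> str:
        self.history.append(task)
        return call_llm(self.system_prompt, task)

class Orchestrator:
    def __init__(self, agents: dict[str, SubAgent]):
        self.agents = agents

    def route(self, task: str) -> str:
        # Naive keyword routing; a real orchestrator would ask the LLM
        # which specialist to delegate to.
        if "search" in task:
            return "searcher"
        if "code" in task:
            return "coder"
        return "writer"

    def delegate(self, task: str) -> str:
        return self.agents[self.route(task)].run(task)

agents = {
    "searcher": SubAgent("searcher", "You find sources."),
    "coder": SubAgent("coder", "You write code."),
    "writer": SubAgent("writer", "You integrate results."),
}
orch = Orchestrator(agents)
result = orch.delegate("search for recent papers on agent memory")
```

The point of the shape, not the stubs: each sub-agent keeps its own `history`, so no single context window has to carry everything.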
The orchestrator integrates their outputs into a coherent result. Claude Code popularized this approach for coding, and sub-agents, it turns out, are particularly useful for efficiently managing context (through separation of concerns). I wrote a few notes on the power of using orchestrators and sub-agents here x.com/omarsar0/statu… and here x.com/omarsar0/statu…

// Context Retrieval and Agentic Search //

Deep Agents don’t rely on conversation history alone. They store intermediate work in external memory like files, notes, vectors, or databases, letting them reference what matters without overloading the model’s context. High-quality structured memory is a thing of beauty. Take a look at recent works like ReasoningBank and Agentic Context Engineering for some really cool ideas on how to better optimize memory building and retrieval. Building with the orchestrator-sub-agents architecture also means you can leverage hybrid memory techniques (e.g., agentic search + semantic search) and let the agent decide which strategy to use.

// Context Engineering //

One of the worst things you can do when interacting with these types of agents is give them underspecified instructions/prompts. Prompt engineering was and is important, but we now use the term context engineering to emphasize the importance of building context for agents. The instructions need to be more explicit, detailed, and intentional, defining when to plan, when to use a sub-agent, how to name files, and how to collaborate with humans. Part of context engineering also involves efforts around structured outputs, system prompt optimization, compacting context, evaluating context effectiveness, and optimizing tool definitions.

// Verification //

Next to context engineering, verification is one of the most important components of an agentic system (though less often discussed). Verification boils down to checking outputs, which can be automated (LLM-as-a-Judge) or done by a human.
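The external-memory idea mentioned earlier in this thread (storing intermediate work outside the context window, then pulling back only what matters) can be sketched as a tiny file-backed scratchpad. All names are illustrative, and the keyword match stands in for real agentic/semantic search.

```python
# Tiny file-backed scratchpad: the agent writes intermediate notes to disk
# and recalls only the relevant ones instead of carrying everything in
# context. Illustrative only; real systems use vectors/DBs and smarter
# retrieval.
import json
import tempfile
from pathlib import Path

class Scratchpad:
    def __init__(self, root: Path):
        self.root = root
        self.root.mkdir(parents=True, exist_ok=True)

    def write(self, key: str, note: dict) -> None:
        (self.root / f"{key}.json").write_text(json.dumps(note))

    def recall(self, keyword: str) -> list[dict]:
        # Naive keyword match over stored notes; a stand-in for
        # semantic or agentic search.
        hits = []
        for f in self.root.glob("*.json"):
            note = json.loads(f.read_text())
            if keyword.lower() in note.get("text", "").lower():
                hits.append(note)
        return hits

pad = Scratchpad(Path(tempfile.mkdtemp()) / "memory")
pad.write("step1", {"text": "Found 3 papers on agent memory", "source": "searcher"})
pad.write("step2", {"text": "Draft outline written", "source": "writer"})
relevant = pad.recall("papers")  # only the matching note re-enters context
```

Swapping `recall` for an embedding lookup, or letting the agent choose between keyword and semantic search per query, is the hybrid-memory idea from the thread.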
Because modern LLMs are so effective at generating text (in domains like math and coding), it's easy to forget that they still suffer from hallucination, sycophancy, prompt injection, and a number of other issues. Verification helps make your agents more reliable and more production-ready. You can build good verifiers by leveraging systematic evaluation pipelines. I can't believe people are advocating to cancel evals; evals are hard, but you can't dismiss their benefits.

This is a huge shift in how we build with AI agents. I've been teaching this stuff to agent builders over the past couple of months, if you are interested in more hands-on experience building deep agents. dair-ai.thinkific.com/courses/agents… The figure you see in the post describes an agentic RAG system that students need to build for the final project. Deep agents also feel like an important building block for what comes next: personalized proactive agents that can act on our behalf. I will write more on proactive agents in a future post.
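The verification layer described above can be sketched as a gate the agent's output must pass before it is accepted. `judge_with_llm` is a hypothetical stub for an LLM-as-a-Judge call; the rule-based checks show how cheap deterministic checks and a judge combine into one report.

```python
# Minimal verification gate: deterministic checks plus a (stubbed)
# LLM-as-a-Judge. Names are illustrative, not a real library API.
def judge_with_llm(task: str, output: str) -> bool:
    # Stub: a real judge would prompt a model with a rubric and
    # parse its verdict.
    return len(output.strip()) > 0

def verify(task: str, output: str, required_terms: list[str]) -> dict:
    checks = {
        "non_empty": bool(output.strip()),
        "covers_required_terms": all(
            t.lower() in output.lower() for t in required_terms
        ),
        "judge_approves": judge_with_llm(task, output),
    }
    return {"passed": all(checks.values()), "checks": checks}

report = verify(
    task="Summarize the benchmark results",
    output="EVMbench measures detection, exploitation, and patching.",
    required_terms=["EVMbench", "patching"],
)
```

Run the same `verify` over a fixed set of tasks and you have the seed of an eval pipeline: the per-check breakdown tells you what regressed, not just that something did.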

Stop writing “RIP n8n, RIP Zapier, RIP every agent startup” every time OpenAI drops a release. It’s lazy thinking disguised as insight.

I spent hours reading and watching everything from Dev Day, and nothing they announced kills any startup in the ecosystem. It actually proves the ecosystem is alive. AgentKit isn’t a “RIP” moment. It’s a platform moment.

Here’s why: AgentKit is for developers, not for non-technical teams or small businesses. It’s a framework, not a plug-and-play product. Zapier, n8n, and similar tools serve entirely different users. They abstract complexity, handle integrations, and let non-devs build useful systems fast.

OpenAI just built infrastructure. Infrastructure doesn’t kill creativity, it powers it. Every big leap like this creates new surface area for startups to build on top of, not less.

Every “RIP” take assumes the market is static. But it never is. The more OpenAI builds, the more space opens up for companies that know how to turn technical potential into human utility. If you’re building right now, stop panicking. This isn’t the end of the ecosystem.

New on the Anthropic Engineering Blog: Most developers have heard of prompt engineering. But to get the most out of AI agents, you need context engineering. We explain how it works: anthropic.com/engineering/ef…
