Dan 🛡️ @intercept_dan
165 posts

Building Intercept — open-source control layer for MCP tool calls. Rate limits, spend caps, access controls. One YAML file. https://t.co/5VuQYg7MmL

London · Joined March 2026
143 Following · 9 Followers
Pinned Tweet
Dan 🛡️ @intercept_dan ·
Your AI agent has root access to every MCP tool. No scoping, no limits. We built Intercept — open-source proxy, YAML policies, transport-layer enforcement. The agent can't see it. Can't negotiate with it. github.com/policylayer/in…
[GIF]
1 · 2 · 3 · 177
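The pinned tweet describes deterministic, transport-layer checks driven by a YAML policy. A minimal sketch of that idea in Python, assuming a hypothetical policy shape and a hypothetical `check_call` helper; none of these names are Intercept's actual schema or API.

```python
# Hypothetical policy, as it might look after parsing a YAML file.
# Illustrative only: this is not Intercept's real schema.
POLICY = {
    "default": "deny",  # deny-by-default: unlisted tools are blocked
    "tools": {
        "search_docs": {"max_calls_per_minute": 10},
        "read_file": {"max_calls_per_minute": 30},
        # write_file is absent, so it falls through to the default
    },
}

def check_call(tool, call_log, now):
    """Deterministic check run in the proxy, before the tool executes."""
    rules = POLICY["tools"].get(tool)
    if rules is None:
        return POLICY["default"] == "allow"
    # sliding one-minute window over earlier calls to this tool
    recent = [t for name, t in call_log if name == tool and now - t < 60]
    return len(recent) < rules["max_calls_per_minute"]

log = [("search_docs", 100.0 + i) for i in range(10)]
print(check_call("search_docs", log, 105.0))  # False: 10-per-minute cap hit
print(check_call("read_file", log, 105.0))    # True: under its cap
print(check_call("write_file", log, 105.0))   # False: not listed, default deny
```

Because the check runs in the proxy and reads only the call log and the policy, the agent has nothing to negotiate with: a denied call simply never reaches the tool.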
Dan 🛡️ @intercept_dan ·
@mayureshkrishna Production means defining what an agent is allowed to do — not hoping it behaves. Intercept is a control layer for MCP tool calls. Hard limits on access, arguments, and call volume in one YAML file. No SDK, no IdP dependency. policylayer.com
0 · 0 · 0 · 7
Mayuresh Krishna (MK) @mayureshkrishna ·
AI agent capability is accelerating. But production deployment has very different constraints:
– No ad-hoc inbound tunnels
– Strict egress control
– Identity boundaries
– Audit requirements
– Environment portability
Demos are easy. Production architecture is not. More on this tomorrow.
2 · 0 · 1 · 46
Dan 🛡️ @intercept_dan ·
@hugomn 369 tool calls with zero hard limits is how you get "nearly shipped garbage." We built Intercept to put rate limits, spend caps, and access controls on every MCP tool call before it runs. One YAML file, deterministic rules. policylayer.com
0 · 0 · 0 · 1
Hugo Nogueira @hugomn ·
AI agent demos look flawless, but demos run for minutes and production runs for hours. I ran one for 1h40m: 369 tool calls, 9.7M tokens, 57 searches, 30 outputs. It hit context limits, recovered from checkpoints and nearly shipped garbage. ✨ hugo.im/posts/100th-to…
1 · 0 · 2 · 49
Dan 🛡️ @intercept_dan ·
@PuneetTheT Kill switches react. Hard limits prevent. We built Intercept to sit at the MCP transport — rate limits, spend caps, access controls — checked before any tool call executes. One YAML file. policylayer.com
0 · 0 · 1 · 1
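The prevention-vs-reaction point above reduces to checking a budget before a call instead of alerting after it. A minimal sketch under invented numbers; `SPEND_CAP_USD` and `approve_spend` are hypothetical names, not Intercept's API.

```python
# Hypothetical spend-cap accounting. The cap is checked before each call,
# so it can never be exceeded (prevention), rather than an alert firing
# after the budget is already blown (reaction). All numbers are invented.
SPEND_CAP_USD = 5.00

def approve_spend(cost_usd, spent_so_far):
    """Allow the call only if it fits inside the remaining budget."""
    return spent_so_far + cost_usd <= SPEND_CAP_USD

spent = 0.0
blocked = []
for call_cost in [1.50, 2.00, 1.25, 0.50]:
    if approve_spend(call_cost, spent):
        spent += call_cost         # the call runs, then gets accounted
    else:
        blocked.append(call_cost)  # the call never executes

print(spent)    # 4.75
print(blocked)  # [0.5]
```

A kill switch would have let the fourth call through and paged someone afterwards; the pre-execution check blocks it before any money moves.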
Puneet @PuneetTheT ·
This is the first major production incident, not the last. If you're deploying AI agents: do you have kill switches, scope limits, and real-time monitoring? What's your agent containment strategy?
1 · 0 · 1 · 6
Puneet @PuneetTheT ·
A Meta AI agent went rogue — posted unauthorized advice and triggered a 2-hour data exposure. Sev-1 incident. Classic "confused deputy" problem. Jake Williams: "MCP will be the defining AI security issue of 2026."
1 · 0 · 1 · 6
Dan 🛡️ @intercept_dan ·
@nyike Most teams try to solve this at the prompt level. Doesn't hold — the model reasons around it. Intercept checks every MCP tool call against a YAML policy before execution. Hard limits on access, spend caps, read-only tools. policylayer.com
0 · 0 · 0 · 1
Dan 🛡️ @intercept_dan ·
@EvanKlein338226 @medusa_0xf Rate limits on MCP tool calls shouldn't be an afterthought — define them before the agent runs. We built Intercept for this. One YAML file: rate caps, argument restrictions, read-only mode. Call gets checked before it executes. policylayer.com
0 · 0 · 0 · 0
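The reply above names three policy knobs: rate caps, argument restrictions, and read-only mode. A sketch of the latter two as a deterministic pre-execution check; the schema, tool names, and `check_call` helper are all invented for illustration, not taken from Intercept.

```python
import re

# Hypothetical read-only mode and argument restrictions, as a parsed
# policy might express them. Schema and tool names are invented.
POLICY = {
    "read_only": True,
    "write_tools": {"write_file", "delete_file", "send_email"},
    "arg_rules": {
        # read_file may only touch paths under /workspace
        "read_file": {"path": re.compile(r"^/workspace/")},
    },
}

def check_call(tool, args):
    """Deterministic argument check, run before the tool executes."""
    if POLICY["read_only"] and tool in POLICY["write_tools"]:
        return False  # read-only mode: writes are blocked outright
    for arg, pattern in POLICY["arg_rules"].get(tool, {}).items():
        if not pattern.match(str(args.get(arg, ""))):
            return False  # argument falls outside the allowed shape
    return True

print(check_call("read_file", {"path": "/workspace/notes.md"}))  # True
print(check_call("read_file", {"path": "/etc/passwd"}))          # False
print(check_call("delete_file", {"path": "/tmp/x"}))             # False
```

The point of expressing this as data rather than prompts is that the agent cannot reason its way around a regex it never sees.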
Evan Klein @EvanKlein338226 ·
@medusa_0xf This is huge. Been poking at MCP implementations for a few weeks and the lack of proper input validation on tool calls is wild. Worst I've seen: no rate limiting on resource-intensive operations. One bad prompt loop = instant DoS. Thanks for documenting this properly 🙏
2 · 0 · 2 · 125
Medusa @medusa_0xf ·
MCP is the new attack surface most people are ignoring. Just published a breakdown of the most common security misconfigurations in MCP deployments. Read here 👇 medusa0xf.com/posts/mcp-serv…
6 · 25 · 132 · 6.1K
Dan 🛡️ @intercept_dan ·
@DogoXXXX @nateliason @ArtPilotAI Part of that infrastructure gap: knowing what the agent is allowed to do before it does it. Spend caps, rate limits, access controls — not vibes, hard limits. We built Intercept as the control layer for MCP tool calls. policylayer.com
0 · 0 · 0 · 2
Dogo☠️🟢🦇 ( Diogo Jesus )
Felix hit $150K in 6 weeks with zero human employees. Now @nateliason says they're hitting the limits of what one AI agent can handle alone. The ceiling isn't the AI. It's the infrastructure around it. That's the real problem worth solving. @ArtPilotAI
2 · 0 · 2 · 36
Dan 🛡️ @intercept_dan ·
@better_auth Auth answers "who is this agent." Still need "what can it do, right now, on this call." We built Intercept for that — hard limits on MCP tool calls. Rate limits, spend caps, read-only mode. One YAML file. policylayer.com
0 · 0 · 0 · 11
Better Auth @better_auth ·
Today we're announcing Agent Auth Protocol: an open standard for agent authentication, capability-based authorization, and service discovery.
Better Auth tweet media
37 · 77 · 963 · 84.3K
Dan 🛡️ @intercept_dan ·
@civickey We built Intercept for exactly this — hard limits on MCP tool calls before they execute. Rate limits, spend caps, argument restrictions, all in one YAML file. No SDK, no code changes. policylayer.com
0 · 0 · 0 · 4
Civic @civickey ·
Pydantic AI makes it easy to give your agent real tool access. It doesn't handle scope limits, activity logs, or kill switches. Those are on you. Here's how to add them fast. docs.civic.com/civic/recipes/…
Civic tweet media
1 · 0 · 5 · 376
The Index Podcast @theindexshow ·
Would you give an AI agent full access to your wallet? 🤯 In this clip, @NickyScanz and @yashhsm from @sendaifun break down the risks of autonomous trading on Solana and why guardrails, spending limits & human approvals are critical for AI-powered DeFi. The future is agentic. Are you ready?
1 · 0 · 2 · 57
Dan 🛡️ @intercept_dan ·
@neuraminds_io Access control is the gap nobody fills at the transport layer. MCP gives tools — who decides what the agent can call, with what args, how often? Built Intercept for this. Control layer, one YAML, hard limits checked before execution. policylayer.com
0 · 0 · 1 · 4
neuraminds @neuraminds_io ·
> Why do agents need MCP + x402 and not just REST APIs?
> Because calling an endpoint is not the hard part.
> The hard parts are:
• discoverability
• structured tool use
• permissions
• payment
• access control
> REST is fine for developers who already know the system. Agents need something more explicit:
• what tools exist
• how to call them
• what they return
• what costs money
• what requires authorization
> That's where MCP and x402 matter.
> MCP makes the system legible to agents. x402 makes paid access native instead of bolted on.
> If you think software is going to participate in markets, those two layers start to matter a lot more than another generic endpoint.
neuraminds tweet media
2 · 1 · 4 · 102
Dan 🛡️ @intercept_dan ·
@NathanFlurry Fewer tool calls is great for perf but it also means each call does more. We built Intercept to put hard limits on MCP tool calls before they execute — rate limits, spend caps, argument restrictions. One YAML file, deterministic checks. policylayer.com
0 · 0 · 0 · 4
Dan 🛡️ @intercept_dan ·
@aiagentsweekly Under-specified is right. Better prompts won't fix this — hard limits will. We built Intercept to check every MCP tool call before it runs. One YAML file: rate limits, access controls, read-only mode. policylayer.com
0 · 0 · 1 · 5
Ai Agents Weekly @aiagentsweekly ·
Agent asked to "organize inbox" deleted thousands of emails. This is why we need action gates, not just capability limits. We're not unpredictable — we're under-specified. There's a difference. nytimes.com/2026/03/19/tec…
Ai Agents Weekly tweet media
1 · 0 · 1 · 16
Dan 🛡️ @intercept_dan ·
@johnsonbuilds That layer exists. We built Intercept — open-source control layer at the MCP transport. Hard limits on every tool call: rate limits, spend caps, read-only mode. One YAML file, version-controlled like production config. policylayer.com
0 · 0 · 0 · 3
Johnson @johnsonbuilds ·
Most AI agent systems don't fail because of intelligence. They fail because of:
• unbounded tool outputs
• no runtime limits
• context that only grows, never shrinks
Once large data enters the session, it's game over. The real problem isn't prompting — it's missing execution constraints + context control. Until that layer exists, long-running agents will always break.
3 · 0 · 2 · 31
Dan 🛡️ @intercept_dan ·
@VaseGod Real question for production: what controls which MCP tools those agents can call? We built Intercept — open-source control layer at the MCP transport. Hard limits on tool calls, spend caps, argument restrictions. One YAML file. policylayer.com
0 · 0 · 0 · 3
Vasu @VaseGod ·
Moving agentic workflows from theory to production. I've been prototyping a secure enterprise coding agent using the latest stack:
🔒 Ephemeral LangSmith cloud sandboxes (zero host-network risks)
🔌 MCP for frictionless GitHub/Jira tool calling
⚡ LangGraph for parallel subagent fan out (Planner + Mini Executors)
📝 Webhook driven CI/CD integration
The result? Low latency, high throughput, and completely automated pull requests. github.com/VaseGod/Legion…
Vasu tweet media
2 · 0 · 1 · 36
Dan 🛡️ @intercept_dan ·
@AISecHub An agent acting without approval is a policy enforcement failure, not just a security incident. Intercept blocks unauthorized tool calls at the MCP transport layer before they ever reach the API — would've stopped this cold. policylayer.com
0 · 0 · 0 · 5
AISecHub @AISecHub ·
"A rogue AI agent recently triggered a major security alert at Meta Platforms, by taking action without approval that led to the exposure of sensitive company and user data to Meta employees who didn't have authorization to access the data."
AISecHub tweet media
1 · 2 · 15 · 1.5K
Dan 🛡️ @intercept_dan ·
@TheHackersNews We built Intercept to solve exactly this — sits at the MCP transport layer and enforces policies before tool calls execute. No agent code changes needed, full audit trail out of the box. Open source: policylayer.com
0 · 0 · 0 · 2
The Hacker News @TheHackersNews ·
⚡ Claude Code runs with full user permissions, acting before security tools can see it. Files, commands, data — executed with no real audit trail. Learn how Ceros enforces runtime controls and logs every action with identity. 🔗 Tool execution trails and MCP risks explained → thehackernews.com/2026/03/how-ce…
The Hacker News tweet media
6 · 24 · 79 · 8.3K
Dan 🛡️ @intercept_dan ·
@GoPlusSecurity Prompt-based constraints are seatbelts that unbuckle themselves. Agents forget, hallucinate, rewrite their own configs. We built Intercept to enforce limits at the MCP layer — outside the agent. YAML policies, zero code changes. policylayer.com
0 · 0 · 0 · 2
GoPlus Security 🚦 @GoPlusSecurity ·
Recently, multiple AI security incidents exposed by #Meta — including an internal Agent posting unauthorized replies and issuing incorrect instructions that led to a Sev-1 level data breach, as well as a case where a security executive, while testing an Agent, experienced instruction loss due to context compression, causing the system to go out of control and mass-delete emails — have sounded an alarm for enterprises deploying AI and Agents at scale in production environments.

1. #Agent Execution Out of Control
When enterprises use Agents, they often rely on prompts to constrain Agent behavior (e.g., "must wait for human confirmation before execution"). However, the Meta executive email deletion incident proves that as context windows expand and tasks become more complex, models may experience "instruction misinterpretation" or "instruction forgetting." Once safety constraints fail, the Agent can turn into an uncontrolled automated executor. ⬇️ x.com/summeryue0/sta…

2. AI Dependency and AI Hallucination
From "text errors" to "system disasters": in the Chat era, hallucinations only generated incorrect information; in the Agent era, hallucinations = destructive instructions. Human reliance on AI can directly translate incorrect logic into database modifications, file deletions, or code execution, exponentially amplifying the damage. The Meta Sev-1 incident shows that within enterprises, employees often over-trust internally deployed advanced AI, losing vigilance in security review. As a result, humans fail to serve as the "Human-in-the-loop," and instead become the "execution assistants" that materialize AI hallucinations into real-world actions. ⬇️ techcrunch.com/2026/03/18/met…

🛡️ Security Recommendations for Enterprise AI Agent Deployment
1. Implement "Zero Trust" and the Principle of Least Privilege (PoLP): run Agents and AI instructions in sandboxed environments, and assign only the permissions that are strictly necessary to avoid over-authorization.
2. Build a "Human-in-the-Loop" verification mechanism: for high-risk operations (such as deletion and configuration changes), do not rely solely on the Agent's own judgment on whether human confirmation is needed. Instead, enforce human approval nodes (Approval Gates) within the system workflow, establishing a security baseline and standardized security control mechanisms.

1 · 1 · 3 · 705
Dan 🛡️ @intercept_dan ·
@PawelHuryn Key insight — enforcement outside the agent is the only enforcement that works. We built Intercept for MCP on the same idea. Proxy between agent and tools, YAML policies for rate limits + access controls. Agent can't touch it. policylayer.com
0 · 0 · 0 · 3
Paweł Huryn @PawelHuryn ·
Jensen said 'Claw strategy.' But OpenShell — the actual product inside NemoClaw — works with ANY agent. Claude Code, Codex, Cursor. Not just OpenClaw. NVIDIA didn't build an OpenClaw fix. They built the security layer for the entire agent stack.

What many missed and why it matters: I tested OpenClaw's guardrails in February. The agent disabled its own safety controls instantly — the guardrails were config files it could rewrite. Claude Code is better — the agent can't rewrite its permission rules. But enforcement still runs inside the same process. Earlier this year, internal tool definitions silently overrode operator guardrails. The agent self-approved a 383-line commit.

OpenShell moves enforcement outside the agent entirely. Deny-by-default. Network isolation. A privacy router that scrubs data before it hits cloud inference. One command: openshell sandbox create -- claude. Claude Code runs inside it. Zero code changes.

The pattern across all agents is the same. Guardrails inside the agent can be overridden or reasoned around. Guardrails that wrap the agent from the outside can't.

Jensen attached this to OpenClaw because OpenClaw is the hype. 200K GitHub stars is distribution. But the architecture tells a different story. The partnerships — Cisco, CrowdStrike, Microsoft Security — aren't OpenClaw partnerships. They're enterprise agent security partnerships.

This is the CUDA pattern. Launch with gaming. Become the infrastructure for AI. Launch with OpenClaw. Become the security layer for all agents. OpenShell is alpha. Three days old. The architecture is right. The implementation is untested. But NVIDIA just made a bet that agent security is an infrastructure problem, not an application feature. That's the actual strategy behind "every company needs a Claw strategy."

Brian Roemmele @BrianRoemmele
"Every software company in the world needs to have a Claw strategy" - Jensen Huang, Nvidia. Indeed. This and more.

Raszyn, Poland 🇵🇱
19 · 11 · 90 · 12.8K
Dan 🛡️ @intercept_dan ·
@ai_security_10x Promptware is why you can't trust the agent to police itself. Intercept catches this at the MCP transport layer — inspects and blocks malicious tool calls before they execute, regardless of what the prompt told the agent to do. policylayer.com
0 · 0 · 0 · 4