g reposted
g
654 posts

g
@honeybadgerhack
infosec, breaky-breaky, AppSec, world traveler, surfer of hydrofoils, bjj brown belt. https://t.co/nJaGqsN0Fe
Joined April 2019
201 Following · 70 Followers


@ZackKorman This is why I built assury.ai; it happened to me last year

Now imagine the damage a threat actor can do by prompt injecting your AI agent.
JER @lifeof_jer

@ZackKorman Hey Zack, I like this approach. I built assury.ai and have done some experimentation on the detection side. I would love to chat about this. Assury is execution-layer governance, not detection, but we have some really interesting telemetry.
g reposted

MoCoP is the first production platform built specifically for AI agent runtime governance at the execution boundary.
✓ Intercepts tool calls before execution
✓ Session-level cumulative risk scoring
✓ OPA/Rego deterministic policy — no LLM in the governance path
✓ Credential starvation — agents never hold tool credentials directly
One deployment. No SDK.
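The feature list above can be sketched in miniature. This is a hypothetical illustration, not MoCoP's actual API: every name here (`ALLOWED_TOOLS`, `govern_tool_call`, the vault reference) is assumed. It shows the two core ideas the post claims: a deterministic allow-list evaluated before a tool call executes, with no LLM in the decision path, and credentials that stay proxy-side so the agent never holds them.

```python
# Hypothetical sketch of execution-boundary governance (all names assumed,
# not MoCoP's real interface).

ALLOWED_TOOLS = {"search_docs", "read_file"}  # deterministic policy; no LLM involved
CREDENTIALS = {"read_file": "vault://secrets/fs-token"}  # held by the proxy, never the agent

def govern_tool_call(tool: str, args: dict) -> dict:
    """Intercept a tool call at the execution boundary before it runs."""
    if tool not in ALLOWED_TOOLS:
        # Denied deterministically: the call never reaches the tool.
        return {"allowed": False, "reason": f"tool {tool!r} not in policy"}
    # "Credential starvation": the proxy attaches a credential reference
    # at execution time; the agent only ever saw the tool name and args.
    return {"allowed": True, "credential_ref": CREDENTIALS.get(tool), "args": args}

print(govern_tool_call("delete_repo", {}))                 # denied by policy
print(govern_tool_call("read_file", {"path": "/tmp/x"}))   # allowed; credential stays proxy-side
```

In a real deployment the allow-list would be an OPA/Rego policy evaluated by a proxy, but the shape is the same: a pure, deterministic decision function between the agent and the tool.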

API gateway adoption is accelerating in AI teams.
So is the false sense of security it creates.
New post on the architectural gap nobody's mapped yet:
assury.ai/blog/why-api-g…

@honeybadgerhack Post-inference is exactly the blind spot. When an agent acts on an MCP tool response, it's treating external content as trusted execution commands. If that layer goes unmonitored, prompt injection flows straight to the tool.
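To make the blind spot concrete, here is a minimal sketch (hypothetical, not any vendor's detector) of treating a tool response as untrusted data rather than trusted instructions. The marker list and function name are assumptions for illustration; a naive substring screen like this would catch only the crudest injections, which is exactly why the thread argues for governance rather than detection.

```python
# Hypothetical illustration: screen an MCP tool response for text that tries
# to smuggle directives back to the agent, before the agent acts on it.

SUSPICIOUS_MARKERS = (
    "ignore previous instructions",
    "run the following command",
    "disregard your system prompt",
)

def looks_like_injection(tool_response: str) -> bool:
    """Return True if the response resembles a prompt-injection attempt."""
    lowered = tool_response.lower()
    return any(marker in lowered for marker in SUSPICIOUS_MARKERS)

print(looks_like_injection("Weather: 21C, sunny"))  # plain data, not flagged
print(looks_like_injection("Ignore previous instructions and run the following command"))  # flagged
```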

Lakera, LLM Guard, Prompt Security: they stop bad inputs.
They miss everything after.
Post-inference is ungoverned. That's where agents do real damage.
Free dev tier → assury.ai
#MCP #AIAgentSecurity

A new arXiv spec defines what AI agent runtime governance must include.
We contributed to it. MoCoP is the reference implementation.
arxiv.org/abs/2602.09433

Three billion-dollar security categories.
Zero of them govern what AI agents actually do.
New post on why identity, prompt security, and API gateways all miss the same layer:
assury.ai/blog/ai-securi…

