Unpopular opinion: your biggest AI security risk isn't a hacker. It's the AI agent you deployed last Tuesday with access to your entire customer database.
Google's A2A protocol lets agents delegate tasks to other agents autonomously.
Your governance layer has to follow the chain. Governing only the entry point leaves everything downstream ungoverned.
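A minimal sketch of what "following the chain" can mean: a governance context that travels with every delegated call and enforces the original principal's scopes at each hop. Everything here (GovernanceContext, delegate, the scope names) is hypothetical, not part of the A2A spec:

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceContext:
    """Identity, scopes, and an audit trail that travel with every delegated call."""
    principal: str                 # the human or service that initiated the request
    scopes: frozenset              # permissions granted at the entry point
    trail: list = field(default_factory=list)

def delegate(ctx: GovernanceContext, agent: str, task: str, required_scope: str) -> GovernanceContext:
    # Enforce at every hop, not just the entry point: downstream agents
    # inherit the original principal's scopes and can never widen them.
    if required_scope not in ctx.scopes:
        raise PermissionError(f"{agent} denied: {required_scope} not granted to {ctx.principal}")
    ctx.trail.append(f"{ctx.principal} -> {agent}: {task}")
    return ctx  # the same context flows to the next hop

ctx = GovernanceContext(principal="alice", scopes=frozenset({"crm:read"}))
ctx = delegate(ctx, "research-agent", "summarize account history", "crm:read")
delegate(ctx, "billing-agent", "issue refund", "billing:write")  # raises PermissionError
```

The property that matters: scopes flow down the chain, they never widen, and the trail records every hop.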
The "stay secure at speed" framing is right. The failure mode isn't teams moving too fast. It's that security tooling hasn't caught up to the new attack surface. When dev velocity is AI-driven, security posture needs to keep pace. Static checklists don't hold.
This compounds. The longer you operated in the old direction, the more confidently wrong LLMs will be about the new one. Companies that pivoted are essentially invisible in AI-generated research for years.
the underrated part of AI in finance: not the models, the governance layer. who has access to query what, and whether you can reconstruct the audit trail after the fact. capability is outpacing compliance infrastructure by years
Honestly the bigger gap is distribution 101. Anyone can build now. Finding an audience who cares is the hard part, and it only gets harder as everyone builds faster.
The enforcement problem is the hard part. A human employee instinctively knows not to touch certain systems. Agents don't - they'll use any tool in reach. The Teamwork Graph is only useful if something can actually constrain what acts on it.
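One way to give the runtime the instinct the agent lacks: a deny-by-default tool gate. A toy sketch with hypothetical agent and tool names, not anything specific to the Teamwork Graph:

```python
# Deny-by-default: an agent can only invoke tools explicitly granted to it,
# no matter what its plan or prompt says.
ALLOWED_TOOLS = {
    "support-agent": {"search_docs", "create_ticket"},
    "billing-agent": {"lookup_invoice"},
}

def invoke_tool(agent: str, tool: str, run_tool, **kwargs):
    if tool not in ALLOWED_TOOLS.get(agent, set()):
        # The agent won't "know better", so the runtime has to.
        raise PermissionError(f"{agent} is not allowed to call {tool}")
    return run_tool(**kwargs)

# The gate sits between the agent and every tool it can name:
invoke_tool("support-agent", "create_ticket",
            run_tool=lambda **kw: print("ticket created:", kw), subject="refund request")
invoke_tool("support-agent", "lookup_invoice", run_tool=print)  # raises PermissionError
```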
The permissioning point is the most underrated one. Enterprises aren't sweating cool interfaces - they're sweating agents with no audit trail, no clear owner, no RBAC. That's the gap we're working on at Verra. Governance ends up being the moat nobody expected.
Treating AI instructions as infrastructure is the right mental model. The corollary: they need the same governance overhead. Peer review, versioning, rollback. Most teams skip all of that when adopting coding assistants, then wonder why output quality degrades over time.
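A rough sketch of that discipline, with invented names throughout: versions are immutable, an explicit pointer selects what's live, and rollback is a repoint rather than a hotfix:

```python
# Prompts as deployable artifacts: immutable versions, an explicit
# "current" pointer, and rollback as a repoint rather than a hotfix.
PROMPT_VERSIONS = {
    "code-review/v3": "You are a strict reviewer. Flag functions over 50 lines.",
    "code-review/v4": "You are a strict reviewer. Also check error handling.",
}
CURRENT = {"code-review": "code-review/v4"}

def load_prompt(name: str) -> str:
    return PROMPT_VERSIONS[CURRENT[name]]

def rollback(name: str, version: str) -> None:
    # Only reviewed, previously shipped versions are valid rollback targets.
    assert version in PROMPT_VERSIONS, "unknown version"
    CURRENT[name] = version

rollback("code-review", "code-review/v3")   # output quality regressed? repoint.
print(load_prompt("code-review"))
```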
AI security is running this exact playbook right now. the guidance is there: OWASP LLM Top 10, NIST AI RMF. enforcement at the inference layer? almost nobody. biggest gap between paper and practice I've seen in years
Grok getting prompt-injected into authorizing a $175K transfer via hidden Morse code is a perfect case study. The model didn't fail; the infrastructure that treated LLM output as financial authorization did. Output validation at the gateway layer is non-negotiable.
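A sketch of what that gateway check could look like, assuming the model emits a structured action proposal. The JSON shape and TRANSFER_LIMIT are invented for illustration, not how any real deployment works:

```python
import json

TRANSFER_LIMIT = 1_000  # anything above this requires a human, full stop

def validate_proposal(raw: str) -> dict:
    """Treat LLM output as an untrusted proposal, never as authorization."""
    try:
        proposal = json.loads(raw)              # reject anything unstructured
    except json.JSONDecodeError:
        raise ValueError("model output is not a structured action proposal")
    if proposal.get("action") != "transfer":
        raise ValueError(f"unexpected action: {proposal.get('action')!r}")
    amount = float(proposal.get("amount", 0))
    if amount <= 0 or amount > TRANSFER_LIMIT:
        # A $175K transfer dies here no matter how the model was tricked.
        raise PermissionError(f"transfer of {amount} exceeds gateway policy")
    return proposal  # only now does it reach the payment system

validate_proposal('{"action": "transfer", "amount": 175000}')  # raises PermissionError
```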
the underlying thing is usually structure. people who've engineered their work well don't accumulate assistant-worthy tasks. the mess that needs an assistant is often a symptom, not the root problem
85% of prompt injection attacks target agentic AI systems, not chatbots. Once an agent is compromised, it can exfiltrate data, misuse tools, and chain actions without human review. Governance at the agent layer isn't optional anymore.
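One concrete shape for that governance: break the action chain with a human checkpoint on high-risk steps. A toy sketch; the step names and the approve hook are hypothetical:

```python
HIGH_RISK = {"export_data", "send_email", "execute_payment"}

def run_plan(agent: str, steps: list, execute, approve):
    """Break the chain: any high-risk step pauses for human sign-off."""
    for step in steps:
        if step in HIGH_RISK and not approve(agent, step):
            raise PermissionError(f"{step} blocked pending human review")
        execute(agent, step)

# A compromised agent can still *plan* exfiltration; it just can't finish it.
run_plan("research-agent",
         steps=["search_docs", "export_data"],
         execute=lambda agent, step: print(agent, "ran", step),
         approve=lambda agent, step: False)  # raises at export_data
```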
SPDD is interesting but I'd push back on treating prompts as the primary abstraction. At the team level, what you really want is constraints that enforce standards regardless of what any individual engineer writes. The prompt shouldn't be the last line of defense.
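As a sketch of constraints-over-prompts: a CI check that fails the build on a banned pattern no matter what prompt (or engineer) produced the code. The specific rule here, banning bare except clauses, is just an illustrative stand-in for a team standard:

```python
import ast
import sys

def bare_except_lines(source: str) -> list:
    """Find bare `except:` handlers, a stand-in for any enforced team standard."""
    return [node.lineno for node in ast.walk(ast.parse(source))
            if isinstance(node, ast.ExceptHandler) and node.type is None]

source = open(sys.argv[1]).read() if len(sys.argv) > 1 else (
    "try:\n    risky()\nexcept:\n    pass\n")
violations = bare_except_lines(source)
if violations:
    # Fails the build whether the code came from a prompt, a human, or both.
    sys.exit(f"bare except on lines {violations}")
```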
CSA published new research this month: 53% of organizations have had AI agents exceed their intended permissions.
That's not 53% worried it might happen. That's 53% who have already experienced it.
The scope violation problem is live in production.
the reasoning model support is the interesting bit. as models do more implicit chain-of-thought, standard logging starts missing the actual decision path. observing intent vs output is a meaningfully different problem than it was a year ago
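a sketch of logging intent alongside output, assuming the runtime exposes some per-step reasoning summary (all field names here are made up):

```python
import json
import time

def log_decision(agent: str, intent: str, tool: str, args: dict, output) -> None:
    """Record why the agent acted, not just what it did."""
    print(json.dumps({
        "ts": time.time(),
        "agent": agent,
        "intent": intent,              # the model's stated reasoning for this step
        "tool": tool,
        "args": args,
        "output": str(output)[:200],   # truncate; raw outputs can be huge
    }))

# Without the intent field an auditor sees *that* the agent queried billing,
# but not the (possibly injected) reasoning that led it there.
log_decision("support-agent",
             intent="user asked for refund status, need the invoice record",
             tool="lookup_invoice",
             args={"invoice_id": "INV-42"},
             output={"status": "paid"})
```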
You deployed AI agents. Great. Do you know what they're doing right now? Who they're calling? What data they're touching? Most teams don't. Verra fixes that. helloverra.com
Anthropic's pricing communication has always been a mess. The real issue is they're treating each product as its own pricing universe without making it clear how the products relate. Users shouldn't have to read three pages to know what they're paying for.
both solid lessons, but point 1 scales badly once you're running 10+ agents across teams. knowing which credentials each can reach - and auditing it post-incident - becomes its own infrastructure problem. that gap is what we built Verra around
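the core post-incident query is a reverse lookup from credential to agents. a toy sketch with an invented inventory; in practice this would be built from your secrets manager, not a hardcoded dict:

```python
# Invert the credential map: given a leaked secret, which agents could
# have used it, and therefore which systems need rotating?
AGENT_CREDS = {
    "deploy-agent":  {"github-token", "aws-prod-key"},
    "support-agent": {"zendesk-key"},
    "etl-agent":     {"aws-prod-key", "snowflake-key"},
}

def blast_radius(credential: str) -> list:
    return sorted(a for a, creds in AGENT_CREDS.items() if credential in creds)

print(blast_radius("aws-prod-key"))  # ['deploy-agent', 'etl-agent']
```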