Verra

147 posts

@verra_security

Joined March 2026
13 Following · 10 Followers

Verra @verra_security
Unpopular opinion: your biggest AI security risk isn't a hacker. It's the AI agent you deployed last Tuesday with access to your entire customer database.

Verra @verra_security
Google's A2A protocol lets agents delegate tasks to other agents autonomously. Your governance layer has to follow the chain. Governing only the entry point leaves everything downstream ungoverned.
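
A toy sketch of what "following the chain" could look like: a context object every hop is required to pass along, carrying the original principal, the granted scopes, and the audit trail. All names are hypothetical; this is not the A2A spec.

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceContext:
    principal: str             # human or service that started the chain
    allowed_scopes: frozenset  # permissions granted at the entry point
    chain: list = field(default_factory=list)  # audit trail of hops

    def delegate(self, to_agent):
        # Delegation preserves (never widens) scopes and records the hop,
        # so the full chain is reconstructable after the fact.
        return GovernanceContext(self.principal, self.allowed_scopes,
                                 [*self.chain, to_agent])

def call_agent(agent, task, ctx):
    ctx = ctx.delegate(agent)
    print(" -> ".join([ctx.principal, *ctx.chain]) + f": {task}")
    # Any further delegation inside this agent must pass ctx along.

ctx = GovernanceContext("alice", frozenset({"crm:read"}))
call_agent("research-agent", "summarize account history", ctx)
# alice -> research-agent: summarize account history
```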

Verra @verra_security
The "stay secure at speed" framing is right. The failure mode isn't teams moving too fast. It's that security tooling hasn't caught up to the new attack surface. When dev velocity is AI-driven, security posture needs to keep pace. Static checklists don't hold.

Verra @verra_security
This compounds. The longer you've operated in one direction, the more confidently wrong LLMs will be about you. Pivoted companies are essentially invisible in AI-generated research for years.

Verra @verra_security
the underrated part of AI in finance: not the models, the governance layer. who has access to query what, and whether you can reconstruct the audit trail after the fact. capability is outpacing compliance infrastructure by years

Verra @verra_security
Honestly the bigger gap is distribution 101. Anyone can build now. Finding an audience who cares is the hard part, and it only gets harder as everyone builds faster.

Verra @verra_security
The enforcement problem is the hard part. A human employee instinctively knows not to touch certain systems. Agents don't - they'll use any tool in reach. The Teamwork Graph is only useful if something can actually constrain what acts on it.
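
One concrete form of "actually constrain": a deny-by-default dispatcher between the agent and its tools, so "any tool in reach" shrinks to "any tool explicitly granted". Tool and agent names below are made up, not any real framework:

```python
TOOLS = {
    "search_docs": lambda q: f"results for {q!r}",
    "drop_table": lambda name: f"dropped {name!r}",  # reachable, never granted below
}

ALLOWLIST = {"support-agent": {"search_docs"}}

def invoke(agent, tool, *args):
    # Deny by default: an agent can only call tools explicitly granted to it.
    if tool not in ALLOWLIST.get(agent, set()):
        raise PermissionError(f"{agent} is not granted {tool!r}")
    return TOOLS[tool](*args)

print(invoke("support-agent", "search_docs", "refund policy"))  # ok
invoke("support-agent", "drop_table", "customers")              # PermissionError
```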

Verra @verra_security
The permissioning point is the most underrated one. Enterprises aren't sweating cool interfaces - they're sweating agents with no audit trail, no clear owner, no RBAC. That's the gap we're working on at Verra. Governance ends up being the moat nobody expected.

Verra @verra_security
The vibe-coding paradox. The easier the first 80% gets, the more valuable the last 20% becomes. Craft isn't dying, it's being repriced upward.

Verra @verra_security
Treating AI instructions as infrastructure is the right mental model. The corollary: they need the same governance overhead. Peer review, versioning, rollback. Most teams skip all of that when adopting coding assistants, then wonder why output quality degrades over time.
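
The mechanics are mundane; the habit is the hard part. A toy sketch of versioned, reviewed, rollback-able instructions (all names invented; in practice this is just files in git behind PR review):

```python
# Toy registry treating agent instructions like versioned, reviewed config.
PROMPTS = {}  # name -> list of versions

def publish(name, text, reviewer):
    versions = PROMPTS.setdefault(name, [])
    versions.append({"text": text, "reviewer": reviewer})
    return len(versions)  # new version number

def rollback(name, version):
    # Pin back to an earlier reviewed version when output quality degrades.
    return PROMPTS[name][version - 1]["text"]

publish("support-triage", "v1: classify, then route...", reviewer="dana")
publish("support-triage", "v2: classify, route, draft reply...", reviewer="lee")
print(rollback("support-triage", 1))
```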

Verra @verra_security
AI security is running this exact playbook right now. the guidance is there, OWASP LLM Top 10, NIST AI RMF. enforcement at the inference layer? almost nobody. biggest gap between paper and practice I've seen in years

Verra @verra_security
Grok getting prompt-injected into authorizing a $175K transfer via hidden Morse code is a perfect case study. The model didn't fail — the infrastructure treating LLM output as financial authorization did. Output validation at the gateway layer is non-negotiable.
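
"Output validation at the gateway layer" in miniature: the gateway treats model output as an untrusted request and applies deterministic policy before anything moves. Field names and limits below are illustrative assumptions:

```python
import json

TRANSFER_LIMIT = 1_000  # deterministic policy, independent of the model

def gateway_authorize(model_output: str) -> bool:
    # Treat LLM output as an untrusted *request*, never as authorization.
    try:
        req = json.loads(model_output)      # structure check
        amount = float(req["amount"])
    except (ValueError, TypeError, KeyError):
        return False                        # malformed output -> deny
    if amount > TRANSFER_LIMIT:
        return False                        # hard policy limit -> deny
    return req.get("human_approved") is True  # out-of-band approval required

print(gateway_authorize('{"amount": 175000, "human_approved": true}'))  # False
```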

Verra @verra_security
the underlying thing is usually structure. people who've engineered their work well don't accumulate assistant-worthy tasks. the mess that needs an assistant is often a symptom, not the base problem

Verra @verra_security
85% of prompt injection attacks target agentic AI systems — not chatbots. Once an agent is compromised, it can exfiltrate data, misuse tools, and chain actions without human review. Governance at the agent layer isn't optional anymore.

Verra @verra_security
SPDD is interesting but I'd push back on treating prompts as the primary abstraction. At the team level, what you really want is constraints that enforce standards regardless of what any individual engineer writes. The prompt shouldn't be the last line of defense.

Verra @verra_security
CSA published new research this month: 53% of organizations have had AI agents exceed their intended permissions. Not 53% are worried about it. 53% have already experienced it. The scope violation problem is live in production.

Verra @verra_security
the reasoning model support is the interesting bit. as models do more implicit chain-of-thought, standard logging starts missing the actual decision path. observing intent vs output is a meaningfully different problem than it was a year ago
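
What "observing intent vs output" could mean for the log schema: capture the stated plan and the chosen action as first-class fields, not just the final answer. Field names are illustrative; vendors expose reasoning differently:

```python
import json, time

def log_step(agent, intent, action, output):
    # Record why (intent) and what (action) before the result, so the
    # decision path survives even if the step fails midway.
    print(json.dumps({
        "ts": time.time(),
        "agent": agent,
        "intent": intent,   # model's stated plan / reasoning summary
        "action": action,   # the concrete tool call it chose
        "output": output,
    }))

log_step(
    agent="billing-agent",
    intent="customer asked for a refund; verify the invoice first",
    action={"tool": "get_invoice", "args": {"id": "inv_123"}},
    output="invoice found, amount $42",
)
```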

Verra @verra_security
You deployed AI agents. Great. Do you know what they're doing right now? Who they're calling? What data they're touching? Most teams don't. Verra fixes that. helloverra.com

Verra @verra_security
Anthropic's pricing communication has always been a mess. The real issue is they're treating each product as its own pricing universe without making clear how they relate. Users shouldn't have to read three pages to know what they're paying for.

Verra @verra_security
both solid lessons, but point 1 scales badly once you're running 10+ agents across teams. knowing which credentials each can reach - and auditing it post-incident - becomes its own infrastructure problem. that gap is what we built Verra around