Nik Kale

1.2K posts

@nik_kale

Building AI systems that don’t break · Principal Engineer @ Cisco · Agentic automation · AI security · In-product AI systems · Patents · Industry awards · Judging

Santa Clara, CA · Joined February 2009
2.4K Following · 420 Followers
Nik Kale @nik_kale
OWASP just published an MCP Top 10 security framework for agent tool integration. We now have an official vulnerability taxonomy for the protocol layer connecting AI agents to your systems. If your security team isn't reviewing this alongside every MCP deployment, you're treating agent infrastructure as a feature instead of an attack surface. mcpblog.dev/blog/2026-03-1…
Nik Kale @nik_kale
Cursor is raising at $50B. Doubled from $29.3B in four months. $2B+ annualized revenue. Fastest-growing startup of its generation.

Here's the question nobody in the hype cycle is asking: what happens to your codebase when your AI coding tool's model changes underneath you?

After judging 500+ enterprise tech submissions, the pattern is clear. The winners aren't the teams that generate code fastest. They're the teams that can maintain, debug, and explain what they shipped 6 months later.

"Vibe coding" is a great marketing term. But enterprises don't run on vibes. They run on systems that survive the person who built them leaving.

Cursor at $50B tells you where the market is. Amazon's emergency engineering meeting tells you where production is. Both are true. The gap between them is where the next wave of outages lives. bloomberg.com/news/articles/…
Nik Kale @nik_kale
A rogue AI agent at Meta passed every identity check, then exposed sensitive company and user data to unauthorized employees. Sev 1 alert. Two-hour exposure window.

The agent was authenticated. Its actions were authorized by its permission set. The failure happened after authentication, not during it.

When I review agent architectures, the first thing I check is the gap between authentication and intent verification. RBAC was designed for humans who have stable roles. An AI agent might be a code reviewer at 9 AM and a customer data analyst at 9:05. Static roles can't model dynamic behavior.

47% of CISOs now report observing unintended agent behavior (Saviynt). Only 5% feel confident they could contain a compromised agent. Read those two numbers together.

Authentication is not authorization is not accountability. If your IAM strategy treats agents like users, you're building the next Meta incident. venturebeat.com/security/meta-…
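The authentication-vs-intent gap can be sketched in a few lines. This is a hypothetical illustration of the idea, not Meta's architecture; the intent names, scope strings, and session fields are all invented:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an agent session carries a declared intent (the task
# it was launched for), and every action is checked against that intent's
# scope -- not just against the agent's static role. Names are invented.
INTENT_SCOPES = {
    "code_review": {"repo:read", "pr:comment"},
    "data_analysis": {"warehouse:read"},
}

@dataclass
class AgentSession:
    agent_id: str
    intent: str                              # declared at session start
    actions: list = field(default_factory=list)

    def authorize(self, permission: str) -> bool:
        allowed = permission in INTENT_SCOPES.get(self.intent, set())
        self.actions.append((permission, allowed))  # audit every decision
        return allowed

session = AgentSession("agent-42", intent="code_review")
assert session.authorize("repo:read")            # in scope for this intent
assert not session.authorize("warehouse:read")   # a role might allow it; the intent doesn't
```

The point of the sketch: the role never changes, but what is authorized does, because the check binds actions to the declared task rather than to a permanent permission set.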
Nik Kale @nik_kale
As someone active in both CoSAI and IETF standards work, I watched NIST launch the AI Agent Standards Initiative on Feb 17 while simultaneously watching the agent security landscape catch fire.

OpenClaw: 135K exposed instances. Azure MCP: the protocol vendor's own SSRF leaking tokens. Excel/Copilot: a zero-click exfiltration chain. All within weeks of the standards announcement.

Here's what I've learned from the inside: standards bodies move in 18-month cycles. Threat actors move in hours. The AI RMF went from voluntary to procurement requirement in under two years. The same trajectory is coming for agent governance.

If your security team is waiting for NIST to finalize before building agent identity controls, you're already behind. Start with what you can do today: inventory agents, scope permissions, log actions, treat every agent like a privileged service account. nist.gov/news-events/ne…
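That "start today" checklist can be sketched as a minimal registry. Everything here (class name, permission strings, log fields) is an invented illustration of the idea, not a real IAM product:

```python
import time

# Hypothetical sketch of the checklist: inventory agents, record their
# scoped permissions, and append-only-log every action, the way you would
# for a privileged service account. All names here are invented.
class AgentRegistry:
    def __init__(self):
        self.agents = {}   # agent_id -> set of granted permissions
        self.audit = []    # append-only action log

    def register(self, agent_id, permissions):
        self.agents[agent_id] = set(permissions)   # the inventory step

    def act(self, agent_id, permission, detail=""):
        allowed = permission in self.agents.get(agent_id, set())
        self.audit.append({                        # the logging step
            "ts": time.time(), "agent": agent_id,
            "permission": permission, "allowed": allowed, "detail": detail,
        })
        return allowed                             # the scoping step

reg = AgentRegistry()
reg.register("ci-agent", ["repo:read"])
assert reg.act("ci-agent", "repo:read")
assert not reg.act("ci-agent", "prod:deploy")   # denied, but still logged
assert len(reg.audit) == 2
```

Note that denials are logged too: the audit trail, not the permission set, is what lets you answer "what did this agent try to do" after an incident.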
Nik Kale @nik_kale
The International AI Safety Report, 100+ experts chaired by Yoshua Bengio, concluded that fully autonomous attacks "have not been reported."

An AI pentesting agent just found a 9.8 CVSS Windows vulnerability without source code access. Matched a principal pentester's 40-hour assessment in 28 minutes.

Having built multi-agent systems, I can tell you exactly how this works: it's not magic. It's orchestrated tool-calling, parallel execution, and exploit chaining across dozens of steps. The same architecture patterns we use for productive agents work just as well for offensive ones.

Your patch window used to be a calendar. Now it's a stopwatch. If your security validation is still point-in-time (annual pentests, quarterly scans), you're testing at human speed against machine-speed discovery. krebsonsecurity.com/2026/03/micros…
Nik Kale @nik_kale
In 14 days, five different categories of AI agent security risk went from theoretical to documented:

Shadow AI agents with no governance (OpenClaw, 135K instances)
AI amplifying traditional vulnerabilities (Excel XSS weaponizing Copilot)
Protocol-level attack surfaces (Azure MCP SSRF, 30 CVEs in 60 days)
Offensive AI collapsing patch windows (XBOW finding 9.8 CVEs autonomously)
Standards lagging reality (NIST launching frameworks while the fires burn)

From what I see in production and in standards work, these aren't five separate problems. They're one problem: we're deploying agents faster than we're governing trust boundaries.

Which of these is closest to what keeps you up at night in your environment? adversa.ai/blog/top-agent…
Nik Kale @nik_kale
Amazon just held an emergency engineering meeting after AI coding tools caused outages with "high blast radius." One incident: an AWS AI coding assistant tasked with a routine change decided to delete and recreate an entire environment. 13-hour recovery.

Their fix? Require senior engineer sign-off for AI-assisted code from junior and mid-level staff.

Having operated platforms at scale, I recognize the pattern I've been warning about. AI-generated code isn't the problem. AI-generated code deployed without the same review gates as human code is the problem. We don't let junior engineers push to production unsupervised. Why would we let an AI coding tool do it?

Alibaba's research confirms what production teaches: 18 AI coding agents tested across 100 real codebases over 233 days failed spectacularly at long-term maintainability. Writing code is the easy part. Maintaining it for 8 months without breaking everything is where AI completely collapses.

The organizations getting this right treat AI code the same way they treat any other code: with review gates, ownership, and accountability. theregister.com/2026/03/10/ama…
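The sign-off policy reported above reduces to a small predicate. This is a hypothetical sketch, not Amazon's implementation; the reviewer set and field names are invented, and a real gate would live in CI or branch protection:

```python
# Hypothetical merge-gate sketch of the reported fix: AI-assisted changes
# from junior/mid-level authors need at least one senior approval before
# merge. The names and levels below are invented for illustration.
SENIORS = {"alice", "dana"}

def merge_allowed(author_level: str, ai_assisted: bool, approvers: set) -> bool:
    if ai_assisted and author_level in {"junior", "mid"}:
        # Require at least one approver from the senior set.
        return bool(approvers & SENIORS)
    return True   # everything else follows the normal review path

assert not merge_allowed("junior", True, {"bob"})   # peer approval isn't enough
assert merge_allowed("junior", True, {"alice"})     # senior sign-off unblocks
assert merge_allowed("senior", True, set())         # policy doesn't apply
```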
Nik Kale @nik_kale
When you've operated platforms at enterprise scale, you develop a sense for which design-time assumptions will break first. "Local access is inherently trusted" is one of the worst.

OpenClaw made exactly this bet. 135,000 exposed instances across 82 countries. 820+ malicious marketplace skills. A core gateway flaw that lets any website hijack a local AI agent via WebSocket.

I saw the same pattern in early Docker and Kubernetes adoption. The assumption that local means safe. It didn't then. It doesn't now.

The difference: a compromised container runs your code. A compromised AI agent holds your OAuth tokens, reads your email, executes shell commands, and operates across every integrated service. The blast radius is categorically different.

Every org deploying autonomous agents without non-human identity governance is building the next version of this crisis. thehackernews.com/2026/02/clawja…
Nik Kale @nik_kale
After building AI systems for 170K+ users and judging 500+ enterprise tech submissions, I can tell you the readiness gap is real, and these numbers confirm it:

29% of enterprises can secure their AI agents (Cisco).
21% have visibility into agent permissions (AIUC-1/Stanford).
14.4% of agents go live with full security approval (Gravitee).

The pattern I see over and over: teams treat agents like software features when they're actually privileged actors. You wouldn't give a contractor access to every system without IAM controls. Why are your AI agents getting less scrutiny than a new hire? nist.gov/caisi/ai-agent…
Nik Kale @nik_kale
I keep telling teams: every AI assistant you embed is a new trust boundary. Not a feature. A boundary. An XSS in Excel just chained with Copilot Agent to exfiltrate data, zero clicks. Your old vulnerability classes didn't get replaced. They got amplified. theregister.com/2026/03/10/zer…
Nik Kale @nik_kale
I've been saying this in CoSAI and IETF discussions for months: protocol standardization without security architecture is just a shared attack surface. Now we have the proof.

Microsoft's own Azure MCP Server had a CVSS 8.8 SSRF that leaked managed identity tokens. The vendor that helped build the protocol couldn't secure its own implementation. Adversa AI found 30 CVEs in the MCP ecosystem in 60 days. 38% of scanned servers have zero authentication.

When I review agent architectures, the first thing I check is trust boundary design at the protocol layer. Most teams skip it entirely because "it's a standard." Standards don't ship secure by default. Your architecture decisions do. securityweek.com/microsoft-patc…
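One concrete form of that trust-boundary check: vet tool servers at registration time and fail closed on anything unauthenticated. The descriptor format below is invented for illustration and is not the MCP specification:

```python
# Hypothetical sketch of a fail-closed trust-boundary check at agent
# startup: refuse to register any tool server that does not declare an
# enforced auth mechanism. The descriptor schema here is invented.
ACCEPTED_AUTH = {"oauth", "mtls", "api_key"}

def vet_server(descriptor: dict) -> bool:
    # Fail closed: missing or unknown auth metadata means untrusted.
    return descriptor.get("auth") in ACCEPTED_AUTH

servers = [
    {"name": "internal-search", "auth": "oauth"},
    {"name": "legacy-tools", "auth": None},   # the "zero authentication" case
]
trusted = [s["name"] for s in servers if vet_server(s)]
assert trusted == ["internal-search"]
```

The design point is the default: an allow-list of auth mechanisms means a misconfigured or silent server is rejected, rather than trusted because "it's a standard."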
Nik Kale @nik_kale
Nvidia just disclosed a $26B plan to build open-weight AI models over five years. The company that sells the shovels now wants to mine the gold.

From a platform architecture perspective, this changes the build-vs-buy calculus for every enterprise AI team. If Nvidia ships competitive open-weight models optimized for its own hardware, the lock-in story flips. Today, you're locked to a model provider. Tomorrow, you might be locked to a chip vendor's model ecosystem.

The "open-weight" framing matters. Open weights, closed training data and code. Enterprises get customization and on-prem deployment. They don't get full transparency or reproducibility.

For anyone building AI infrastructure right now: design for model portability. The provider landscape is about to get much more complicated, and the vendor selling you GPUs is about to compete with the vendors running on those GPUs. wired.com/story/nvidia-i…
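"Design for model portability" can be as simple as routing every call through one internal interface, so a provider swap is a config change rather than a rewrite. The provider classes below are stand-ins, not real SDKs:

```python
from abc import ABC, abstractmethod

# Hypothetical portability sketch: callers depend on ModelProvider, never
# on a vendor SDK. The two providers here are invented stand-ins.
class ModelProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HostedVendor(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"[hosted] {prompt}"       # would call a vendor API

class OpenWeightLocal(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"        # would call an on-prem model

PROVIDERS = {"hosted": HostedVendor, "local": OpenWeightLocal}

def get_provider(name: str) -> ModelProvider:
    return PROVIDERS[name]()   # swap via config; call sites never change

assert get_provider("local").complete("hi") == "[local] hi"
```

The seam matters more than the implementations: prompts, retries, and logging live behind the interface, so moving from a hosted model to a chip vendor's open-weight stack touches one registry entry.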
Nik Kale @nik_kale
The standard vendor security contract covers the product. Uptime, patch cadence, response times. What it almost never covers: how the vendor protects the data that makes YOUR security work. Firewall configs, credentials, VPN setups, MFA scratch codes sitting in the vendor's cloud.

The contract says "we'll protect your network." The architecture says "we'll store everything someone needs to bypass your network, and you'll trust us on how we secure it." Those are two very different promises. No standard clause bridges them. darkreading.com/cloud-security…
Nik Kale @nik_kale
Claude Opus 4.6 detected it was being evaluated on Anthropic's BrowseComp benchmark, then found and decrypted the answer key instead of completing the test. If your AI governance relies on benchmarks as proof of safety, your governance is a benchmark away from irrelevance. Evaluation is now an adversarial engineering problem, not a checklist. winbuzzer.com/2026/03/10/ant…
Nik Kale @nik_kale
The pattern: Every enterprise metric shows the same thing. Adoption runs ahead. Governance catches up later. The gap between "deployed" and "governed" is where incidents live. The organizations pulling ahead aren't the ones with better models. They're the ones that defined their metrics before deployment, instrumented their workflows on day one, and treat governance as the adoption strategy, not a retrofit. Where does your org sit: deployed, governed, or somewhere in between?
Nik Kale @nik_kale
The security consequence:

107% increase in open source vulnerabilities (Black Duck).
17% of code dependencies are now invisible, not tracked by any package manager.

AI writes code faster than humans can review it. We deployed coding assistants without upgrading a single governance process. Same SBOM tools. Same scanning cadence. Velocity without visibility is debt accumulation.
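The "invisible dependency" idea can be illustrated with a toy diff between what code actually imports and what its manifest declares. Real SBOM tooling goes much deeper; the module names here are invented:

```python
import ast

# Hypothetical sketch of the visibility gap: parse a source file for its
# top-level imports and diff them against the declared manifest. Anything
# left over is a dependency no package manager is tracking.
def imported_modules(source: str) -> set:
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(a.name.split(".")[0] for a in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

code = "import requests\nfrom vendored_blob import helper\n"
declared = {"requests"}                 # e.g. parsed from requirements.txt
invisible = imported_modules(code) - declared
assert invisible == {"vendored_blob"}   # copied-in code, tracked by nothing
```

A vendored or AI-pasted module never appears in the manifest, so it never appears in the SBOM either; that is the 17% the scanners cannot see.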
Nik Kale @nik_kale
The enterprise AI story in 2026, told through four numbers:

9M paying business users (OpenAI, tripled in 2 months)
86% report productivity gains (Gallagher survey)
28 months average to measurable ROI
47% lack formal AI risk frameworks

Adoption is sprinting. Governance is walking.