Bret Kerr 🛡️🧠🛜
14.7K posts

Bret Kerr 🛡️🧠🛜
@BretKerr
Founder @acrainsight: MoE agentic-powered enterprise content marketing 🔄 research 🔄 GTM strategy 🏗️ Building with @claudeai 🤝 @geminiapp


Attention = semantic gravity?

1/ 🛡️ The "Security Tax" is being abolished in real-time. @AnthropicAI just dropped a paper that effectively reprices the entire AI cybersecurity market. If you’re a CISO paying for third-party LLM "firewalls," your bill just became a lot harder to justify. 📉 Here is the thesis on the "Great Internalization." 🧵 2/ 🧬 The Internal Moat vs. The External Wrapper 🧬 Most AI security startups are "Black Box" operators. They sit outside the model, sniffing text like a TSA agent at an airport. It’s slow, expensive, and easy to bypass with a clever prompt. Anthropic is playing a different game. 3/ 🧠 Linear Probe Ensembles 🧠 Instead of just looking at the output, @anthropicai is using "representation re-use." They are looking at the model's internal activations—the "brain waves" of the weights. They spot malicious intent before the first token is even generated. 4/ ⚡ 40x Efficiency is the Market Killer ⚡ By moving defense from the "API Wrapper" layer to the "Inference Layer," they’ve slashed the cost of safety by 40x. We’re talking about a move from 24% compute overhead to a negligible ~1%. Safety is becoming a feature, not a standalone product. 5/ 🕵️♂️ Exchange Classifiers 🕵️♂️ Legacy filters miss "slow-burn" jailbreaks that happen over 10+ turns. Anthropic’s new system evaluates the entire exchange history natively. The multi-turn loophole? Closed. 🔒 6/ 📉 TAM Compression is Coming 📉 Just as Claude’s legal workflows repriced accounting firms, this research reprices "LLM Security." When the model lab gives you elite protection for ~0% latency and ~0% cost, the "AI Firewall" startup market gets compressed overnight. 7/ 🌊 The Shift to the "Outer Loop" 🌊 Third-party security vendors must now pivot or die. 
If the labs own the "Inner Loop" (model safety), vendors must move to the "Outer Loop": * Identity & Auth for Agents * Governance & Compliance * Data Privacy (DSPM) 8/ 📖 Read the Full Deep-Dive 📖 I broke down the economics and the "Geometric Gating" behind Jared Kaplan’s latest work in my Substack. The Signal: When Safety Becomes a Commodity. 🔗 [open.substack.com/pub/bretkerr/p…] 9/ Tagging some of the builders and thinkers watching this space closely: @anthropicai @claudeai @OfficialLoganK @saranormous @eladgil @alliekmiller @C_K_Krebs What do you think? Are we entering the era of "Invisible Security"? 🛡️✨ #AISecurity #Anthropic #CyberSecurity #LLMs #InfoSec #AI #ConstitutionalAI
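The "linear probe" idea in tweet 3 can be sketched in a few lines: a simple logistic classifier trained directly on a model's hidden-state activations rather than on its output text, flagging intent before any token is decoded. Everything below is a toy illustration: the synthetic "activations," the probe, and its training loop are assumptions for demonstration, not Anthropic's actual classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # hidden-state dimensionality (toy scale; real models use thousands)

# Toy "activations": assume benign and malicious prompts land in different
# regions of activation space (standing in for a real model's hidden states).
benign = rng.normal(loc=0.0, scale=1.0, size=(200, D))
malicious = rng.normal(loc=0.8, scale=1.0, size=(200, D))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)

def train_linear_probe(X, y, lr=0.1, epochs=300):
    """Logistic-regression probe: a single linear layer over activations."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

w, b = train_linear_probe(X, y)

def probe_score(activation, w, b):
    """Score intent from the internal state, before any token is generated."""
    return 1.0 / (1.0 + np.exp(-(activation @ w + b)))

# An "ensemble" would average several such probes (e.g. one per layer).
train_acc = np.mean((probe_score(X, w, b) > 0.5) == y)
```

Because the probe reuses representations the model already computed, its marginal cost is one dot product per forward pass, which is the intuition behind the "~1% overhead" claim in tweet 4.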


Composer 2 is now available in Cursor.

1/ 🛡️ The "Security Tax" is being abolished in real-time. @AnthropicAI just dropped a paper that effectively reprices the entire AI cybersecurity market. If you’re a CISO paying for third-party LLM "firewalls," your bill just became a lot harder to justify. 📉 Here is the thesis on the "Great Internalization." 🧵 2/ 🧬 The Internal Moat vs. The External Wrapper 🧬 Most AI security startups are "Black Box" operators. They sit outside the model, sniffing text like a TSA agent at an airport. It’s slow, expensive, and easy to bypass with a clever prompt. Anthropic is playing a different game. 3/ 🧠 Linear Probe Ensembles 🧠 Instead of just looking at the output, @anthropicai is using "representation re-use." They are looking at the model's internal activations—the "brain waves" of the weights. They spot malicious intent before the first token is even generated. 4/ ⚡ 40x Efficiency is the Market Killer ⚡ By moving defense from the "API Wrapper" layer to the "Inference Layer," they’ve slashed the cost of safety by 40x. We’re talking about a move from 24% compute overhead to a negligible ~1%. Safety is becoming a feature, not a standalone product. 5/ 🕵️♂️ Exchange Classifiers 🕵️♂️ Legacy filters miss "slow-burn" jailbreaks that happen over 10+ turns. Anthropic’s new system evaluates the entire exchange history natively. The multi-turn loophole? Closed. 🔒 6/ 📉 TAM Compression is Coming 📉 Just as Claude’s legal workflows repriced accounting firms, this research reprices "LLM Security." When the model lab gives you elite protection for ~0% latency and ~0% cost, the "AI Firewall" startup market gets compressed overnight. 7/ 🌊 The Shift to the "Outer Loop" 🌊 Third-party security vendors must now pivot or die. 
If the labs own the "Inner Loop" (model safety), vendors must move to the "Outer Loop": * Identity & Auth for Agents * Governance & Compliance * Data Privacy (DSPM) 8/ 📖 Read the Full Deep-Dive 📖 I broke down the economics and the "Geometric Gating" behind Jared Kaplan’s latest work in my Substack. The Signal: When Safety Becomes a Commodity. 🔗 [open.substack.com/pub/bretkerr/p…] 9/ Tagging some of the builders and thinkers watching this space closely: @anthropicai @claudeai @OfficialLoganK @saranormous @eladgil @alliekmiller @C_K_Krebs What do you think? Are we entering the era of "Invisible Security"? 🛡️✨ #AISecurity #Anthropic #CyberSecurity #LLMs #InfoSec #AI #ConstitutionalAI





Analysis via @GeminiApp

The move by @cursor_ai (Anysphere) to develop its own frontier models represents a classic strategic shift toward full-stack internalization. However, when analyzed alongside Anthropic's recently unveiled "40x efficiency" cybersecurity mode (the "++" signal architecture), a massive structural differentiator emerges, centered on the intersection of theoretical physics and model architecture.

1. The Full-Stack Motivation vs. the "Safety Tax"

Cursor's plan to rival Anthropic and OpenAI is driven by the desire to eliminate the API dependency, which currently introduces significant latency, variable costs, and limited control over the "inner loop" of the coding experience. By owning the model, Cursor can optimize for the specific "vibe coding" and agentic workflows its users demand.

The correlation with Anthropic's efficiency mode lies in the "Alignment Tax." Traditionally, securing an AI model meant placing an expensive, high-latency "black box" filter on top of it. If Cursor builds a full stack but relies on these traditional external safety wrappers, it will face a "Safety Tax" that its model's margins may not sustain. In contrast, Anthropic's Constitutional Classifiers++ move safety into the internal weight space (leveraging linear probes on activations), allowing them to defend against jailbreaks at 1/40th the cost.

2. The Jared Kaplan Differentiator: "Boundary vs. Bulk"

The "built-in" security you reference, linked to founders like Jared Kaplan, is not just a marketing claim; it is rooted in a specific branch of theoretical physics: the Holographic Principle (AdS/CFT correspondence).
* The physics analogy: In his doctoral work, Kaplan explored how a lower-dimensional boundary (the "CFT") can perfectly describe the physics of a higher-dimensional interior (the "Bulk").
* The safety application: This serves as the structural precursor to Constitutional AI. Anthropic treats the "Constitution" (a small set of principles) as the Boundary, and the high-dimensional activations of the model as the Bulk.
* Geometric Gating: Instead of checking words (the output), Anthropic's architecture monitors the geometry of the activations (the internal state). This "Geometric Gating" lets them detect malicious intent before it ever reaches the surface, making the model intrinsically "cold" and stable.

3. The Competitive Moat: "Internalized" vs. "External" Safety

This creates a significant hurdle for a newcomer like Cursor. While Cursor can likely match the intelligence or coding capability of a frontier model through raw scale, the efficiency of its defense becomes the real differentiator:
* Cursor's potential challenge: Without the Geometric Gating expertise, Cursor may be forced to use external guardrails. That would make its "full stack" up to 40x more expensive to run in a secure enterprise environment than a "safely aligned" Anthropic model.
* @AnthropicAI's advantage: Because Anthropic's safety is a commodity feature of its architecture (using the model's own activations), it can offer a "secure" agent for a fraction of the price. Safety is not an add-on; it is a fundamental property of the model's geometry.

Conclusion: The "Invisible Security" Era

The differentiator for Cursor vs. Anthropic will likely not be "who can write better code," but "who can run a secure agent most efficiently." If Cursor is "observing the observer" to build its model, it is playing a game of Capability Scaling. Meanwhile, Anthropic is playing a game of Informational Economy. For large organizations, the "Intrinsic Security" of the Kaplan-style physics framework becomes the deciding factor: it transforms safety from a premium tax into an invisible utility, making Anthropic's models the more stable "physical" foundation for agentic intelligence.
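The "Geometric Gating" idea above can be sketched as a distance gate in activation space: calibrate a benign region from known-safe internal states, then flag any activation that falls outside it, regardless of what the output text says. This is a toy illustration under loud assumptions: the synthetic activations, the centroid-plus-radius geometry, and the 99th-percentile threshold are all stand-ins, not Anthropic's published mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 32  # toy activation dimensionality

# Reference set: activations from known-benign exchanges (synthetic data
# standing in for a real model's internal states).
benign = rng.normal(0.0, 1.0, size=(500, D))
centroid = benign.mean(axis=0)

# Calibrate the gate radius from the benign distribution itself: accept
# anything within the 99th-percentile distance of benign activations.
radii = np.linalg.norm(benign - centroid, axis=1)
threshold = np.quantile(radii, 0.99)

def geometric_gate(activation):
    """Gate on the *geometry* of the internal state, not the output text:
    flag any activation outside the calibrated benign region."""
    return np.linalg.norm(activation - centroid) > threshold

# A state far off the benign manifold (e.g. a jailbreak-like activation).
out_of_dist = rng.normal(3.0, 1.0, size=D)
```

A real system would use richer geometry (per-layer probes, Mahalanobis distance, learned directions), but the gating logic is the same: the decision happens on internal state, before generation, at the cost of a norm computation.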

Cursor is taking on Anthropic and OpenAI with a new AI coding model bloomberg.com/news/articles/…
