Michael


higher, relentlessly grateful to be along for a truly generational climb

Octane is looking for elite security researchers. We’re launching new initiatives and want to collaborate with researchers who have a proven track record in bug bounties, audit competitions, or high-impact vulnerability research. DM us with your best work if interested.


Looking for an SDR who wants to make it in AI-native security. @octane_security

What you'll do: Own business development with startups and companies bringing secure code review into their workflows. Hit your numbers and you'll start leading full sales cycles fast, on a direct path to promotion.

What we're looking for: You've worked with developers. You understand security, and you see what AI is doing for teams that move early on the right tools. You know nothing gets handed to you. You show up, you build, you rise to the occasion every day.

Location: NY or SF preferred. Remote considered if you have the track record and the habits to back it up.

Think that's you? DM me.


AWS runs one of the most elite cybersecurity operational teams in the world. Google has Mandiant, widely considered one of the most competent threat intelligence and security response orgs. Microsoft is not exactly a pillar of cybersecurity, but it recently invested a lot of money in Anthropic, something both Google and Amazon had already done. JPMorgan is a big buyer of software but won’t go around sharing its insights. Apple is notoriously uncooperative from a security perspective. Cisco is a walking vulnerability risk. CrowdStrike and Palo Alto Networks are arguably the two most important cybersecurity companies today, but neither has a real presence in AppSec.

Announcing what’s allegedly one of the most competent pen testing and AppSec tools in the industry, then refusing to make it available except to a selection of partners, none of whom has a deep AppSec background, is just weird. Another 40 orgs will allegedly get access to it, but they are framed as building or maintaining critical software infrastructure, not operationalising AI for application security.


My thoughts on Anthropic’s Mythos and some interesting takes on AI codegen vs. AI security from the @nytimes…

First, agentic workflows are clearly increasing software output much faster than anyone can review it. More code, more complexity, more edge cases, and more risk. And there just aren’t enough application security engineers in the world to absorb that increase on their own.

Second, AI is compressing the time and skill required to find and exploit vulnerabilities. To stay anywhere near the edge of the curve, “you have to fight A.I. with A.I.,” as @fdesouza puts it.

Anthropic’s release today of Mythos takes us even further into this future. Anthropic says Mythos is its most capable frontier model to date, with major jumps over Opus 4.6 on coding and reasoning benchmarks. It says Mythos has already identified zero-days in major operating systems and browsers, and can exploit them too.

Put all this together and the conclusion is straightforward: the old approach to security no longer matches the new reality of software. Agentic workflows lead to exponentially larger and faster-moving attack surfaces. Every code change carries complexity and risk that even the engineer who pushed it probably doesn’t fully understand.

This is why I think the market will separate quickly between companies that treat AI as a quick and easy growth hack and those that understand it as a full-stack operational change. The challenge for security providers will be offering a genuinely different vision of what full-stack operational security (systems, people, and processes) looks like, rather than just building harnesses around whichever LLM is leading the benchmarks that week. We need systems that can reason about code at the speed it ships, operate continuously inside the development loop, and actually help teams find meaningful risk before it reaches production. Systems that increase the bandwidth of talented security engineers rather than generating more noise for them to wade through.

The deeper bottleneck here is verification. A paper titled “Some Simple Economics of AGI” makes the point clearly: as the cost to automate falls, the cost to verify does not fall nearly as fast. Human oversight is still constrained by time, context, judgment, and hard-won experience. In security, that means the real scarcity is no longer just people who can review alerts by hand. It’s security and engineering talent that can configure these systems around a team’s actual codebase and threat model, and step in alongside the automation to triage the ambiguous or high-stakes cases.

That’s especially true in domain-specific systems like blockchain infrastructure, payment rails, and other high-stakes software. The most important bugs there are often niche, contextual, system-wide vulnerabilities. They sit in assumptions, state transitions, edge-case logic, and system interactions that require unique context and domain-specific models to identify and verify.

There will be room for multiple platforms in AI security. But the most valuable ones will be built by people with a deep understanding of specific domains, who can integrate tightly with customer workflows, and who help teams separate theoretical noise from actual risk with unique or proprietary data that lets the model surface high-signal, domain-specific findings.

Early frontier model access, though, is not a durable edge. Competitive pressure will push frontier capabilities outward over time, whether through APIs, cloud partnerships, managed access, or even industrial espionage that makes its way into open-source models. What creates a truly durable advantage is the security research talent, experience, domain-specific data, and customer context required to make those models produce unique, expert-level findings that others cannot. These general models do not perform at maximum capability out of the box; they require expert inputs to produce their best outputs.

This is what we do at Octane. We combine the best frontier models, optimized for security use cases, with our own domain-specific models and high-end security research. Our researchers configure and instrument the system to get the most optimal findings, then provide continued support as the platform surfaces bugs autonomously. This is how we see security scaling to meet the threats we all now face.
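To make the verification bottleneck concrete, here is a toy model in Python. All numbers are hypothetical, chosen only to illustrate the paper’s point: even if AI cuts the cost of generating a candidate finding a thousandfold, the roughly fixed cost of expert triage quickly dominates, so the total cost per verified finding barely moves.

```python
# Illustrative only: hypothetical numbers, not Octane data.
# As the cost of generating candidate findings falls, total cost
# becomes dominated by the fixed cost of human verification.

HUMAN_TRIAGE_COST = 50.0  # assumed dollars of expert time per candidate reviewed

def cost_per_verified_finding(auto_cost: float) -> float:
    """Total cost to generate and then human-verify one candidate finding."""
    return auto_cost + HUMAN_TRIAGE_COST

for auto_cost in [100.0, 10.0, 1.0, 0.1]:
    total = cost_per_verified_finding(auto_cost)
    share = HUMAN_TRIAGE_COST / total
    print(f"automation ${auto_cost:>6.2f} -> total ${total:>6.2f}, "
          f"verification is {share:.0%} of the cost")
```

Under these assumed numbers, dropping the automation cost from $100 to $0.10 per candidate only cuts the total from $150 to about $50, and verification’s share climbs toward 100%. That is the asymmetry the post is describing.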

BREAKING: 13 shots fired into home of Indianapolis councilor; note reading “No data centers” left at scene.


How ARE works:

Phase 0: We work with the client to define the context of their codebase: architecture, threat model, economic assumptions, etc.

Phase 1: AI explores at scale. It maps the codebase, traces execution paths, and explores the long tail of edge cases humans can’t exhaust on their own.

Phase 2: Human researchers evaluate the AI’s signals. They cut down the noise, note the serious attack paths, prune hypotheses, and refine the overall context.

Phase 3: Repeat. Each pass goes deeper along the paths that matter, until meaningful attack vectors have been covered.
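A minimal sketch of that loop in Python, under stated assumptions: Context, Finding, ai_explore, and human_triage are illustrative names I’ve introduced for the sketch, not Octane’s actual API, and the phase bodies are placeholders.

```python
# Hypothetical sketch of the ARE loop described above; names and
# stub bodies are illustrative, not Octane's actual platform.
from dataclasses import dataclass, field

@dataclass
class Context:
    architecture: str                                  # Phase 0 inputs
    threat_model: str
    notes: list[str] = field(default_factory=list)     # refined each pass

@dataclass
class Finding:
    path: str         # where in the codebase the signal points
    hypothesis: str   # suspected attack vector

def ai_explore(ctx: Context) -> list[Finding]:
    """Phase 1: map the codebase, trace execution paths, surface candidates."""
    return []  # placeholder: a real system drives a model over the code here

def human_triage(ctx: Context, candidates: list[Finding]) -> list[Finding]:
    """Phase 2: researchers cut noise, keep serious paths, refine context."""
    ctx.notes.append(f"reviewed {len(candidates)} candidates")
    return candidates  # placeholder: expert judgment prunes this list

def are_loop(ctx: Context, max_passes: int = 5) -> list[Finding]:
    """Phase 3: repeat, going deeper along the paths that matter."""
    confirmed: list[Finding] = []
    for _ in range(max_passes):
        kept = human_triage(ctx, ai_explore(ctx))
        if not kept:  # no new meaningful attack vectors this pass
            break
        confirmed.extend(kept)
    return confirmed
```

The design point the phases encode is the division of labor: the model supplies breadth (the long tail of paths), the researchers supply judgment, and the refined context feeds the next pass so each iteration digs deeper rather than wider.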

