
I’m very bullish on AI-assisted security work in 2026.
Models are now good enough to spot real vulnerabilities and even propose reasonable patches, especially when you give them the right context about your system.
That doesn’t replace a human security engineer, to be clear.
But it massively upgrades their throughput and coverage.
Instead of manually combing through every suspicious pattern, you can point the model at a repo and say, “show me the sharp edges first.”
People are already getting serious bug bounties and client work off this combo of domain expertise and model leverage.
You still need to know which findings matter, how to exploit them, and how to fix them.
That’s the human part.
But the grunt work of scanning, summarizing, and generating draft remediations?
That’s getting increasingly automated.
That’s the exact lane my static analysis tool sits in.
It turns historically noisy security work into something high-signal and repeatable.
1 - Run a scan
2 - Get a curated list of likely issues
3 - Click a button to open a PR that tries to fix them
It’s not magic, but it’s a huge jump from “grep and vibes.”
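To make the workflow concrete, here's a minimal sketch of what steps 1 and 2 look like under the hood: scan source text against a small ruleset and emit a curated list of likely issues. The rules and names here are hypothetical illustrations, not the tool's actual ruleset, and a real scanner would parse ASTs rather than grep lines.

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:
    path: str
    line: int
    rule: str
    snippet: str

# Hypothetical rules for illustration: regex pattern -> rule name.
RULES = {
    r"subprocess\.\w+\(.*shell=True": "shell-injection-risk",
    r"yaml\.load\((?!.*Loader)": "unsafe-yaml-load",
    r"pickle\.loads\(": "untrusted-deserialization",
}

def scan_source(path: str, source: str) -> list[Finding]:
    """Steps 1-2: scan text, return a curated list of likely issues."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, rule in RULES.items():
            if re.search(pattern, line):
                findings.append(Finding(path, lineno, rule, line.strip()))
    return findings

if __name__ == "__main__":
    demo = "import pickle\ndata = pickle.loads(blob)\n"
    for f in scan_source("app.py", demo):
        print(f"{f.path}:{f.line} [{f.rule}] {f.snippet}")
```

Step 3, opening a fix PR, would sit on top of this: feed each `Finding` plus surrounding context to a model, then push the proposed patch as a branch for human review.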
As models keep getting smarter and context windows keep expanding, this niche only gets more overpowered.