Dave Kennedy@HackingDave
Alright, I've stayed away from the Mythos stuff for a little bit. Going to comment on that, but also on AI as a whole.
First, this AI industry is absolutely insane. I feel like I'm back in the 90s/2000s with innovation, but it's not tempered or methodical - it's pure chaos.
Every day there is some AI-dude-bro (or gal) clawing for followers, claiming the end of cybersecurity, the end of software engineering, or that this breakthrough changes everything. We're seeing the "streamer" effect from video games now exploding into every industry: people who have never worked in a field become an AI expert, and thus an expert in anything AI touches, because they can prompt.
Largely it's not, but what it is doing is requiring us to understand what AI will do to virtually every industry in the future. I'm sitting here right now at a conference I'm presenting at, and I spoke with an individual who was like, man... I'm just trying to get through this SAP implementation at my company, I don't even know where to start with AI at the moment.
We are still in the extreme early stages of what AI can do, and I think that's really the exciting part - we're at the infancy of this.
Most enterprises can't handle AI yet, just like most companies couldn't handle agile workflows when they came out. It took time, but they eventually adopted them.
I won't dive deep into the scalability of releasing AI to the masses based on compute, power, or subsidies, but these are real hurdles we need to solve. As you can see, Claude's spike in popularity is forcing them to dumb the model down upwards of 65% just to stay afloat (Claude is absolutely awful right now for coding - beware).
Mythos is cool, really cool - but it's not earth-shattering as claimed. We are, though, seeing a glimpse of the potential of what can actually happen.
The ability to do extremely complex tasks, with insane context windows and high-end reasoning. But what we saw from other current frontier models, including open LLMs, is that they were able to find the same issues - they just had to be specifically pointed at those code sections because of context limitations and weaker complex-task reasoning, both of which are drastically improved in Mythos.
What does this mean? Basically, nothing. It's a lot of marketing hype - but it does prove out that as these models become smarter, they will inevitably produce much better code and work in mind-blowing ways we haven't seen before. But it will all come down to cost. Right now Mythos is extremely expensive because of the compute needed, and we may solve that over time, but it's not there yet.
The subsidies right now mean AI is not ready. Scale is our biggest bottleneck, and until that's solved, the industry will not move as fast as it could.
What's particularly impressive is how the open models are starting to perform on par with (or better than) the frontier models while becoming way more efficient and unrestricted - turboquant as an example.
Our ability to run near-parity models on our own hardware will only continue to get better, which is a huge threat to these companies. I at first read Cursor's implementation of Kimi as a sign they were falling behind, since it wasn't "their own model." That wasn't accurate - it's that the open models are performing substantially better than they were six months ago, and will soon be leading the charge or close to it.
What does this mean for cybersecurity? The industry is changing rapidly, and I absolutely freaking love it. We needed a swift kick in the ass in an industry that had been largely stagnant for the past 10-15 years.
What used to be the domain of a handful of incredibly talented security researchers - people who knew systems internals, savants at reverse engineering and reading through millions of lines of ASM - is now being afforded to the masses, though it still has a long way to go.
The reason AI is so good at doing this stuff is because they paved the way, and they will continue to do so in different ways. Not eliminated or removed - enhanced and better than ever. AI is single-handedly the largest act of plagiarism that has ever happened in human history. I just got a 10K check from Claude for ripping off my Metasploit book to train its model to be smarter, actually :P
I am all for things that make the world a safer place. Our goal in cybersecurity is to fix the world, to make technology less harmful to use - we should be adopting this. Note that it's going to come with a ton of fluff, hype, doomsday predictions, and people who are now AI experts or coding experts but have never written a line of code themselves. That's all to be expected if you have ever been to an RSA conference. AI will produce meaningful change in an industry that needed it.
Cybersecurity is much more than bugs or defects - it's protecting against risk. AI is a new, emerging risk, and it's going to keep us insanely busy right now and for the foreseeable future.