
Why securing AI is harder than anyone expected and the approaching AI security crisis with @SanderSchulhoff

Sander is a leading researcher in the field of adversarial robustness: the art and science of getting AI systems to do things they shouldn't, through jailbreaking and prompt injection. What Sander shares in this conversation is essentially that all of the AI systems we use day to day can be tricked into doing things they shouldn't, that there isn't really a solution to this problem, and that the companies trying to sell solutions for it are mostly selling BS.

This conversation has nothing to do with AGI; this is a problem today. The only reason we haven't seen massive hacks and serious damage from AI tools so far is that they haven't been given that much power yet, and they aren't that widely adopted yet. But with the rise of agents (which can take actions on your behalf), robots, and even AI-powered browsers, the risk is going to increase very quickly.

This is a really important topic that opened my mind (and scared me), and it's something we all need to have a basic understanding of as AI becomes more prevalent in our lives.
Inside:
🔸 A primer on jailbreaking and prompt injection attacks
🔸 Why AI guardrails don't work
🔸 Why we haven't seen major AI security incidents yet (but soon will)
🔸 Why AI browser agents are extremely vulnerable
🔸 The practical steps organizations should take instead of buying ineffective security tools
🔸 Why solving this requires merging classical cybersecurity expertise with AI knowledge

Listen now 👇
• YouTube: youtu.be/J9982NLmTXg
• Spotify: open.spotify.com/episode/0IZE32…
• Apple: podcasts.apple.com/us/podcast/the…

Thank you to our wonderful sponsors for supporting the podcast:
🏆 @datadoghq — Now home to Eppo, the leading experimentation and feature flagging platform: datadoghq.com/lenny
🏆 @getmetronome — Monetization infrastructure for modern software companies: metronome.com
🏆 @gofundme Giving Funds — Make year-end giving easy: gofundme.com/lenny