
Yoni Rozenshein
@1yoni
Security, internals, cryptography, math, and AI. AI-cyber-ing at @Irregular

I factored the number RSA1024-1 using my home-built QPU stack; alarming sign that RSA1024 will soon be broken. I'm choosing Full Disclosure, in the interest of transparency and Science advancement: gist.github.com/veorq/25bee6ef… Non-ZK proof that the correct RSA1024 was used: en.wikipedia.org/w/index.php?ti…
@yuvadm your move

claude-red is a curated library of offensive security skills designed for the Claude skills system. Each skill is a structured SKILL.md file that primes Claude with expert-level methodology for a specific attack surface, from SQLi to shellcode, EDR evasion to exploit development. Resource: github.com/SnailSploit/Cl…
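The post doesn't quote the file format, but Claude skills generally follow a YAML-frontmatter-plus-markdown layout. A minimal sketch of what one such skill file might look like; the skill name, fields beyond `name`/`description`, and section contents here are illustrative, not taken from claude-red:

```markdown
---
name: sqli-methodology
description: Structured methodology for identifying and triaging SQL injection.
---

# SQL Injection Assessment

## Scope
Only test targets the user has explicit authorization for.

## Methodology
1. Enumerate input surfaces (query params, headers, JSON bodies).
2. Probe with benign type-confusion payloads before anything destructive.
3. Classify: error-based, boolean-blind, time-blind, out-of-band.

## Reporting
Record the injection point, payload class, and minimal reproduction.
```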

More than seven years since Forbes launched its first AI 50 list, the artificial intelligence industry has exploded, growing more expansive and increasingly too crowded for a single list to capture. As venture capital firms continue to pour money into AI, a new tier of startups has emerged: younger, earlier-stage companies building fast and raising faster as they try to rival their more established peers. That’s why this year, for the first time, Forbes is introducing the AI 50 Brink List, spotlighting 20 of the most promising Seed and Series A-stage startups building in artificial intelligence. Read more: forbes.com/sites/sofiachi… #ForbesAI50 Photos: Nectar Social, Resolve AI, Periodic Labs, Ashley Maxwell, Giga, Jim Vetter, Studio B Portraits, Axiom Sponsoring Partner @MayfieldFund



The AI security conversation you don't want to miss at #RSAC: @Irregular CEO @Dan_lahav and leaders from @wiz_io are bringing together two of the leading voices in Frontier AI security: John "Four" Flynn from @GoogleDeepMind and Logan Graham, who leads the Frontier Red Team at @AnthropicAI. When: March 25 · 5PM · Wiz House, SF Register 👇







A password like G7$kL9#mQ2&xP4!w looks strong. Every password checker rates it "excellent."

But researchers at Irregular just published something worth knowing: that exact string appeared 18 out of 50 times when Claude was asked to generate a password.

The reason: LLMs are prediction engines. They're optimized for plausibility, not randomness. Claude's passwords had ~27 bits of entropy. A truly random password has ~98.

Password checkers can't detect this. They see character variety. They can't see statistical distribution.

It gets worse for developers: Irregular also found AI coding agents hardcoding these patterns directly into Docker configs and .env files, without the developer knowing. They found the patterns on GitHub.

Are you auditing AI-generated codebases for hardcoded credentials?

#CyberSecurity #PasswordSecurity #DevSecOps #AppSec

Author: T.O. Mercer
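The fix implied by the post is simple: draw passwords from a CSPRNG, not a language model. A minimal sketch in Python; the exact ~98-bit figure depends on the alphabet, and the 72-character alphabet below is an assumption chosen for illustration, not Irregular's test setup:

```python
import math
import secrets
import string

# Illustrative alphabet: letters, digits, and 10 symbols (72 characters total).
ALPHABET = string.ascii_letters + string.digits + "!#$%&*+-?@"

def random_password(length: int = 16) -> str:
    """Generate a password from a CSPRNG (secrets), not a prediction engine."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def max_entropy_bits(length: int, alphabet_size: int) -> float:
    """Theoretical entropy of a uniformly random password: length * log2(|alphabet|)."""
    return length * math.log2(alphabet_size)

print(random_password())
print(f"{max_entropy_bits(16, len(ALPHABET)):.1f} bits")  # ~98.7 bits for 16 chars over 72 symbols
```

A 16-character password over this 72-symbol alphabet carries 16 × log2(72) ≈ 98.7 bits, matching the ~98 bits the post cites for a truly random password; an LLM that keeps regenerating the same "complex-looking" string delivers far less, no matter what a strength meter says.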




LLMs are terrible password generators – and coding agents are making it worse. We tested ChatGPT, Claude, and Gemini, and found the passwords they produce look strong but are fundamentally weak. Here's what we found 🧵










