8en N@$$!

1.6K posts

@ben_nassi

🎓 Faculty, ECE @TelAvivUni | 🎩 @BlackHatEvents Review Board | Cybersecurity | LLM & AI AppSec

Israel · Joined June 2018
3.8K Following · 1.4K Followers
Pinned Tweet
8en N@$$!@ben_nassi·
On Thursday, February 26th, at 14:00 Eastern Time, I will present a webinar titled "The Promptware Kill Chain: From Prompt Injection to Multi-Step LLM Malware." The talk is based on joint work with Oleg Brodt, Elad Feldman, and Bruce Schneier. Registration link: webinar.connectmeinforma.com/event/register…

Abstract: "The Promptware Kill Chain: From Prompt Injection to Multi-Step LLM Malware" explores the evolution of prompt injection attacks into a sophisticated seven-stage kill chain: initial access, privilege escalation, reconnaissance, persistence, command & control, lateral movement, and actions on objectives. It introduces the concept of Promptware and provides an in-depth analysis of each stage, highlighting advancements in the field over the last three years.

#blackhat #infosec #webinar #prompt_injection #promptware
8en N@$$!@ben_nassi·
🚨 Registration is now open! 🚨 We are excited to announce that registration is officially open for the Real World AI Security Conference 2026.

📅 June 23–25, 2026
📍 Arrillaga Alumni Center, Stanford University

If you work on AI security, adversarial ML, LLM safety, AI system attacks, or defenses, this event is designed for you.

👉 Register here (attendance is limited): seclab.stanford.edu/RealWorldAIsec/

We look forward to bringing together the community to explore the latest advances in AI security in the real world.

#AISecurity #CyberSecurity #MachineLearningSecurity #LLMSecurity #AdversarialML #AIResearch #AIConference #SecurityResearch #RealWorldAISecurity
8en N@$$! retweeted
IDF Spokesperson Effie Defrin
We will not forget the Seventh of October, and we will continue to pursue Israel's enemies - from the architects of the attack to the terrorists who took part in the massacre.
Khamenei.ir@khamenei_ir

The tyrannical Zionists: you cannot recover from the defeat of Saturday, October 7; you brought this calamity upon yourselves.

8en N@$$! retweeted
Black Hat@BlackHatEvents·
Black Hat Webcast 🚨 The Promptware Kill Chain: From Prompt Injection to Multi-Step LLM Malware 🗓 Feb 26, 2026 • 2–3 PM ET. Join Ben Nassi as he breaks down how prompt-injection attacks have evolved into a powerful five-stage LLM malware kill chain. Don't miss this fast-paced, insight-packed session. Register 👉 blackhat.com/html/webcast/0…
8en N@$$!@ben_nassi·
On Thursday, February 26th, at 14:00 Eastern Time, I will present a @BlackHatEvents webinar titled "The Promptware Kill Chain: From Prompt Injection to Multi-Step LLM Malware." The talk is based on joint work with Oleg Brodt, Elad Feldman, and Bruce Schneier. Registration link: webinar.connectmeinforma.com/event/register… #blackhat #infosec #webinar #prompt_injection #promptware
Johann Rehberger@wunderwuzzi23·
Never imagined that I'd have so many YouTube subscribers... 😀 Next stop 10k 🔥
vx-underground@vxunderground·
Academic nerds published a research paper a few days ago about LLM malware and their argument for a new classification of malware dubbed "Promptware". X fucks up links a lot, they don't display properly, so the link to their academic paper will be in the post subsequent to this one.

As is tradition, their academic paper is just a bunch of goobers being all philosophical about shit and including a bunch of fancy pictures and graphs. I unironically sat here and read most of this paper. Is their argument valid? Yes, but some of the examples provided are theoretical and have not existed in-the-wild (yet?). They do however provide real-life examples of LLM payloads which have been successful. I personally have not seen these techniques described, but they provided citations and they are indeed real.

I do malware stuff every day (collecting, reverse engineering, development) and I have not seen any of the papers they reference. This paper has demonstrated, unironically, that there is a gap right now between LLM research and malware research. In essence, we are at the point where LLM research is bleeding into malware research and malware nerds may have to pay more attention. I am now a believer. LLM malware is indeed real and will become a thing.

I give these academic nerds two (2) cat pictures for this interesting paper. This is the first academic paper I've read in a while that I actually think isn't complete dog shit. My main criticism however is they kind of butcher some malware terminology. For example, they incorrectly refer to some of this LLM malware stuff as polymorphic, but it is not polymorphic ... unless we get really, really, really flexible with the definition of polymorphic malware and make it more akin to high-level class inheritance polymorphism. It doesn't really matter that much though because I understand what they're trying to convey.
8en N@$$! retweeted
Lawfare@lawfare·
Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms. Bruce Schneier, @BrodtOleg, Elad Feldman, and @ben_nassi propose a “promptware kill chain” to provide policymakers with a framework to address the escalating AI threat landscape.
8en N@$$!@ben_nassi·
🚀 The Real World AI Security Conference 2026 🚀

We are excited to announce the first three-day Real World AI Security Conference, taking place on June 23–25, 2026, at Stanford University. The conference is intended to brief the most impactful AI security work presented over the past year at leading industry conferences (Black Hat, DEF CON, RSAC, CCC) and top academic venues (CCS, IEEE S&P, USENIX Security, NDSS).

This is a non-profit, community-driven conference focused exclusively on technical AI security talks with real-world impact on deployed AI systems. The goal is to curate a concise agenda that distills the most important advances in AI security from the past year, while bringing together industry practitioners and academic researchers to establish new connections, collaborations, and future research directions.

We will share additional details soon. Here is the link to the conference website: seclab.stanford.edu/RealWorldAIsec/

#security #ai #llm #ai_security #cybersecurity #infosec
8en N@$$! retweeted
Chris Wysopal@WeldPond·
"Prompt injection" is a misleading label. What we're seeing in real LLM systems looks a lot more like malware campaigns than single-shot exploits. This paper argues LLM attacks are a new malware class, Promptware, and maps them to a familiar 5-stage kill chain:

• Initial access (prompt injection)
• Priv esc (jailbreaks)
• Persistence (memory / RAG poisoning)
• Lateral movement (cross-agent / cross-user spread)
• Actions on objective (exfil, fraud, execution)

If you've ever thought "why does this feel like 90s/2000s malware all over again?", that's the point. Read the paper ⬇️ arxiv.org/html/2601.0962…
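The five stages listed in this thread can be sketched as a small Python enum, with a toy classifier that maps technique keywords to stages. This is an illustrative sketch only: the keyword table and `classify` helper are hypothetical and are not taken from the paper.

```python
from enum import Enum

class KillChainStage(Enum):
    """The five promptware kill-chain stages, as enumerated in the thread."""
    INITIAL_ACCESS = "prompt injection"
    PRIVILEGE_ESCALATION = "jailbreaks"
    PERSISTENCE = "memory / RAG poisoning"
    LATERAL_MOVEMENT = "cross-agent / cross-user spread"
    ACTIONS_ON_OBJECTIVE = "exfiltration, fraud, execution"

def classify(technique: str) -> KillChainStage:
    """Toy heuristic: map an observed technique description to a stage."""
    table = {
        "injection": KillChainStage.INITIAL_ACCESS,
        "jailbreak": KillChainStage.PRIVILEGE_ESCALATION,
        "poisoning": KillChainStage.PERSISTENCE,
        "worm": KillChainStage.LATERAL_MOVEMENT,
        "exfiltration": KillChainStage.ACTIONS_ON_OBJECTIVE,
    }
    for keyword, stage in table.items():
        if keyword in technique.lower():
            return stage
    raise ValueError(f"unclassified technique: {technique}")

print(classify("indirect prompt injection via email"))  # KillChainStage.INITIAL_ACCESS
```

A real mapping would of course be richer (one incident usually spans several stages in sequence), but the enum makes the point of the thread concrete: each published LLM attack slots into a stage of a conventional kill chain.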
8en N@$$! retweeted
AISecHub@AISecHub·
How Prompt Injections Gradually Evolved Into a Multi-Step Malware - arxiv.org/pdf/2601.09625

In this paper, we propose that attacks targeting LLM-based applications constitute a distinct class of malware, which we term promptware, and introduce a five-step kill chain model for analyzing these threats. The framework comprises Initial Access (prompt injection), Privilege Escalation (jailbreaking), Persistence (memory and retrieval poisoning), Lateral Movement (cross-system and cross-user propagation), and Actions on Objective (ranging from data exfiltration to unauthorized transactions). By mapping recent attacks to this structure, we demonstrate that LLM-related attacks follow systematic sequences analogous to traditional malware campaigns. The promptware kill chain offers security practitioners a structured methodology for threat modeling and provides a common vocabulary for researchers across AI safety and cybersecurity to address a rapidly evolving threat landscape.

@ben_nassi, @schneierblog, @BrodtOleg - @TelAvivUni, @Kennedy_School, @Harvard, @munkschool, @UofTNews, @bengurionu

#LLMSecurity #PromptInjection #Promptware #AIAttacks #KillChain #Cybersecurity #Jailbreak #AgentSecurity #ThreatModeling #AdversarialAI #MalwareAnalysis #RAGSecurity
8en N@$$! retweeted
Oleg Brodt@BrodtOleg·
1/3 The Promptware Kill Chain: In a new paper co-authored with Ben Nassi @ben_nassi and Bruce Schneier @schneierblog, we analyze how prompt injections gradually evolved into multi-step malware consisting of 5 steps. Link to the paper: arxiv.org/abs/2601.09625
8en N@$$! retweeted
Johann Rehberger@wunderwuzzi23·
Had a fantastic time presenting at the second AI Agent Security Summit! This time in SF. Great talks, great people, and great conversations. Big thanks to @zenitysec for hosting an awesome event! 🔥 And thx @mbrg0 for taking this picture.
8en N@$$! retweeted
Michael Bargury@mbrg0·
Ben shares the progression of AI security vulnerability discoveries. Back in March 2024, we only knew about weak persistence mechanisms