FuzzingLabs

801 posts


@FuzzingLabs

Research-oriented Cybersecurity startup specializing in #fuzzing, Vulnerability Research & Offensive security on Mobile, Browser, AI/LLM, Network & Blockchain.

Paris · Joined August 2020
3.7K Following · 8.9K Followers
Pinned Tweet
FuzzingLabs @FuzzingLabs
💥 We’ve just raised €1M in pre-seed funding to accelerate the development of FuzzForge.

When I started FuzzingLabs, everything was bootstrapped: our audits, our trainings, our R&D. No investors, no funding. Just a passionate team obsessed with offensive security and the belief that we could build something different. Three years later, we are a team of 30, and we are now entering a new chapter.

This funding will allow us to:
- accelerate the open-source development of FuzzForge,
- build its marketplace of agents and workflows,
- and expand the SaaS version to automate vulnerability research at scale.

A huge thanks to @class_lambda and @ergodicgroup for their strategic support and trust in our vision: making offensive security more intelligent, collaborative, and automated.

FuzzForge is already open source and under active development. You can check it out here:
🔗 github.com/FuzzingLabs/fu…
5 replies · 41 reposts · 267 likes · 19.2K views
FuzzingLabs @FuzzingLabs
🚀 New training live: Masterclass – Scapy for Offensive Security

Learn how to:
• Craft & manipulate packets
• Build & fuzz a DNS server
• Do differential fuzzing
• Reproduce real CVEs
• Analyze parsing & overflow bugs

Hands-on. Offensive. Practical.

Enroll 👇 academy.fuzzinglabs.com/masterclass-sc…
0 replies · 8 reposts · 44 likes · 2.6K views
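To give a flavor of the packet-crafting material: a DNS query can be hand-assembled straight from the RFC 1035 wire format. The course itself uses Scapy (a Python library); the sketch below is an illustrative Rust translation of the same idea, not course material, and the query parameters are arbitrary examples.

```rust
/// Build a minimal DNS query for `name` (QTYPE=A, recursion desired),
/// following the RFC 1035 wire format: a 12-byte header, then the
/// question section with length-prefixed labels.
pub fn build_dns_query(txid: u16, name: &str) -> Vec<u8> {
    let mut pkt = Vec::new();
    pkt.extend_from_slice(&txid.to_be_bytes());      // transaction ID
    pkt.extend_from_slice(&0x0100u16.to_be_bytes()); // flags: RD=1
    pkt.extend_from_slice(&1u16.to_be_bytes());      // QDCOUNT = 1
    pkt.extend_from_slice(&[0, 0, 0, 0, 0, 0]);      // AN/NS/AR counts = 0
    for label in name.split('.') {
        pkt.push(label.len() as u8);                 // length-prefixed label
        pkt.extend_from_slice(label.as_bytes());
    }
    pkt.push(0);                                     // root label terminator
    pkt.extend_from_slice(&1u16.to_be_bytes());      // QTYPE = A
    pkt.extend_from_slice(&1u16.to_be_bytes());      // QCLASS = IN
    pkt
}

fn main() {
    let q = build_dns_query(0x1234, "example.com");
    // 12-byte header + "example.com" as labels + terminator + QTYPE/QCLASS
    println!("{} bytes: {:02x?}", q.len(), q);
}
```

A fuzzer built on this would mutate the length prefixes and label bytes to probe a DNS server's parser, which is the kind of target the masterclass builds.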
FuzzingLabs @FuzzingLabs
We just rewrote FuzzForge from scratch and open-sourced it.

Old: Temporal + MinIO + workers + backend. Heavy.
New: CLI + MCP server + containerized modules. Zero infra.

🖥️ Runs fully local
🧠 Plug in your favorite LLM (Copilot, Claude, local models…)
🔗 AI agents orchestrate full security pipelines via MCP

Demo: 4 modules, 3 min, 994 crashes → 3 unique bugs.

AI-native security research.
github.com/FuzzingLabs/fu…
4 replies · 30 reposts · 180 likes · 10.4K views
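The jump from 994 crashes to 3 unique bugs in the demo is a deduplication step. One common approach is bucketing crashes by their top stack frames, sketched below in Rust; the frame names are invented for illustration, and FuzzForge's actual triage logic may differ.

```rust
use std::collections::HashMap;

/// Bucket crashes by their top-N stack frames: crashes sharing the same
/// leading frames are treated as one underlying bug. (Illustrative only;
/// real triage also weighs fault address, signal, sanitizer report, etc.)
pub fn dedupe_crashes<'a>(
    stacks: &'a [Vec<&'a str>],
    top_n: usize,
) -> HashMap<Vec<&'a str>, usize> {
    let mut buckets: HashMap<Vec<&str>, usize> = HashMap::new();
    for stack in stacks {
        let key: Vec<&str> = stack.iter().take(top_n).copied().collect();
        *buckets.entry(key).or_insert(0) += 1; // count crashes per bucket
    }
    buckets
}

fn main() {
    // Hypothetical crash stacks from a fuzzing run.
    let stacks = vec![
        vec!["parse_header", "read_chunk", "main"],
        vec!["parse_header", "read_chunk", "main"],
        vec!["memcpy", "decode_frame", "main"],
    ];
    let buckets = dedupe_crashes(&stacks, 2);
    println!("{} unique bugs from {} crashes", buckets.len(), stacks.len());
}
```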
FuzzingLabs @FuzzingLabs
🇨🇦 FuzzingLabs at @reconmtl Montréal 2026!

This June, we’re delivering 3 advanced, hands-on trainings at REcon:
🦀 Rust Development for Cyber Security
🔍 Reversing Modern Rust & Go Binaries
📡 Attacking Real-World IoT & Embedded Devices

📅 June 15–18, 2026
🔗 recon.cx/2026/en/index.…

Deep technical content. Real-world targets. No fluff. See you in Montréal 👋
0 replies · 4 reposts · 22 likes · 2K views
FuzzingLabs @FuzzingLabs
🚀 Open-sourcing MCP Security Hub

A growing collection of MCP servers bringing security tools to AI assistants: Nmap, Ghidra, Nuclei, SQLMap, Hashcat… and we're just getting started.

Contribute your favorite tools 🛠️
⭐ github.com/FuzzingLabs/mc…
3 replies · 43 reposts · 254 likes · 15.7K views
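Conceptually, each server in such a hub exposes a CLI security tool to an assistant through a uniform call interface. The Rust sketch below shows only that dispatch idea; the tool names and argument templates are hypothetical, and the real servers implement the Model Context Protocol (JSON-RPC over stdio or HTTP) rather than this toy registry.

```rust
use std::collections::HashMap;

/// Map from an exposed tool name to the underlying command-line template.
/// An MCP server advertises tools like these and runs them on the model's
/// behalf. (Hypothetical mappings, for illustration only.)
fn tool_registry() -> HashMap<&'static str, Vec<&'static str>> {
    HashMap::from([
        ("port_scan", vec!["nmap", "-sV"]),
        ("template_scan", vec!["nuclei", "-u"]),
    ])
}

/// Resolve a tool call into the argv the server would execute,
/// or None if the tool is not exposed.
pub fn resolve_call(tool: &str, target: &str) -> Option<Vec<String>> {
    let registry = tool_registry();
    let base = registry.get(tool)?;
    let mut argv: Vec<String> = base.iter().map(|s| s.to_string()).collect();
    argv.push(target.to_string());
    Some(argv)
}

fn main() {
    if let Some(argv) = resolve_call("port_scan", "192.0.2.1") {
        println!("would exec: {}", argv.join(" "));
    }
}
```

Keeping an explicit allow-list of tools and argument templates, instead of letting the model build raw shell commands, is also the safer design for this kind of bridge.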
FuzzingLabs retweeted
TrendAI Zero Day Initiative
Confirmed! Julien COHEN‑SCALI of @FuzzingLabs targeted the Phoenix Contact CHARX SEC‑3150, chaining two vulnerabilities - an authentication bypass and privilege escalation - to earn $20,000 USD and 4 Master of Pwn points. #Pwn2Own #P2OAuto
5 replies · 10 reposts · 33 likes · 5K views
FuzzingLabs @FuzzingLabs
We just published Part 1 of our deep-dive on how we’re building #FuzzForge.

Security tools exist. Orchestration doesn’t. FuzzForge chains SAST + fuzzing + dynamic analysis + AI agents into auditable, adaptive workflows, not black-box “AI hacking.”

This is why we’re rethinking security automation 👇
fuzzinglabs.com/build-fuzzforg…
4 replies · 14 reposts · 50 likes · 3.3K views
FuzzingLabs @FuzzingLabs
New feature for our #Solana static analyzer, Sol-azy 🚀

We just released Recap, a one-command way to turn any Anchor project into a clean, audit-friendly overview. It extracts signers, writables, constraints, PDAs, and memory ops into a single Markdown report, perfect for fast triage & attack-surface mapping.

Full details: fuzzinglabs.com/solana-solazy-…
2 replies · 11 reposts · 35 likes · 3K views
FuzzingLabs @FuzzingLabs
🚀 New Course Released: Fuzzing #Windows Userland Applications (3-Day Certified Training)

This is our most advanced Windows-focused training yet, built for security engineers, vulnerability researchers, and pentesters who want to master real-world fuzzing on targets like WinRAR, IrfanView, PDF-XChange, and Assault Cube, using tools such as winAFL, Jackalope, Lighthouse, and WTF snapshot fuzzing.

You’ll learn how to:
- Build and optimize fuzzing harnesses
- Rediscover real vulnerabilities & triage crashes
- Perform deep coverage analysis
- Apply grammar-based & snapshot fuzzing techniques
- Analyze complex Windows binaries

This training was delivered at @POC_Crew #Zer0Con and is now available to everyone. For the release, we’re offering a 15% discount to the first 10 students.
🎟️ Code: WINFUZZ15
👉 Link: academy.fuzzinglabs.com/fuzzing-window…
0 replies · 11 reposts · 46 likes · 3.8K views
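Whatever the target, the harness-building step in trainings like this starts from the same core loop: take a seed input, mutate it, feed it to the target, and keep anything that crashes. A toy Rust sketch of that loop follows; the target and mutator are invented stand-ins, while real Windows harnesses rely on winAFL/Jackalope/WTF instrumentation and coverage feedback.

```rust
/// Toy target with a contrived crash condition, standing in for a real
/// parser under test.
fn target(input: &[u8]) -> Result<(), &'static str> {
    if input.len() > 3 && input[0] == b'F' && input[1] == b'U' {
        return Err("simulated crash: bad magic handling");
    }
    Ok(())
}

/// Deterministic single-byte mutator: XOR one byte chosen from the round
/// counter. Real fuzzers use many strategies guided by coverage feedback.
fn mutate(seed: &[u8], round: usize) -> Vec<u8> {
    let mut out = seed.to_vec();
    if !out.is_empty() {
        let idx = round % out.len();
        out[idx] ^= (round as u8).wrapping_add(1);
    }
    out
}

/// Run `rounds` mutations of `seed` against the target, collecting every
/// input that triggered a failure.
pub fn fuzz(seed: &[u8], rounds: usize) -> Vec<Vec<u8>> {
    (0..rounds)
        .map(|r| mutate(seed, r))
        .filter(|input| target(input).is_err())
        .collect()
}

fn main() {
    let failures = fuzz(b"GUZZ", 256);
    println!("{} crashing inputs found", failures.len());
}
```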
FuzzingLabs @FuzzingLabs
Last week, @Pat_Ventuzelo, our CEO, and the team were at @EUCyberWeek in Rennes, and it was an amazing experience. Three intense days meeting great people, sharing ideas, and presenting FuzzForge to dozens of teams who were genuinely excited about the project.

Huge thanks to everyone who came by our booth! Your feedback and energy were incredible.

👉 Explore FuzzForge: github.com/fuzzinglabs/fu…
0 replies · 3 reposts · 30 likes · 2.3K views
FuzzingLabs @FuzzingLabs
@Cloudflare just learned the hard way that .unwrap() in Rust can be dangerous, especially in security-critical code.

At @FuzzingLabs, we’ve been teaching this for years in our Rust Security: Audit & Fuzzing training. If you want your engineers to avoid these bugs before they hit production, here’s your chance:

🎓 Rust Security Training - Special CLOUDFLARE Discount
👉 academy.fuzzinglabs.com/rust-security-…
0 replies · 12 reposts · 66 likes · 3.6K views
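The pitfall referenced above is that `.unwrap()` turns an unexpected `Err` or `None` into a process-wide panic. A minimal sketch of the failure mode and a defensive alternative; the config-parsing scenario is invented for illustration, not taken from the Cloudflare incident.

```rust
/// Parse a limit from untrusted config input.
///
/// This version panics on malformed input — in a long-running service,
/// one bad value takes the whole process down.
pub fn parse_limit_unsafe(raw: &str) -> u32 {
    raw.trim().parse::<u32>().unwrap() // panics on "abc", "", "-1", ...
}

/// The defensive version: fall back to a default (or, with `?` and a
/// `Result` return type, surface the error) instead of panicking.
pub fn parse_limit(raw: &str, default: u32) -> u32 {
    raw.trim().parse::<u32>().unwrap_or(default)
}

fn main() {
    // Well-formed input: both behave the same.
    assert_eq!(parse_limit("200", 100), 200);
    // Malformed input: the defensive version degrades gracefully;
    // parse_limit_unsafe("oops") would panic here.
    assert_eq!(parse_limit("oops", 100), 100);
    println!("ok");
}
```

The general rule taught in audits: reserve `.unwrap()`/`.expect()` for invariants that are provably impossible to violate, and handle every value derived from external input explicitly.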
FuzzingLabs @FuzzingLabs
New research overturns a major assumption in LLM security: even large models (600M → 13B parameters) can be backdoored with only 250 poisoned documents.

Model size doesn’t matter. Attack success depends on the absolute count of poisoned documents, not their share of the training data: 250 malicious documents ≈ 420k tokens = 0.00016% of the 13B model's training corpus. This makes data poisoning far more realistic.

We track this closely while building FuzzForge for automated AI red teaming.

📄 Paper: arxiv.org/abs/2510.07192
🔗 FuzzForge: github.com/fuzzinglabs/fu…

#AIsecurity #DataPoisoning #RedTeam #LLMsecurity
1 reply · 5 reposts · 16 likes · 1.9K views
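The quoted percentage can be sanity-checked: if 420k poisoned tokens make up 0.00016% of the corpus, the corpus must hold roughly 260 billion tokens, a plausible training-set size for a 13B-parameter model. The check, in Rust:

```rust
/// Given a poisoned-token count and the percentage of the corpus it
/// represents, return the implied total corpus size in tokens.
pub fn implied_corpus(poisoned_tokens: f64, pct_of_corpus: f64) -> f64 {
    // corpus * (pct / 100) = poisoned  =>  corpus = poisoned / (pct / 100)
    poisoned_tokens / (pct_of_corpus / 100.0)
}

fn main() {
    let corpus = implied_corpus(420_000.0, 0.000_16);
    println!("implied corpus: {:.1e} tokens", corpus); // ~2.6e11
}
```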
FuzzingLabs retweeted
BSides Berlin @SidesBer
Last talk for the day: AI for AppSec and Offensive Security: From Automation to Autonomy by @Pat_Ventuzelo
0 replies · 2 reposts · 6 likes · 1.1K views