Faiyaz Ahmad
537 posts
@thehacktivator
Bug Bounty Hunter | YouTube Content Creator @BePracticalTech
Joined November 2015
101 Following · 5.7K Followers
Faiyaz Ahmad@thehacktivator·
I am giving away my videos on finding SSRF, full practical demonstration for free. This YouTube playlist is a complete deep dive into SSRF—from understanding the fundamentals to actually exploiting real-world scenarios step by step. No fluff, just practical knowledge you can apply while hunting. You’ll see how SSRF works in real applications, how to identify vulnerable endpoints, and how to turn small findings into impactful reports—all demonstrated live. If you're serious about bug bounty or want to level up your recon and exploitation skills, this is something you shouldn’t miss. Link: youtube.com/watch?v=Fwahyq…
5 replies · 27 reposts · 171 likes · 4.3K views
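As a companion to the playlist topic, here is a minimal Python sketch of the "identify vulnerable endpoints" step: flag query parameters that look like they accept URLs, then check whether a supplied value points at an internal host. The parameter-name list, function names, and heuristics are illustrative assumptions, not taken from the videos.

```python
import ipaddress
from urllib.parse import parse_qsl, urlsplit

# Parameter names that commonly accept URLs -- a heuristic starting
# point for SSRF hunting, not an exhaustive list.
URL_PARAM_HINTS = {"url", "uri", "redirect", "dest", "next", "callback",
                   "image", "proxy", "fetch", "link", "src"}

def ssrf_candidate_params(url: str) -> list[str]:
    """Return query parameters of `url` whose name or value suggests
    the server may fetch the supplied location (SSRF candidates)."""
    hits = []
    for name, value in parse_qsl(urlsplit(url).query):
        looks_like_url = value.startswith(("http://", "https://", "//"))
        if name.lower() in URL_PARAM_HINTS or looks_like_url:
            hits.append(name)
    return hits

def targets_internal_host(candidate: str) -> bool:
    """True if `candidate` points at a private/loopback/link-local
    address -- the classic SSRF payload targets (169.254.169.254 etc.)."""
    host = urlsplit(candidate).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Hostname rather than a literal IP; a real test would resolve it.
        return host in {"localhost", "metadata.google.internal"}
    return ip.is_private or ip.is_loopback or ip.is_link_local
```

The second check mirrors the "turn small findings into impact" idea: a parameter that fetches arbitrary URLs becomes critical once it reaches internal or cloud-metadata hosts.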
Faiyaz Ahmad@thehacktivator·
What if you could learn bug bounty reconnaissance — completely FREE? On my YouTube channel, I’ve created multiple playlists covering practical cybersecurity topics. One of them is my Reconnaissance playlist, where I break down:
• How to approach recon from scratch
• Subdomain enumeration, asset discovery & attack surface mapping
• Real-world techniques used in bug bounty hunting
• How to automate recon using free tools
Everything is practical, beginner-friendly, and focused on what actually works in real scenarios — without spending money. If you're serious about bug bounty, this playlist (along with others on the channel) can give you a solid foundation. Check it here: youtube.com/watch?v=i3-xJ-…
0 replies · 42 reposts · 208 likes · 6.9K views
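To illustrate the automation angle, here is a small sketch of one common recon chore: merging raw subdomain output from several free tools into a clean, deduplicated, in-scope list. The function name and cleanup rules are my assumptions, not taken from the playlist.

```python
import re

def normalize_subdomains(raw_lines, scope_domain):
    """Merge raw output from several enumeration tools into a sorted,
    deduplicated list of hosts inside `scope_domain`."""
    seen = set()
    for line in raw_lines:
        host = line.strip().lower().rstrip(".")
        host = re.sub(r"^\*\.", "", host)        # drop wildcard prefixes
        host = re.sub(r"^https?://", "", host)   # some tools emit full URLs
        host = host.split("/")[0]                # drop any path component
        if host == scope_domain or host.endswith("." + scope_domain):
            seen.add(host)
    return sorted(seen)
```

Out-of-scope hosts are silently dropped, which keeps later steps (probing, screenshotting) pointed only at permitted targets.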
Faiyaz Ahmad@thehacktivator·
Yeah, I agree — right now it’s definitely challenging and local models aren’t fully there yet for every use case. But I feel sooner or later we’ll be pushed toward local LLMs anyway — due to cost, control, privacy, and vendor lock-in. And honestly, I’ve already seen promising results. I tried Qwen 3.5 locally and it was able to identify some really interesting vulnerabilities. Also, after fine-tuning LLaMA 3.1 (8B) specifically for XSS payload generation, it became way more creative and accurate — easily 20x better for that use case.
0 replies · 0 reposts · 1 like · 62 views
onur🇹🇷@onurturkeshan·
@thehacktivator “I think” this job is impossible without training local LLMs and defining/fine-tuning enough data and datasets. None of the local models I tried worked. I already use Claude as my AI, but a local model wouldn’t be bad at all.
1 reply · 0 reposts · 1 like · 73 views
Faiyaz Ahmad@thehacktivator·
If you're using frontier LLMs for pentesting, you're basically training the very systems you should be questioning.
Think about it. Every request you send, every response you analyze, every tool call you make — it’s not just your workflow. It’s data. Valuable data. Target structures, endpoints, payloads, sometimes even credentials or sensitive logic — all flowing straight into someone else’s model.
While you’re trying to find vulnerabilities, you might also be unknowingly helping improve their detection, defenses, and intelligence.
And what do you get in return? No ownership. No control. No guarantee your methods stay private. No real edge. You’re contributing to a system that learns from you faster than you can benefit from it.
Personally, I believe local LLMs are the real future for serious pentesters. Full control, better privacy, and the ability to truly own your workflows and data — that’s where the real advantage lies.
Let me know what you guys think :)
9 replies · 2 reposts · 32 likes · 3.1K views
ExploitKid@yashwanth2207·
@thehacktivator This is actually a huge blind spot most people miss. Been running local models for this exact reason - can't trust that my recon patterns aren't getting fed back into detection systems. The irony of using Claude to find vulns while literally teaching it how we think is wild.
1 reply · 0 reposts · 1 like · 101 views
Faiyaz Ahmad@thehacktivator·
I get your point — this is definitely a great time to learn and experiment. But relying completely on a cloud model for your business is hard to sustain long term. You’re essentially building on intelligence you don’t own. It’s like having one person doing all the critical work in your company — things are fine until they raise prices, change rules, or build a competing product. Right now it’s cheap and useful, but that’s how dependency starts. If something feels “free” or underpriced, the real cost usually comes later.
0 replies · 0 reposts · 1 like · 80 views
Mark Ivashinko@MIvashinko·
Valid concern long-term, but right now we're in a window where LLMs are accessible, somewhat cheap, and genuinely useful for offensive security. Worrying about training-data leakage while most people haven't even figured out how to use these tools for finding bugs seems kinda counterproductive. The opportunity to learn and build right now is enormous.
1 reply · 0 reposts · 0 likes · 95 views
Faiyaz Ahmad@thehacktivator·
You may be right, and I get your point about policies and zero-retention agreements—they do help to some extent. But personally, I still believe we should focus on improving our own LLMs rather than relying too heavily on frontier models. Right now they seem affordable and convenient, but realistically, most of these providers are operating at a loss. That won’t last forever. At some point, either pricing changes, restrictions increase, or priorities shift—and that’s when depending on them too much could become a problem. Building and refining local or self-controlled models might be harder today, but it feels like a more sustainable path in the long run.
0 replies · 0 reposts · 1 like · 153 views
Daniel Knight@DanielKnightCEO·
We use frontier models for our agent. We have zero retention and have in writing that providers don't use our data for training. These are policies other technologies like cloud have had like data retention agreements, pre-AI. I just do not think the model providers are all that interested in some random XSS...
1 reply · 0 reposts · 1 like · 184 views
Faiyaz Ahmad@thehacktivator·
What if you could learn AI hacking and pentest automation — completely FREE? On my YouTube channel, I’ve created multiple playlists where I break down practical cybersecurity topics step by step. One of them is my AI Hacking playlist, where I show:
• How AI applications can be hacked
• Common vulnerabilities in AI systems
• How to leverage AI to build pentesting automation
• How you can experiment and build these things without spending money
Everything is explained in a simple and practical way so anyone interested in cybersecurity can start exploring it. If you’re curious about the future of AI + hacking, this playlist (and many others on the channel) might be helpful. Check it here: youtube.com/watch?v=ANNYAl…
3 replies · 153 reposts · 799 likes · 27K views
Faiyaz Ahmad@thehacktivator·
What if you could build an AI agent that helps you find vulnerabilities while you supervise the process — even without prior knowledge of AI or automation? Recently, I experimented with building an AI-powered vulnerability assessment workflow using n8n. The idea is simple: let AI analyze reconnaissance data and highlight potentially interesting areas, while the human researcher guides the investigation and makes the final decisions. This approach does not replace manual testing. Security research still requires human thinking and creativity. But automation can help reduce repetitive work and allow researchers to focus more on deeper testing. In this video, I walk through how this workflow works and how you can build something similar yourself. Video: youtu.be/ANNYAl6mL_U
3 replies · 13 reposts · 82 likes · 3.9K views
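As a rough stand-in for the AI triage step (the video builds it in n8n with an LLM node), here is a plain-Python sketch that ranks recon endpoints by how interesting their paths look, so the human researcher reviews the highest-value targets first. The keyword weights are invented for illustration.

```python
# Keyword weights are illustrative assumptions, not a standard list.
INTEREST_WEIGHTS = {
    "admin": 5, "debug": 4, "internal": 4, "backup": 4,
    "api": 3, "upload": 3, "auth": 2, "config": 2,
}

def triage_endpoints(endpoints, top_n=5):
    """Rank recon endpoints by a simple 'interestingness' score and
    return the top candidates for manual review."""
    def score(url):
        u = url.lower()
        return sum(w for kw, w in INTEREST_WEIGHTS.items() if kw in u)
    return sorted(endpoints, key=score, reverse=True)[:top_n]
```

In the actual workflow an LLM would do this scoring with context; the point of the sketch is the shape of the pipeline: automation narrows the list, the human decides what to test.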
HackenProof@HackenProof·
Which bug hunter inspires you most right now? Tag them below👇
48 replies · 2 reposts · 74 likes · 6.3K views
Faiyaz Ahmad@thehacktivator·
You’ve seen phpinfo.php in a pentest — but what if it’s not just a harmless info leak? Most people dismiss it as a low-impact artifact. But I’ll show how chaining it with a specific misconfiguration can expose sensitive data, bypass protections, and create a path to escalation. The video breaks down the exact steps to combine phpinfo.php with a common server-side flaw. You’ll see how this pairing can be weaponized in real-world scenarios — and why hunters often overlook it. Check it out here: youtube.com/watch?v=C-GIrO…
0 replies · 22 reposts · 116 likes · 10.5K views
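The tweet doesn't name the specific flaw, but as an illustration of why phpinfo() output matters for chaining, here is a small sketch that harvests directives attackers care about (for example, `upload_tmp_dir` feeds the well-known phpinfo + LFI race-condition technique). The directive list and regex are my assumptions, based on phpinfo's standard two-column table layout.

```python
import re

# Directives whose values become useful when chaining phpinfo() with
# another flaw -- illustrative selection, not exhaustive.
SENSITIVE_DIRECTIVES = ["upload_tmp_dir", "session.save_path",
                        "DOCUMENT_ROOT", "allow_url_include"]

def harvest_phpinfo(html: str) -> dict:
    """Pull interesting directive values out of a phpinfo() page.
    phpinfo renders settings as <td>name</td><td>value</td> rows."""
    found = {}
    for name in SENSITIVE_DIRECTIVES:
        pattern = (r"<td[^>]*>\s*" + re.escape(name) +
                   r"\s*</td>\s*<td[^>]*>\s*([^<]+?)\s*</td>")
        m = re.search(pattern, html)
        if m:
            found[name] = m.group(1)
    return found
```

Knowing the temp-upload path or document root is exactly the kind of "harmless info" that turns a second, separate flaw into a working exploitation chain.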
Faiyaz Ahmad@thehacktivator·
What if AI could automate penetration testing — and do it better than humans? I installed Strix, an open-source AI agent, and tested its ability to perform real penetration testing tasks. The results weren’t just theoretical — we ran it through practical scenarios to see if it could find vulnerabilities without human input. This isn’t just about hype. The video breaks down whether AI tools like Strix could replace or augment human hackers, and what that means for security professionals. I’ll share my take on the risks and opportunities — because this tech is already here, and it’s changing the game. Check it out here: youtube.com/watch?v=Pl0nHe…
3 replies · 14 reposts · 91 likes · 5.1K views
Faiyaz Ahmad@thehacktivator·
You’ve found a reflected XSS — but what if it’s just the beginning? Combining it with a CORS misconfiguration can amplify the overall impact of your findings. I’ll show how these two vulnerabilities work together to bypass security controls and escalate privileges. You’ll see the specific techniques to chain these flaws and demonstrate real-world exploitation. This isn’t theory — it’s a live demo of how attackers can bypass protections using a low-severity issue as a foothold. Check it here: youtube.com/watch?v=Rz44oT…
0 replies · 2 reposts · 24 likes · 1.4K views
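A minimal sketch of the CORS-misconfiguration check this kind of chain relies on: probe with an attacker-style Origin (the hypothetical `https://evil.example` below) and flag responses that reflect it, or trust the `null` origin, while also allowing credentials. The function and header handling are a simplified assumption of the real test, not the video's exact method.

```python
def cors_misconfigured(probe_origin: str, response_headers: dict) -> bool:
    """Flag the exploitable CORS combo: the server reflects an arbitrary
    Origin (or trusts "null") in Access-Control-Allow-Origin while also
    sending Access-Control-Allow-Credentials: true. A page the victim
    visits could then read their authenticated responses cross-origin."""
    acao = response_headers.get("Access-Control-Allow-Origin", "")
    creds = response_headers.get("Access-Control-Allow-Credentials", "")
    if not probe_origin or creds.strip().lower() != "true":
        return False
    return acao == probe_origin or acao == "null"
```

Paired with a reflected XSS foothold, this is what upgrades a low-severity finding: the XSS supplies execution, the CORS misconfiguration supplies cross-origin data access.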
Faiyaz Ahmad@thehacktivator·
A few months ago, I stumbled onto something interesting with a simple 302 redirect. It led to a critical vulnerability that bypassed several security layers. I want to share what I learned. Redirects seem harmless, right? But I found one masking a hidden endpoint. It wasn’t about discovering the redirect itself, but about manipulating its destination. That’s where the real issue was hiding. I detail the entire process in a new video. We cover how to alter HTTP responses during testing and identify targets prime for this kind of manipulation. It's a small technique with big potential. Think about the automation possibilities. Check it out here: youtube.com/watch?v=lGYCqW…
1 reply · 14 reposts · 97 likes · 7.9K views
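As a small illustrative check (my naming and heuristic, not from the video), this flags 302 responses whose Location header echoes a request parameter - the signature of an attacker-controllable redirect destination worth manipulating during testing:

```python
from urllib.parse import parse_qs, urlsplit

def redirect_is_parameter_controlled(request_url: str, location: str) -> bool:
    """True when a redirect's Location header echoes a query-parameter
    value from the request, suggesting the destination is influenced
    by client-supplied input."""
    params = parse_qs(urlsplit(request_url).query)
    for values in params.values():
        for v in values:
            if v and (location == v or location.endswith(v)):
                return True
    return False
```

Running a check like this over proxy history is one way to automate finding these redirects at scale before manually probing each one.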
Faiyaz Ahmad retweeted
Curiosity@CuriosityonX·
🚨: A petri dish of human brain cells just learned to play DOOM
1.8K replies · 6.3K reposts · 50.3K likes · 32.1M views
Faiyaz Ahmad@thehacktivator·
This is how you can learn bug bounty & ethical hacking for absolutely free. Starting cyber security felt impossible for me too. I was lost in tutorials and struggled with pentesting during the initial days because the resources either cost money or were too confusing to follow. I created this YouTube channel specifically to fix that problem. We focus on practical demonstrations so you don't have to guess which tools work or how to start. No jargon, no hidden fees, just free education for beginners. Join me and turn your confusion into skills today. Channel link: youtube.com/c/BePracticalT…
4 replies · 27 reposts · 148 likes · 9.8K views
Faiyaz Ahmad retweeted
Stephen Sims@Steph3nSims·
I want to share a quick thought for people in cyber security. This will be my longest tweet ever. I’ve spoken to many lately who are having an existential crisis from the constant posts about “the end of cybersecurity jobs.”
Yes, things are changing quickly. This is a significant moment for the tech industry. Change can be uncomfortable. But we’ve seen cycles like this before.
• When GitHub and open source took off, people said software engineers would disappear because code was free.
• When AWS and cloud computing emerged, people said infrastructure jobs would vanish.
• When fuzzing and SAST tools improved, people said vulnerability research would disappear.
• Virtualization would eliminate infrastructure jobs.
• Mobile computing was going to end desktop dev.
• Exploit mitigations would end exploitability.
It didn't. Each time automation improved, the amount of software grew faster than the automation. It does feel "different" this time as it's explosive.
Some roles will shrink:
• repetitive pentesting
• basic vulnerability scanning
• tier-1 SOC monitoring
But other areas are expanding rapidly:
• AI system security
• supply chain security
• identity architecture
• autonomous agent security
• critical infrastructure protection
Historically, every time we eliminate one class of bugs, new classes emerge. Right now people are vibe-coding entire systems, giving AI access to their machines, crossing trust boundaries, and deploying autonomous agents with excessive permissions. The legal and regulatory world is nowhere close to ready. There will absolutely be new failure modes.
Humans are amazing and always adapt, finding new ways to do things. The worst thing you can do right now is fall into a doom loop. ...and I’ll be honest, I too have felt the "psychological paralysis" a few times thinking, “Is this time different?” It's especially impactful when it comes from someone I respect in the community.
There are certainly unknowns, in an industry where we've become accustomed to predictability. But... the majority of those reactions are usually driven by social media, not reality. Platforms like X reward engagement, and sensational doom posts spread faster than measured thinking. If you see something like: “Holy #$%^! Opus 66.6 just found every bug in Chrome and replaced 50 startups!” …mute it and move on.
Instead: Stay curious. Learn the new technology. Adapt your skillsets. Build things. We’ll get through this transition the same way we always have. If I'm wrong then Sam Altman better be right about UBI! :)
I'm sure that if this tweet gets any engagement that I'll get some heat for it, but a good friend of mine reminds me often to focus on what you have control over. I'll revisit this tweet at DEF CON 40!
55 replies · 315 reposts · 1.5K likes · 127.3K views
Faiyaz Ahmad@thehacktivator·
AI wrappers have one big limitation: you’re building on top of intelligence you don’t control. So I tried something different. I fine-tuned Llama3:8B on a custom XSS payload dataset (~1500 payloads). Before training, it struggled with things bug hunters deal with daily — broken syntax, confusion between single and double quotes, and failing in complex injection contexts. After fine-tuning, the improvement was massive. The model now generates XSS payloads surprisingly close to what frontier models produce in many scenarios. This experiment reinforced something I believe strongly: The future of AI isn’t wrappers — it’s specialized local models trained on your own data. In the end, the real advantage won’t be the model. It will be who owns the best dataset. Interesting times ahead.
2 replies · 2 reposts · 35 likes · 3.4K views
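For anyone wanting to reproduce this kind of experiment, here is a minimal sketch of the dataset-preparation step, assuming the common `messages` chat-JSONL convention that open fine-tuning stacks (e.g. Hugging Face TRL, Axolotl) accept. The prompt wording and field layout are my assumptions, not the actual training data used here.

```python
import json

def build_finetune_records(payload_rows):
    """Convert (context, payload) pairs into chat-format JSONL lines:
    one user turn describing the injection context, one assistant turn
    containing the target payload."""
    lines = []
    for context, payload in payload_rows:
        record = {"messages": [
            {"role": "user",
             "content": f"Generate an XSS payload for this context: {context}"},
            {"role": "assistant", "content": payload},
        ]}
        lines.append(json.dumps(record))
    return lines
```

With ~1500 such pairs written to a .jsonl file, a LoRA fine-tune of an 8B model is feasible on a single consumer GPU, which is what makes the "specialized local model on your own data" argument practical rather than theoretical.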