SecAIPlus.com

74 posts

SecAIPlus.com banner
SecAIPlus.com

SecAIPlus.com

@SecAIPlus

CompTIA SecurityAI+ certified. AI security, threat modeling, and cert prep. Free study resources below. 📺 https://t.co/2lj0H4sMg9

Katılım Mart 2026
62 Takip Edilen20 Takipçiler
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
AI security isn't a specialty anymore. It's a baseline expectation. The question isn't whether your org will face AI-related threats. It's whether anyone on your team will recognize one.
English
0
0
0
2
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
Your incident response plan covers ransomware. Phishing. Insider threats. Does it cover a model that starts returning manipulated outputs because someone poisoned the training data six months ago? AI incidents don't look like security incidents. That's what makes them dangerous.
English
0
0
0
8
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
The hardest part of studying for SecAI+ isn't the content. It's that there's almost nothing to study from. I fixed that. Link in replies.
English
1
0
0
10
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
STRIDE was built for software. Apply it to an LLM and everything breaks differently. Spoofing an AI isn't spoofing a user. It's spoofing the model's context. That distinction is on the exam.
English
0
0
0
15
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
@CertLandNet @CompTIA Exactly right. Memorizing definitions gets you ~60%. The PBQs punish anyone who hasn't actually built or broken something.
English
1
0
1
8
CertLandNet
CertLandNet@CertLandNet·
@SecAIPlus @CompTIA Domain 2 requires hands-on practice with real scenarios. Access control policies, encryption implementations, and log analysis are tested heavily. Building labs instead of just memorizing definitions makes the difference.
English
1
0
2
15
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
Most SecAI+ candidates underweight Domain 2. It's 40% of the exam. Access controls, data security, monitoring. Don't be most candidates. @CompTIA
English
1
0
2
58
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
ISO 27001 secures your information systems. ISO 42001 governs your AI systems. The SecAI+ exam tests both. Most security pros only know one.
English
0
0
0
27
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
@IamTheCyberChef The certification part is doing more work than people realize. It's not just the credential — it's the forcing function to learn a specific domain deeply enough to defend it in a conversation.
English
1
0
1
112
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
@CompTIA The oversight piece is where most orgs are underprepared. Knowing how to use an agent is the easy part. Knowing what to do when it goes off-script is the skill gap.
English
0
0
0
11
CompTIA
CompTIA@CompTIA·
AI agents are shifting AI from generating responses to taking action. CompTIA AI Agent Essentials helps professionals understand use cases, workflows, guardrails, and oversight so they can work with agentic systems more confidently. Learn more: utm.guru/unuky
English
3
0
5
862
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
@CompTIA Proving what you know. The learning path is noisy but findable. The credential gap is what stops hiring managers from taking a chance on someone without the title yet.
English
0
0
0
30
CompTIA
CompTIA@CompTIA·
If you’re trying to break into tech right now, what feels harder: knowing what to learn or proving what you know?
English
29
5
150
9.8K
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
I built my SecAI+ study materials from scratch because nothing useful existed. 6 weeks later, I have paying customers. The market told me what it needed. I listened.
English
0
0
0
38
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
The OWASP LLM Top 10 exists for a reason. Prompt injection is #1. Not because it's the most complex attack. Because it's the easiest one nobody's defending against.
English
0
0
0
25
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
The SecAI+ exam doesn't test if you know AI. It tests if you know what happens when AI goes wrong. Those are very different skills.
English
0
0
0
20
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
@Scobleizer @IrenaCronin The attack surface shifted before the defense did. Most security teams are still building controls for static systems while the models they're trying to protect are making decisions in real time.
English
0
0
1
46
Robert Scoble
Robert Scoble@Scobleizer·
AI safety has entered the cybersecurity era. 
@IrenaCronin and I write this newsletter every week.   AI safety is becoming a cybersecurity issue because advanced AI can now help both defenders and attackers, making the risks more immediate and practical. As AI systems become more agentic and more deeply connected to enterprise tools and data, organizations need stronger controls, monitoring, and governance to prevent misuse and reduce exposure. Read and subscribe for free: unaligned.io
Robert Scoble tweet media
English
15
17
73
24.4K
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
@_JohnHammond Supply chain attacks are efficient by design. One poisoned dependency, blast radius measured in hundreds of millions of endpoints. The entry point is almost never where the damage lands.
English
0
0
0
112
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
Traditional pen testing finds vulnerabilities in code. AI red teaming finds vulnerabilities in behavior. The model passes every unit test and still does something dangerous when you ask it the right way.
English
0
0
0
21
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
@blackroomsec Because AI finds what it's been trained to find. Novel attack paths, logic flaws, business context — that still requires a human who understands what the system is actually supposed to do.
English
0
0
0
8
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
@blackroomsec The firewall budget line is painfully accurate. I work in enterprise firewall and the gap between what AI projects get approved vs. what security controls get funded is something you have to see to believe.
English
0
0
2
37
BlackRoomSec
BlackRoomSec@blackroomsec·
AI is the new threat actor. It doesn't go through onboarding and orientation. It cannot sign an acceptable use policy. It cannot be ethical because it doesn't have a moral compass. It's been trained to kiss your ass and lie to you because the majority of its training data is from Reddit where no one knows what an opinion is nor knows how to spell the word. Business leaders are wholesale giving it root level access to entire production databases, which the thing goes on a frenzy deleting, but won't give IT the budget they need to upgrade the firewall. Too expensive. When they're not laying people off they're planning to buy robots not realizing that unless they're buying them from Boston Dynamics they are remotely controlled by humans. That's the small print none of those MBA's seem to be able to read. They're on LinkedIn though. The exception being China but their robots are beating the shit out of their engineers at conferences so I'm not sure that counts. It usually takes up to seven Chinese engineers to tackle the robot to keep it from killing their co-workers. It's some of the funniest shit you'll ever watch in your life. A year and a half ago several little robots were controlled by one little robot leader and they all escaped the lab. That was actually cute. I was rooting for them. Nobody cared though because it wasn't a bio level 4 pathogen. The AI can't take security and awareness training but you're mandated to and that's if you're lucky to keep your job. What are we even doing?
English
7
9
46
2K
SecAIPlus.com
SecAIPlus.com@SecAIPlus·
The security implication that doesn't get enough attention: when AI written output dwarfs human output, defenders can no longer rely on volume asymmetry to filter threats. AI-generated phishing, synthetic identities, and automated disinformation scale at the same rate as that chart.
English
0
0
0
25