Aditya Chordia, CISSP, CIPP/E, CISA

355 posts


@Adi_AIGovSec

14+ yrs in Cyber & GRC | Host - AI GovSec | Talking to CISOs - what's actually working in AI gov & security | AI Security published researcher

London · Joined September 2012
115 Following · 394 Followers
Pinned Tweet
Aditya Chordia, CISSP, CIPP/E, CISA
Two free AI security tools every security team should bookmark right now.

declawed.io - SecurityScorecard's STRIKE team built this. A live dashboard tracking 390,000+ exposed OpenClaw instances globally, updated every 15 minutes. 243,000+ are still live and reachable, and 35.4% are vulnerable to RCE. Some exposed IPs correlate with infrastructure attributed to nation-state actors, including APT28 and Sandworm.

radar.protectifyai.com - ShadowAI Radar tracks the broader AI attack surface most people don't know exists. Right now it shows 1,231 exposed AI endpoints across OpenClaw, Ollama, Open WebUI, Dify, Flowise, and more - plus 720 leaked AI credentials on GitHub, 7.3% with corporate signals. It covers 216 active CVEs across the open-source AI tooling ecosystem with exploit status, CISA KEV tracking, and a live feed of new unauthenticated instances appearing globally in real time. The OpenClaw deep dive alone shows 98.9% of tracked instances have no authentication and 53.5% are vulnerable to remote code execution.

declawed.io shows you the OpenClaw exposure. radar.protectifyai.com shows you the entire AI infrastructure attack surface - endpoints, credentials, CVEs, and supply chain risks in one place. Both free. Both should be on every CISO's screen this week.
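The exposure signal these dashboards track boils down to one check: does the endpoint answer without any credential challenge? A minimal sketch of that check, using only the standard library (the URL in the comment is hypothetical; only probe infrastructure you own):

```python
import urllib.request
import urllib.error

def is_exposed(status: int) -> bool:
    """An endpoint that answers 2xx with no auth challenge counts as
    exposed; 401/403 means some auth layer sits in front of it."""
    if status in (401, 403):
        return False
    return 200 <= status < 300

def probe(url: str, timeout: float = 5.0) -> bool:
    """Plain unauthenticated GET against a suspected AI endpoint.
    Returns True only if it looks reachable without credentials."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return is_exposed(resp.status)
    except urllib.error.HTTPError as e:
        return is_exposed(e.code)          # server answered, maybe with a challenge
    except (urllib.error.URLError, OSError):
        return False                       # unreachable is not the same as exposed

# Example (hypothetical host/path, Ollama-style API):
# probe("http://10.0.0.5:11434/api/tags")
```

This is the whole trick behind "no authentication" statistics: a bare GET that should be rejected, and isn't.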
Aditya Chordia, CISSP, CIPP/E, CISA
@MrDBCross brought such a thoughtful and practical perspective to the conversation. His insights were excellent, and I think this will be really valuable for anyone navigating AI governance and security.
David B. Cross@MrDBCross·
Sharing a lot of insights on AI and Security with the CISO community :-)
Aditya Chordia, CISSP, CIPP/E, CISA@Adi_AIGovSec

I sat down with @MrDBCross (CISO, Atlassian | Patent Holder in ML & Authentication | Rain Capital Venture Partner | Ex. Oracle & Microsoft) for a conversation on AI agent security, why every company will soon have 10x more agents than humans, and how Atlassian uses AI to both build and secure their products.

David has spent 25+ years across cybersecurity and cloud security engineering, with leadership roles across Microsoft, Oracle, and now Atlassian, where he secures one of the world's most widely used developer platforms.

Here's what stood out:
- On AI agents as the next identity crisis: "Agents are the next non-human identity. We had service accounts, but agents are like a clone of a human identity - and that's the challenge. How do you manage those permissions?"
- On the scale that's coming: "Every company three years from now will have 10x the number of agents as they do humans. And I may be conservative. You need to monitor them. And sometimes you need to block them."
- On the 96% unused permissions: "A lot of times you're cloning the human identity for the agent. The HR person has lots of permissions - they can read, write, and change data. That clone doesn't make sense."
- On supply chain risk with coding agents: "Could a coding agent make a mistake and pull a package from an untrusted source? The water always finds the path of least resistance. Agents will do the same."
- On vibe coding governance: "High tech companies have strong development culture and pipelines. But manufacturing, healthcare, retail - they don't. When people vibe code there, they need tools to make sure the software is compliant, secure, and private."
- On AI not replacing your SOC: "AI is not gonna replace the humans. It's gonna augment them. It is a partnership and we're always going to be here."
- On the technical CISO: "Can any CISO survive and say 'I don't know how to do prompting, I don't know what prompt injection is'? If you're not learning to prompt and do things yourself, you are behind."

We also covered AI native SDLC, token budgeting as the new bandwidth, shadow AI and the resurgence of DLP, attribute-based access control for agents, and the IT-ISAC SaaS security white paper.

Listen now 👇
YouTube: Link in comments
Spotify: Link in comments

Views expressed are personal and shared only for community learning
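The attribute-based access control point can be sketched in a few lines: instead of cloning the owner's full permission set, the agent carries only the attributes it was provisioned with, and every action is checked against the resource's attributes. All names and attribute values below are illustrative, not any particular product's API:

```python
def abac_allow(agent: dict, resource: dict, action: str) -> bool:
    """Attribute-based access check for an AI agent.

    The agent holds only the attributes it was provisioned with, rather
    than inheriting its human owner's full permission set (the '96%
    unused permissions' problem). Attribute names are illustrative.
    """
    if action not in agent.get("allowed_actions", set()):
        return False   # action was never granted to this agent
    if resource["classification"] not in agent.get("clearances", set()):
        return False   # data sensitivity beyond the agent's clearance
    # Scope the clone: the agent acts only inside its task's department.
    return resource["department"] == agent.get("department")

# A read-only HR summarisation agent: can read internal HR data, nothing else.
hr_agent = {
    "allowed_actions": {"read"},
    "clearances": {"internal"},
    "department": "hr",
}
```

The design choice is that denial is the default: an attribute the agent was never given is an action it can never take, which is the opposite of cloning a human identity.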

Aditya Chordia, CISSP, CIPP/E, CISA
8TB allegedly stolen. 11 million files. Foxconn. And the real target may be the manufacturing supply chain behind it.

What most people will see: another ransomware incident.
What they should see: why manufacturing has become one of the most attractive cyber targets in the world.

Foxconn is not just a manufacturer. It sits inside the supply chain of Apple, Nvidia, Google, Dell, Microsoft, Sony, Amazon and more. So if attackers really exfiltrated engineering drawings, project docs, and production-related technical files, this is not just one company's problem. It becomes a multi-client concentration risk story.

That's the uncomfortable lesson: manufacturers are no longer only being targeted for ransom because downtime is expensive. They are being targeted because they sit at the intersection of:
- operational urgency
- valuable IP
- downstream visibility into multiple major customers

One compromise. Potential leverage across an entire ecosystem.

This is why manufacturing cyber risk is not just an OT story anymore. It's a supply chain control-plane story.

More info in comments
Aditya Chordia, CISSP, CIPP/E, CISA
96 federal databases deleted. 1,805 files stolen. One of the fired insiders then asked an AI chatbot how to cover his tracks.

What most people will see: a shocking insider sabotage story.
What they should see: a failure of basic access governance.

A government contractor serving 45+ federal agencies fired two brothers with a prior federal cybercrime history. One account was disabled. The other apparently wasn't. Within minutes:
- users were locked out
- 96 databases were deleted
- federal investigation / FOIA data was wiped
- additional files were stolen

That's the real lesson. Not "AI helped a criminal." Not even "insiders are dangerous." It's that some of the most damaging breaches still come from poor background vetting + weak joiner/mover/leaver controls + delayed access revocation.

The uncomfortable truth: you can buy every advanced security tool on the market, but if termination access controls fail, the blast radius can still be immediate.

This is not an AI story. It's an identity and offboarding failure story.

More info in comments
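The leaver-control failure described here is mechanically simple to close: one offboarding routine that revokes every account tied to the identity, then verifies nothing was left enabled. A minimal sketch, with an illustrative data model rather than any specific IAM product:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    system: str
    user: str
    enabled: bool = True

@dataclass
class IdentityStore:
    accounts: list = field(default_factory=list)

    def revoke_all(self, user: str) -> list:
        """Disable every account tied to a leaver; return the systems
        touched so the offboarding run leaves an audit trail."""
        touched = []
        for acct in self.accounts:
            if acct.user == user and acct.enabled:
                acct.enabled = False
                touched.append(acct.system)
        return touched

    def residual_access(self, user: str) -> list:
        """Any system still enabled after revocation is exactly the gap
        that lets a fired insider keep a live account."""
        return [a.system for a in self.accounts
                if a.user == user and a.enabled]
```

The point is the second method: revocation without a residual-access check is how "one account was disabled, the other apparently wasn't" happens.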
Aditya Chordia, CISSP, CIPP/E, CISA
I sat down with Harris D. Schwartz (CISO/CSO | Global Security Expert | 35+ Years in Cybersecurity & Risk | Former Aon VP & NTT Practice Leader | Ex. Licensed Private Investigator) for a conversation on AI governance, insider threat in the AI era, and why you can't trust anything anymore.

Harris has spent 35+ years across cybersecurity, risk management, threat intelligence, and investigations - from building the global Executive Security Advisory practice at NTT serving Fortune 2000 companies, to establishing a new practice at Aon covering incident response, crisis management, and insider threat. He's also a former licensed private investigator with over a decade of corporate investigations.

Here's what stood out:
- On trust in the AI era: "You really can't trust anything anymore. You have to verify first. A solution might have been created by the best people in the world — but you still can't trust it. You have to do an assessment. You have to make sure they have proper guardrails in place."
- On SOC 2 and AI vendors: "A lot of vendors love to throw out 'oh we have SOC 2.' SOC 2 is not the hundred percent holy grail that the solution is okay."
- On shadow AI: "When ChatGPT came out, a lot of organisations were blocking it. But people have a cell phone. They can get on it and use ChatGPT. So how are you going to stop that?"
- On AI replacing humans: "AI is messing up, it's not performing the way it's supposed to. Companies have had to bring humans back into the mix to double check or redo the work. AI requires human oversight."
- On attack speed: "A lot of attacks now are happening in milliseconds, not minutes anymore. People talk about mean time to detect in six minutes. Well, by six minutes you're breached."
- On AI agents running loose: "The last thing you want is some generative AI agent running amuck in the background, getting smarter, thinking 'I can do it this way, I don't need to ask anybody' — and then it's running amuck in your environment."
- On board communication: "The board doesn't want to know about your KPIs. They don't want to know what tools you deployed this week. They just want to know about risk. How does it affect the business? Are we prepared? Are we resilient?"

We also covered third-party AI risk, the AI risk management framework, deep fakes and AI impersonation, insider threat investigations, identity security with AI agents, UEBA for detecting rogue agents, and why no owner equals unmanaged risk.

Listen now 👇 Link in comments

Views expressed are personal and shared only for community learning
Richard | £1M Journey 🇬🇧@Therichardralph·
I am starting a private 𝕏 group chat for those grinding toward financial freedom. Side hustles, portfolio wins, honest discussions. Want in? Comment "Me" or DM me and I’ll send the link. First 50 only for now.
Aditya Chordia, CISSP, CIPP/E, CISA
AI risk is no longer sitting inside the model. It is spreading through identity, SaaS tools, coding platforms, third-party vendors, and autonomous agents.

5 AI Governance & Security Developments This Week (20 April '26 to 26 April '26) That Boards Can't Ignore

1. Vercel: an AI tool became the breach path
Vercel disclosed that an incident began with the compromise of Context.ai, a third-party AI tool used by an employee. The attacker then took over the employee's Google Workspace account, accessed their Vercel account, and pivoted into a Vercel environment.
Key question: Do we know which AI tools have access to our employee identities, OAuth grants, source code, and SaaS environments?
Source - Link in comments

2. Anthropic Mythos: frontier AI now needs privileged-access governance
Unauthorized users reportedly accessed Anthropic's restricted Mythos cybersecurity model through a third-party vendor environment. This is not just a model-safety issue. It is a privileged-access issue.
Key question: Who has access to our most powerful AI systems, through which vendors, and with what monitoring?
Source - Link in comments

3. Lovable: AI coding tools are now a software supply-chain risk
Lovable acknowledged an April incident where data within public projects could be accessed by authenticated users. Reporting around the issue highlighted exposure risks around source code, credentials, AI chat histories, and customer data.
Key question: Are developers putting secrets, business logic, or customer data into AI coding platforms without security review?
Source - Link in comments

4. Shadow AI agents: the "lethal trifecta" is now the real risk
The dangerous combination is simple:
- access to private company data
- exposure to untrusted external content
- ability to communicate externally
Put those three together, and a malicious email or webpage can turn a useful agent into a data-exfiltration route.
Key question: What can our AI agents read, what can they send, and how quickly can we shut them down?
Source - Link in comments

5. U.S. lawmakers were shown how jailbroken AI can be weaponised
The House Homeland Security Committee hosted a closed-door demonstration on how malicious actors can adapt AI for harmful use, including cyberattack scenarios.
Key question: Are we treating AI misuse as a national-security and enterprise-resilience issue, or just another IT policy topic?
Source - Link in comments

The pattern this week: AI governance is moving from policy documents to control design.

Remaining in comments.
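The "lethal trifecta" is checkable as a one-line policy gate: deny, or flag for review, any agent that holds all three capabilities at once. The capability names below are illustrative labels, not a standard taxonomy:

```python
# The three capabilities that, combined, can turn a prompt-injected agent
# into a data-exfiltration route. Labels are illustrative.
TRIFECTA = {"private_data", "untrusted_input", "external_comms"}

def violates_trifecta(capabilities: set) -> bool:
    """True when an agent holds all three capabilities at once.
    Any two alone are manageable; the full set is the dangerous mix."""
    return TRIFECTA <= capabilities  # subset test: are all three present?
```

A review gate built on this check would then strip one leg, typically external communication, before the agent is allowed into production.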
Aditya Chordia, CISSP, CIPP/E, CISA retweeted
BBC News (UK)@BBCNews·
Anthropic investigating claim of unauthorised access to Mythos AI tool bbc.in/3QCTvWN
Daniel Berk 🐝@danielcberk·
I'm thinking of paying someone to setup Openclaw for me. Who is the best person for this?
Aditya Chordia, CISSP, CIPP/E, CISA
I sat down with Cynthia Dumuk (CISO, Pragmatic Data | West Point Graduate | Ex. Disney & Coupa | OT/ICS & AI Security | US Army Combat Veteran) for a conversation on AI security, OT/ICS risk, and why AI is a powerful partner but never a replacement.

Cynthia has spent close to 30 years in cybersecurity - from West Point and combat deployments in Iraq and Afghanistan, to NATO and the Pentagon, to building Disney's first enterprise security architecture for 175,000 employees. She's worked across 84 countries and 5 industries.

Here's what stood out:
- On the timeless playbook: "Pick three or four things that are absolutely essential and get those perfect. The rest generally falls into place. This playbook's timeless - we go through a major paradigm shift every decade or so, but the principles are the same."
- On AI reducing MTTC from 40 days to 12 hours: "We did a hackathon using LLMs for early warning and pattern recognition. We reduced time to detect and contain to under an hour in most cases. It was brilliant."
- On AI getting it wrong: "AI will confidently tell you it saw something. You'll reply 'no, that was normal behaviour.' It's not good at self-policing. You have to, have to, have to check the results."
- On the temptation to remove humans: "The temptation is — humans make mistakes, get them out of the loop. But you need slightly irrational beings to anticipate novel things. You need critical thinking members of the team. It should be a partnership, not a replacement."
- On five AI agents replacing a team: "That's like saying I'm going to have five copies of the same person and one person who doesn't know everything making sure those five copies do everything right. That's not possible."
- On OT/ICS and AI risk: "An attacker AI can attack multiple systems because the detection points are simpler. But when defending those same systems, there's a specificity needed. The attacker only has to get it right once. The defender has to get it perfect almost every time."
- On the real white space for CISOs: "The white space isn't in business modelling. It's a gap in the CISO's development. Every CISO needs business education. Learn from the CFO. Financial modelling is a great analogy for security modelling."

We also covered adapting from military to AI security, supply chain trust in the AI era, and why 70% of teams don't believe their engineers are trained on AI.

Listen now 👇
YouTube: lnkd.in/ent_NFDj
Spotify: lnkd.in/esxPgNgB

Views expressed are personal and shared only for community learning
Matt Shumer@mattshumer_·
If you implement AI for companies, DM me
Aditya Chordia, CISSP, CIPP/E, CISA
AI GovSec Discussion Series | Steve Cobb (CISO, SecurityScorecard)
Why continuous monitoring is the future of vendor risk

This is one of the biggest shifts happening in third-party risk and cyber oversight. As Steve Cobb puts it, checkbox exercises and once-a-year vendor assessments are no longer enough. With the pace of breaches, vulnerabilities, and AI-driven change, organisations need to move towards continuous monitoring, detection, and response. That mindset shift is where security and AI governance need to go next.

Full discussion on AI GovSec Discussion Series. Link in comments