Accidental CISO

36.2K posts

@AccidentalCISO

I accidentally became the CISO. I didn't want this job, but the job chose me. I'm scared, and I want to go home.

United States · Joined January 2019

2.1K Following · 58.8K Followers
Accidental CISO@AccidentalCISO·
@IT_unhinged Back when I was a DevOps manager many years ago, there was a VP who had my team constantly chasing ghosts because he had access to a dashboard that he didn't understand. 🤣😭
Derek Devicemanager@IT_unhinged·
Our CTO asked for a “single pane of glass dashboard” that shows literally everything happening in IT. I told him that’s impossible without significant architecture changes and at least 2 new platforms. That’s a lie. I already have a single pane of glass: it’s a browser tab with our monitoring tool and 10 custom filters. If he got access, he’d start asking questions like “why is CPU at 92% here” and “what’s this alert.” Then I’d have to explain, and explaining is unpaid emotional labor. So I built him a fake dashboard in PowerPoint. The graphs are just animated GIFs looping the same fake data forever. He stares at it in meetings and says things like “I can see our resilience story improving in real time.” Everyone is dumb except me. I should get a raise.
Accidental CISO@AccidentalCISO·
Congratulations, sir. Since you are the CEO, you get to decide which AI will replace you.
Accidental CISO@AccidentalCISO·
At 3:00pm Eastern, catch the livestream of my call-in-radio style podcast! @ZackKorman joins me to answer questions about how organizations can learn about AI safely. We’ll take questions/comments from the live audience. Focivity dot com slash podcast for links!!
Accidental CISO@AccidentalCISO·
A good friend of mine was affected by the layoffs at Cisco. If anyone needs a fantastic graphic designer with deep UX, security industry, and startup experience, ping me. Austin based.
Accidental CISO reposted
Mike Manrod@CroodSolutions·
Did anyone else experience complete system failures with Splunk (cloud) following the rollout of their AI offering? It tanked our performance at first, skipping 48% of our searches, and then ultimately crashed the entire environment. After disabling their AI and restarting, everything returned to normal. Unfortunately, their AI service keeps turning itself back on, causing the problem to repeat, so it is necessary to keep disabling the app. Lol 🤦‍♂️ Is anyone else seeing this? If your Splunk instance took an abrupt nosedive, be sure to turn off the AI app. @UK_Daniel_Card @AccidentalCISO @sec_hub93028 Also, the timing of this is hilarious, since my discussion w/ @techspence below just referenced AI features that appear AutoMagically in products, bringing risks and issues. Did Splunk vibe code their vibe app? Special thanks to Cyrus Duncan and our SOC team for their hard work on this one.
Mike Manrod@CroodSolutions

@techspence really brings up a great topic here / adding my two cents. The first step in planning secure AI adoption is scoping. The approach varies depending on whether we are talking about deployment of agentic solutions, secure AI development, general employee use of GPTs in their work, AI features appearing AutoMagically in existing products, adversary use of AI, or ShadowAI taking almost all orgs by storm. This alone seems to hint toward an answer to the first question. Is it possible? Possible to make strides and do our best? Sure. Possible to make it through this unscathed? Not a chance. An old friend used to say, "Don't worry; nothing is going to be alright." That applies here. Can we come up with a strategy that avoids most of the worst outcomes, a lot of the time? Yes. Will the next half decade be a train wreck? Also, yes.

Let's start with agentic, since that is at the very top of my AI risk metric, at least for the things we have some control over as defenders. The key here is control and isolation. As @christian_tail has pointed out, the policies applied as part of your harness are not a substitute for genuine isolation. For true freedom of movement for creators using agents, you want the harness in a VM or container with minimal, carefully thought-out trust boundaries between VM and host. Network isolation should restrict what it can access (and egress IP, so your main space does not get blacklisted if an agent acts stupid). And finally, account isolation has to be airtight. This also includes reused passwords, weak secrets, host configuration artifacts, and so on (as Spencer always points out).

For trying to govern the random chat bots that teleport into each SaaS application, personal use of frontier models for random tasks, and the lower-risk shadow AI, for the average org it is like the classic 80s song: hold on loosely, but don't let go.
If you try to keep a crazy level of control over all of this during such a time of great change, it will not go well. Don't get me wrong, if you make missiles or something, work out of a SCIF and lock down all the things. For the average org, though, the risk of becoming obsolete is as great as any other risk during these times. Larger organizations looking for a way to safeguard data by using only company accounts can use a SWG/SASE solution to front-end AI services, creating policies that allow the sanctioned and official AI sources while blocking personal accounts and unauthorized resources. And, don't get me wrong, everybody should do what they can here, but also maintain a realistic view of risk across the board and realize that this can be a lot of lift for risks that seem largely hypothetical. We see ransomware attacks coming from infostealer / access broker activity all the time, whereas true catastrophic incidents due to models being trained with company data still seem few and far between (not counting the initial grift to make those models in the first place, since that has already happened). And if it is not PII or regulated/secret data, what is your real exposure?

And for the tinfoil hat club afraid to upload your PowerPoint slides for fear Anthropic or OpenAI will steal your intellectual property: get over yourself. You are not Nikola Tesla. The super-intelligent AI overlords are not going to be made so much smarter by your slides that your closest competitor will put you out of business because your clip art on slide 23 was just that amazing and now the AI gods have it. Unless you really are Nikola Tesla, in which case, DM me and let's be friends.

Where the real destructive power comes in is ShadowAI and ShadowIT/software created by unauthorized use of AI. This will really wreck some shit. And this is one of two use cases where the answer is: back to basics. And now we really have to get the basics right.
We need application control to prevent unauthorized agentic-capable software harnesses/products from running where we do not want them. We need URL filtering and analysis of some sort to block the places we do not want users to go; we want the principle of least privilege applied, including LAPS/no local admin; we need proper segmentation, cleaned-up domains/IAM implementations, and effective and validated business processes for high-risk activities (from password resets to wire transfers). We need to clean up technical debt, and we need GRC processes that deal with real risk, not a bunch of goofy mumbo jumbo not rooted in reality. POC or GTFO on any stated risk. And where we do run agents, in addition to isolation, the new emerging category of AIDR / AI Detection and Response will become important. As we increasingly need to allow agents all over the place, we need block/allow decisions to become more contextual, and we need additional options besides just block, or isolate and allow.

And what about adversarial use of AI? It is really these same things - we need to get the basics right, although I will also add that patching lifecycles will need to become more efficient, effective, and automated (but not too automated; reference npm). The hype around Mythos has done us a great disservice, though, because it has focused everyone on finding obscure zero days in source code, which does not materially change the game that much, aside from the need to patch more things faster. This is a huge problem, because it buries the lede: how adversaries really are going to use this to great effect, which is to employ agents, with hacking skills, to exploit known things with much greater speed, efficiency, and effectiveness.
They are not (usually) going to take the source code for some project and find some obscure zero day to get in and ransom you; they will apply agentic harnesses to take advantage of the fact that your password is Spring2026!, the user is local admin, and there are ADCS template misconfigurations allowing a speed run to DA. The exotic and cool zero-day stuff will still happen and it will be important; it is just not the lowest token cost for the highest number of compromises in most cases, so the tokenomics do not exactly line up toward zero-day brute forcing of flaws being the main use of these powerful models. It will be a lot of the same old things, done better and faster.

Finally, how do we do secure development in an AI-centric and agentic world? With flows and specialized harnesses trained and constrained to specific types of tasks. One stage of a pipeline will allow for development, then another will have agents inspect and report prior to promoting code, with human tollgates at key places and when certain conditions are met. Some of these conditions need to be classic decision gates that are hard-coded, vs agentic vibe gates.

We live in a cool and scary time, with lots of fun and exciting work to do. And this is just a few thoughts from one random security dude, so I'm really curious what others think, including a few experts I have looped in who have forgotten more than I will ever know on this topic. Thank you Spencer for starting this great discussion. Sorry for the text wall @UK_Daniel_Card @Shammahwoods @Jhaddix @0xBoku @HackingDave @kuzushi @ZackKorman

Accidental CISO@AccidentalCISO·
@endingwithali That actually sounds like a kickass first date. Who wouldn't want to drive a forklift!?
ali@endingwithali·
I think I’m willing to give up on the idea that a man will take me to get forklift certified as a first date. Instead I am willing to settle for a picnic, but it’s us, a whole rotisserie chicken each, a stack of 100 unhinged questions (generated by our AI of choice using our account history), and a bottle of wine (or case of Spindrift).
Dave Kennedy@HackingDave·
For this weekend I rented an RV with my airsoft boys… get to the facility… tight driveway, hit a branch… rips the entire left mirror completely off. So… no left mirror, in South Carolina, in a big 30ft RV. So I run to AutoZone and get a tiny mirror that I can hold with the window rolled down to see to my left while driving. While driving home on the freeway, halfway in, lose grip of the mirror out the window… it’s gone. So we pry the broken mirror off so I could use it the rest of the ride 😆
Accidental CISO@AccidentalCISO·
I must be getting burned out again. My cynical sense of humor is reemerging. 👀
Accidental CISO@AccidentalCISO·
Microsoft Word has been making some strange grammar “correction” suggestions lately. Anyone else noticing this?
Accidental CISO reposted
Zack Korman@ZackKorman·
Soon I’m going to be able to talk about my startup, Embroidery, and what we do. But I need to ask for help. I’m trying to build an AI cybersecurity company. That means I’m up against giant vendors that lie, cheat, and fear-monger their way to the top. I can’t beat that alone. This industry has so many problems and we deserve better, but the only way to make it better is to beat the people who make it bad. That means I need help. That doesn’t mean buying my product. It means doing what you can, big or small: - If you see that my product might be useful to your company, help get me a meeting. - If you know someone it might help, help put me in touch. - If you don’t know anyone, help me with feedback. I need so much input from people. I’m always happy to jump on a call to talk no matter who you are or what you do. - And if nothing else, just reply to my posts to say you don’t hate me. That helps me not quit. I’ll post next week about what we are building, but I wanted to say this now. It’s awkward having to ask for help from people, but I don’t stand a chance without it. If you can help me, please know it means the world to me.
Accidental CISO@AccidentalCISO·
@HackingDave All that money changing hands is the fan that keeps the bounce house that is the global economy inflated. People talk about the government and banks "printing money," but companies do the same thing through the supply chain / value chain. That profit is money being made.
Dave Kennedy@HackingDave·
Great read
Yasir Ai@AiwithYasir

🚨BREAKING: Two researchers from UPenn and Boston University just published a paper that should be uncomfortable reading for every CEO automating their workforce right now. The argument is straightforward. Every company replacing workers with AI is also eliminating its own future customers. Laid off workers stop spending. Enough of them stop spending and nobody can afford to buy anything. The companies that fired everyone end up selling into an economy with no purchasing power left. Every executive can see this. The math is not complicated. But here is why nobody stops. If you do not automate, your competitor does. They cut costs, lower prices, take your market share, and you collapse anyway. So every company automates knowing it is collectively destructive because the alternative is dying alone while everyone else survives. The researchers proved this is a Prisoner's Dilemma playing out in real time. The numbers are already moving. Block cut nearly half its 10,000 employees this year. Jack Dorsey said AI made those roles unnecessary and that within the next year the majority of companies will reach the same conclusion. Salesforce replaced 4,000 customer support agents with AI. Goldman Sachs deployed a coding tool that lets one engineer do the work of five. Over 100,000 tech workers were laid off in 2025 and AI was cited as the primary driver in more than half those cases. 80% of US workers hold jobs with tasks susceptible to AI automation. The researchers tested every proposed solution. Universal basic income does not change a single company's incentive to automate. Capital income taxes adjust profit levels but not the per-task decision to replace a human. Collective bargaining cannot hold because automating is always the dominant strategy. They also identified what they call a Red Queen effect. Better AI does not solve the problem, it accelerates it. 
Every company chases faster automation to gain market share over rivals, but in the end everyone has automated equally, the gains cancel out, and the only thing left is more destroyed demand. The one thing the math says could work is a Pigouvian automation tax: a per-task charge that forces companies to account for the demand they destroy each time they replace a worker. The conclusion is that this is not a transfer of wealth from workers to owners. Both sides lose. Workers lose income. Companies lose customers. It is a deadweight loss with no market mechanism to stop it on its own. (Link in the comment)
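The Prisoner's Dilemma structure the post describes can be illustrated with a toy payoff matrix. The numbers here are made up purely to show the shape of the argument (automating dominates individually, yet mutual automation leaves both firms worse off); they are not from the paper.

```python
# Toy two-firm game (made-up payoffs) for the automate-or-hold decision.
# Each entry is (firm A profit, firm B profit).
PAYOFFS = {
    ("hold", "hold"):         (10, 10),  # demand intact, higher labor costs
    ("hold", "automate"):     (2, 14),   # A loses share to cheaper B
    ("automate", "hold"):     (14, 2),
    ("automate", "automate"): (5, 5),    # costs cut, but demand destroyed
}


def best_response(opponent_move: str) -> str:
    """Firm A's profit-maximizing move given firm B's move."""
    return max(["hold", "automate"],
               key=lambda m: PAYOFFS[(m, opponent_move)][0])


# Automating is the dominant strategy regardless of the rival's choice...
assert best_response("hold") == "automate"
assert best_response("automate") == "automate"
# ...yet mutual automation pays both firms less than mutual restraint.
assert PAYOFFS[("automate", "automate")] < PAYOFFS[("hold", "hold")]
```

This is exactly why, per the post, no single firm's incentives change under UBI or capital taxes: only a per-task charge that shifts the "automate" payoffs themselves would alter the dominant strategy.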
