Mike Manrod

11.3K posts


@CroodSolutions

CISO and faculty by day, adversary emulation/tools by night, bad jokes and memes all the time.

Arizona, USA · Joined June 2021
1.8K Following · 2.1K Followers
Pinned Tweet
Mike Manrod@CroodSolutions·
@techspence really brings up a great topic here / adding my two cents. The first step in planning secure AI adoption is scoping. The approach varies depending on whether we are talking about deployment of agentic solutions, secure AI development, general employee use of GPTs in their work, AI features appearing AutoMagically in existing products, adversary use of AI, or Shadow AI taking almost all orgs by storm.

This alone hints at an answer to the first question. Is it possible? Possible to make strides and do our best? Sure. Possible to make it through this unscathed? Not a chance. An old friend used to say, "Don't worry; nothing is going to be alright." That applies here. Can we come up with a strategy that avoids most of the worst outcomes, a lot of the time? Yes. Will the next half decade be a train wreck? Also yes.

Let's start with agentic, since that sits at the very top of my AI risk list, at least among the things we have some control over as defenders. The key here is control and isolation. As @christian_tail has pointed out, the policies applied as part of your harness are not a substitute for genuine isolation. For true freedom of movement for creators using agents, you want the harness in a VM or container, with minimal, carefully thought-out trust boundaries between VM and host. Network isolation should restrict what the agent can access (and give it its own egress IP, so your main address space does not get blacklisted if an agent acts stupid). Finally, account isolation has to be airtight, which also covers reused passwords, weak secrets, host configuration artifacts, and so on (as Spencer always points out).

For governing the random chatbots that teleport into every SaaS application, personal use of frontier models for random tasks, and the lower-risk shadow AI, the average org should take its cue from the classic '80s song: hold on loosely, but don't let go.
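The isolation ideas above (harness in a container, restricted network with its own egress, airtight account separation) can be sketched as a hardened launch command. This is a minimal sketch, not a hardening guide; the image name, network name, UID, and working directory are all hypothetical placeholders.

```python
# Sketch only: composing an isolated launch for an agent harness.
# Image name, network name, UID, and workdir are hypothetical placeholders.

def agent_sandbox_cmd(image: str = "agent-harness:latest",
                      network: str = "agents-egress-only",
                      workdir: str = "/tmp/agent-work") -> list[str]:
    """Build a `docker run` invocation applying the isolation ideas above:
    no capabilities, read-only root filesystem, a dedicated network whose
    egress IP is separate from the main address space, and an unprivileged
    account that shares nothing with the host."""
    return [
        "docker", "run", "--rm",
        "--network", network,                     # dedicated egress path
        "--read-only",                            # immutable root filesystem
        "--cap-drop", "ALL",                      # drop all Linux capabilities
        "--security-opt", "no-new-privileges",    # block privilege escalation
        "--user", "10001:10001",                  # non-root, non-reused UID
        "--memory", "2g", "--pids-limit", "256",  # resource ceilings
        "-v", f"{workdir}:/work:rw",              # only the project dir writable
        image,
    ]
```

The point is that the trust boundary lives in the runtime flags and network design, not in the harness's own policy file.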
If you try to keep a crazy level of control over all of this during such a time of great change, it will not go well. Don't get me wrong: if you make missiles or something, work out of a SCIF and lock down all the things. For the average org, though, the risk of becoming obsolete is as great as any other risk during these times.

Larger organizations looking to safeguard data by using only company accounts can use a SWG/SASE solution to front-end AI traffic, creating policies that allow the sanctioned, official AI sources while blocking personal accounts and unauthorized resources. Everybody should do what they can here, but also maintain a realistic view of risk across the board and realize that this can be a lot of lift for risks that are still largely hypothetical. We see ransomware attacks stemming from infostealer/access-broker activity all the time, whereas truly catastrophic incidents caused by models being trained on company data still seem few and far between (not counting the initial grift to build those models in the first place, since that has already happened). And if it is not PII or regulated/secret data, what is your real exposure?

And for the tinfoil-hat club afraid to upload your PowerPoint slides for fear Anthropic or OpenAI will steal your intellectual property: get over yourself. You are not Nikola Tesla. The super-intelligent AI overlords are not going to be made so much smarter by your slides that your closest competitor puts you out of business because the clip art on slide 23 was just that amazing and now the AI gods have it. Unless you really are Nikola Tesla, in which case, DM me and let's be friends.

Where the real destructive power comes in is Shadow AI and shadow IT/software created by unauthorized use of AI. This will really wreck some shit. It is one of two use cases where the answer is back to basics, and now we really have to get the basics right.
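The SWG/SASE idea above (allow sanctioned company-account AI endpoints, block personal ones, stay loose on everything else) reduces to a small decision function. A minimal sketch; every domain here is a hypothetical example, not anyone's real policy.

```python
# Sketch of a SWG-style policy decision for AI traffic.
# All domains are hypothetical/illustrative examples, not a real policy.

SANCTIONED_AI = {"ai-gateway.corp.example", "copilot.corp.example"}
KNOWN_PERSONAL_AI = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def swg_decision(host: str) -> str:
    """Classify a destination host for the AI front-end policy."""
    if host in SANCTIONED_AI:
        return "allow"            # official, company-account-fronted AI
    if host in KNOWN_PERSONAL_AI:
        return "block"            # personal accounts / unauthorized resources
    return "allow-and-log"        # hold on loosely, but don't let go
```

The "allow-and-log" default is the "hold on loosely" posture: visibility without a blanket block that drives users to workarounds.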
We need application control to prevent unauthorized agentic-capable software harnesses/products from running where we do not want them. We need URL filtering and analysis of some sort to block the places we do not want users to go. We want the principle of least privilege applied, including LAPS and no local admin. We need proper segmentation, cleaned-up domains/IAM implementations, and effective, validated business processes for high-risk activities (from password resets to wire transfers). We need to clean up technical debt, and we need GRC processes that deal with real risk rather than goofy mumbo jumbo not rooted in reality. PoC or GTFO on any stated risk.

Where we do run agents, in addition to isolation, the emerging category of AIDR (AI Detection and Response) will become important. As we increasingly need to allow agents all over the place, block/allow decisions must become more contextual, and we need additional options besides just "block" or "isolate and allow."

And what about adversarial use of AI? It comes back to these same things: get the basics right. I will add that patching lifecycles will also need to become more efficient, effective, and automated (but not too automated; see npm). The hype around Mythos has done us a great disservice, though, because it has focused everyone on finding obscure zero days in source code, which does not materially change the game that much, aside from the need to patch more things faster. This is a huge problem, because it buries the lede of how adversaries really are going to use this to great effect: employing agents with hacking skills to exploit known things with much greater speed, efficiency, and effectiveness.
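The contextual block/allow decisions mentioned above, with more outcomes than a binary block, can be sketched as a tiny risk-scoring function. The risk factors, weights, and thresholds are illustrative assumptions, not any AIDR product's real scoring model.

```python
# Sketch of a contextual (AIDR-style) decision for one agent action.
# Risk factors and thresholds are illustrative assumptions only.

def aidr_decision(action: dict) -> str:
    """Return one of: allow, allow-with-monitoring, require-human, block."""
    risk = 0
    if action.get("touches_credentials"):
        risk += 3   # secrets in scope: highest weight
    if action.get("writes_outside_workdir"):
        risk += 2   # escapes the sanctioned working directory
    if action.get("unknown_network_destination"):
        risk += 2   # egress to somewhere we have no policy for
    if action.get("off_hours"):
        risk += 1   # anomalous timing, mild signal

    if risk == 0:
        return "allow"
    if risk <= 2:
        return "allow-with-monitoring"   # the contextual middle ground
    if risk <= 4:
        return "require-human"           # more options than block/allow
    return "block"
```

The useful part is the middle tiers: they let agents keep working while escalating only the genuinely risky actions to a human.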
Adversaries are not (usually) going to take the source code for some project and find an obscure zero day to get in and ransom you. They will apply agentic harnesses to take advantage of the fact that your password is Spring2026!, the user is local admin, and there are ADCS template misconfigurations allowing a speed run to DA. The exotic, cool zero-day stuff will still happen and it will be important; it is just not the lowest token cost yielding the highest number of compromises in most cases, so the tokenomics do not line up toward brute forcing zero-day flaws being the main use of these powerful models. It will be a lot of the same old things, done better and faster.

Finally, how do we do secure development in an AI-centric and agentic world? With flows and specialized harnesses trained and constrained to specific types of tasks. One stage of a pipeline will allow for development; another will have agents inspect and report prior to promoting code, with human tollgates at key places and when certain conditions are met. Some of these conditions need to be classic, hard-coded decision gates, not agentic vibe gates.

We live in a cool and scary time, with lots of fun and exciting work to do. This is just a few thoughts from one random security dude, so I am really curious what others think, including a few experts I have looped in who have forgotten more than I will ever know on this topic. Thank you, Spencer, for starting this great discussion. Sorry for the text wall. @UK_Daniel_Card @Shammahwoods @Jhaddix @0xBoku @HackingDave @kuzushi @ZackKorman
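The staged pipeline described above, hard-coded decision gates plus a conditional human tollgate, can be sketched in a few lines. Field names and gate conditions are hypothetical; the point is that the deterministic gates run first and the agent's review can hold but never approve.

```python
# Sketch of a promotion stage: hard-coded gates (not agentic "vibe
# gates") plus a human tollgate when conditions are met.
# All field names and conditions are hypothetical examples.

def promotion_gate(change: dict) -> str:
    # Classic, hard-coded gates: deterministic, never model-judged.
    if not change.get("tests_passed", False):
        return "rejected: failing tests"
    if change.get("secrets_in_diff", False):
        return "rejected: secret material in diff"

    # The inspection agent reports findings but cannot approve on its own.
    findings = change.get("agent_review_findings", [])

    # Human tollgate at key places and when certain conditions are met.
    if findings or change.get("touches_auth_paths", False):
        return "held: human tollgate required"

    return "promoted"
```

Keeping the reject conditions as plain code means no amount of persuasive agent output can talk the pipeline past a failing test or a leaked secret.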
spencer@techspence

Securely adopting AI.... Is it even possible? How should IT/Security leaders be thinking about this? I have my own ideas but I'm not as deep as many of you. Would love some perspectives on this. Planning to do a podcast on this soon.

Dave Kennedy@HackingDave·
Seven customer calls. Did two blogs, a video for customers, a customer email going out next week. Met with multiple SOC analysts on what they need from NightBeacon to make it better. Got those done and into prod. 23 merge requests. Fixed a connector issue with CrowdStrike. Closed three deals. Solid day.
Mike Manrod retweeted
Nathan McNulty@NathanMcNulty·
I haven't shared much about nodoc recently, but I have made some pretty significant improvements :) 20 undocumented APIs from about a dozen different portals. You can browse it at nodoc.nathanmcnulty.com. For agents, point them at the OpenAPI specs: github.com/nathanmcnulty/…
imog@imog

@NathanMcNulty So low effort I was surprised it worked. I owe you a beer. Maybe that other guy too. The 1st is 13 failed approaches, the 2nd is working. Awareness of your repo contents saved me from having to go into F12.

Mike Manrod@CroodSolutions·
@CtgIntelligence @Selyst Between cPanel, all of the Linux privilege escalation attacks coming out almost daily, and all of the npm supply-chain attacks, it is hard for anything to really get the attention it deserves right now. And we are still within a week of Instructure/Canvas. Lol. What a year!!!
Mike Manrod@CroodSolutions·
@vxunderground Is this going to be a future project - see how many spiders you can fit in a tower system?
Mike Manrod@CroodSolutions·
@DecryptedTech Wait, am I the only one with a daily checklist that starts with disabling ASLR so random obscure exploits will work in demo land? That was not supposed to be my first SOAR playbook?
Mike Manrod@CroodSolutions·
Or is it maybe, powerful and influential **with** the AI tools? I am not sure how relevant humans will be in tech without AI over the next couple of decades, but I am also not sure AIs will be that relevant (for now) without humans. I basically agree with what you say above: people who master dealing with people as well as technology will have a compelling role in this new future. The bar for "technology skills" is about to go up a lot, and very rapidly.

We need to seriously consider the digital divide again and think through the broader implications of a relative minority increasing in relevance while a larger slice decreases in relevance. Will the bar that used to be "hit a few things and get a job" shift to something more like the odds of being an NBA/NFL draft pick? A society where the majority of people do not have a meaningful way to contribute will fail, or drift dystopian. And that is exactly the frame for society that seems most likely to unfold if we do not get creative and design something better. Thoughts? The time to invent something helpful is now.
SecInterviewHub@sec_hub93028·
If you have: Communication Skills + Technical Skills then you will be more powerful and influential than any AI tool.
Mike Manrod retweeted
vx-underground@vxunderground·
Microsoft: PowerShell is simple and easy to use. Actual PowerShell command: Remove-MgIdentityAuthenticationEventFlowAsOnAttributeCollectionExternalUserSelfServiceSignUpAttributeIdentityUserFlowAttributeByRef No, this isn't a joke. This was noted by @NathanMcNulty
Andy Swift@SwiftSecur1·
Well, that's a new one... client wants to move the reporting day to... before the test... 🤷
Jahir Sheikh@jahirsheikh8·
Senior backend interview question: CPU usage jumps to 100% every night at 3:17 AM. No cron jobs. No deployments. No traffic spike. What are you checking first?
Mike Manrod retweeted
Tib3rius@0xTib3rius·
Last minute talk at @bsides312 confirmed. 😅 Should be a fun one though.
Mike Manrod retweeted
MikeTalonNYC@MikeTalonNYC·
Maybe because companies are regularly laying off thousands of employees with the publicly stated reason being so they could invest in AI?
Syed Ijlal Hussain@sijlalhussain

📍 High AI adoption is not reducing workforce anxiety. In many cases, it is accelerating it. As BCG highlights, some of the countries with the highest rates of regular GenAI usage also report the highest fears around job loss. The closer employees get to AI-enabled workflows, the clearer the organizational consequences become.

1️⃣ Structural Shift: Frequent exposure to AI changes how employees view task ownership, workflow coordination, and long-term role stability. Anxiety rises when people can directly observe capability substitution.

2️⃣ Governance Gap: Most organizations are accelerating deployment faster than they are redesigning workforce transition models. Employees see implementation velocity, but not a credible path for role evolution.

3️⃣ Talent Implication: High-adoption environments may increasingly split workforces into AI orchestrators and execution-heavy roles vulnerable to compression. That changes promotion pathways, training priorities, and internal mobility structures.

This is why AI confidence and AI fear are now rising at the same time. The real workforce challenge is not whether AI will augment work. It is whether organizations can redesign career structures fast enough to prevent large-scale trust erosion inside the workforce. via BCG buff.ly/ub8Y0k0

@TCyberCast @sulefati7 @bulbi59 @corixpartners @bbailey39 @NathaliaLeHen @harbi_nh @Corix_JC @Transform_Sec @bociek191905 @Alovesublime @YalaCoder @kkruse @Yash_ai6 @DioOmega @EduardoValenteI @ozsilverfox @jameslhbartlett @giuliog @michaeldacosta @marmelyr @arigatou163 @O_Berard @faryus88 @ILoveBooks786 @RLDI_Lamy @VivMilanoFSL @FrRonconi @ramonvidall @ricardo_ik_ahau @olivierfroggy @kachofugetsujp @pchamard

Mike Manrod retweeted
chompie@chompie1337·
Claude helped me with this bug too but in a different way... Tried to gaslight me saying it wasn’t ~exploitable in practice~ and I got obsessed with proving it wrong 😩
TrendAI Zero Day Initiative@thezdi

Confirmed! @chompie1337 of IBM X-Force Offensive Research (XOR) used a race condition to escalate privileges on Red Hat Enterprise Linux for Workstations, earning $20,000 and 2 Master of Pwn points. #Pwn2Own #P2OBerlin

Mike Manrod@CroodSolutions·
@techspence This is an improvement. I have been meaning to submit/propose a list of changes.
Mike Manrod retweeted
spencer@techspence·
🔴 The MITRE ATT&CK framework has undergone slight changes. The Defense Evasion tactic has now been split into two tactics: Stealth (TA0005) and Defense Impairment (TA0112).