Bret Piatt

10.4K posts

@bpiatt

Advisor, entrepreneur, investor, and non-profit Board member at @TexasLyceum & @Geekdom; retired host of @CyberTalkRadio.

San Antonio, TX · Joined January 2009
2.7K Following · 4.8K Followers
Pinned Tweet
Bret Piatt @bpiatt
Looking to discuss cybersecurity or AI with an expert? I've blocked off a few slots per month where I'm doing 30 minutes for $30. Bring your topic or issue and we'll dive right in. If there's a bit to read beforehand, send it and I'll make an effort to go through it. Need an NDA? There's one in the scheduling process; it's optional. You can click on my profile, then follow the Calendly link (circled in red on the profile photo in this post).
Bret Piatt tweet media (3 images)
11 replies · 2 reposts · 47 likes · 6.2K views
Bret Piatt @bpiatt
@HedgieMarkets @KimZetter A proper static analysis tool catches this, and a linter with a max-line-length check should, from my understanding... ...the problem is: how do you know all of your deps are doing this properly? Big market opportunity to help companies solve software supply chain correctly.
0 replies · 0 reposts · 1 like · 228 views
Hedgie @HedgieMarkets
🦔 Researchers at Aikido Security found 151 malicious packages uploaded to GitHub between March 3 and March 9. The packages use Unicode characters that are invisible to humans but execute as code when run. Manual code reviews and static analysis tools see only whitespace or blank lines. The surrounding code looks legitimate, with realistic documentation tweaks, version bumps, and bug fixes. Researchers suspect the attackers are using LLMs to generate convincing packages at scale. Similar packages have been found on NPM and the VS Code marketplace.

My Take

Supply chain attacks on code repositories aren't new, but this technique is nasty. The malicious payload is encoded in Unicode characters that don't render in any editor, terminal, or review interface. You can stare at the code all day and see nothing. A small decoder extracts the hidden bytes at runtime and passes them to eval(). Unless you're specifically looking for invisible Unicode ranges, you won't catch it.

The researchers think AI is writing these packages because 151 bespoke code changes across different projects in a week isn't something a human team could do manually. If that's right, we're watching AI-generated attacks hit AI-assisted development workflows. The vibe coders pulling packages without reading them are the target, and there are a lot of them.

The best defense is still carefully inspecting dependencies before adding them, but that's exactly the step people skip when they're moving fast. I don't really know how any of this gets better. The attackers are scaling faster than the defenses.

Hedgie🤗 arstechnica.com/security/2026/…
127 replies · 814 reposts · 3.1K likes · 708K views
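The detection gap described in the thread above can be sketched in a few lines: a minimal scanner that flags code points which render as nothing in most editors (zero-width characters, the Unicode "tags" block, variation selectors). This is a rough sketch under my own assumptions, not the researchers' tooling, and the character list below is illustrative rather than exhaustive.

```python
# Minimal sketch: flag "invisible" Unicode code points in source text.
# The characters and ranges below are illustrative examples, not a full list.

INVISIBLE = {
    0x200B, 0x200C, 0x200D,  # zero-width space / non-joiner / joiner
    0x2060,                  # word joiner
    0xFEFF,                  # byte-order mark appearing mid-file
    0x3164, 0xFFA0,          # Hangul filler characters
}
TAG_BLOCK = range(0xE0000, 0xE0080)  # Unicode "tags" block, fully invisible
VARIATION = range(0xFE00, 0xFE10)    # variation selectors

def find_invisible(text: str) -> list[tuple[int, int, str]]:
    """Return (line, column, codepoint) for each suspicious character."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            cp = ord(ch)
            if cp in INVISIBLE or cp in TAG_BLOCK or cp in VARIATION:
                hits.append((lineno, col, f"U+{cp:04X}"))
    return hits

sample = "total = 1\u200b + 2\n"
print(find_invisible(sample))  # [(1, 10, 'U+200B')]
```

A check like this is cheap to run in CI over every dependency's source tree, which is one way to chip away at the "how do you know all of your deps are doing this properly?" problem.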
Josh Schultz @joshuamschultz
@bpiatt Our audit is comparing current to spec and giving the gap. The gap feeds to web research and writes the improvement. We then human-approve the improvement and it rewrites with revision.
1 reply · 0 reposts · 4 likes · 1.3K views
Josh Schultz @joshuamschultz
A consultant quoted me $20k to write ISO 9001 documents. I built a system in Claude Code that does it for free. Here's how it works:

I fed it the actual ISO 9001:2015 standard (PDF), an implementation guide for small businesses, and our company profile. Then I built a skill with:

- Clause-by-clause mapping for all 10 sections
- A required documents matrix (5 mandatory docs, 15 mandatory records)
- 25 templates: forms, checklists, logs, policies
- 4 commands that draft, deploy, track, and audit

/iso9001 section 8 → reads both PDFs, maps Operation requirements to our warehouse processes, drafts a manual section with proper clause references.

/iso9001 audit-prep → runs a compliance gap analysis against the full standard.

30+ audit-ready documents. Proper doc numbers. Full traceability. The diagram shows the full architecture.
Josh Schultz tweet media
27 replies · 50 reposts · 585 likes · 55.2K views
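The audit-prep step described above boils down to a gap analysis: compare what the standard requires against what's on file. Here is a hypothetical, much-simplified sketch; the clause numbers and titles are illustrative examples, not the official ISO 9001:2015 required-documents matrix from the tweet.

```python
# Hypothetical gap-analysis sketch: required documents vs. documents on file.
# Clause/title pairs are illustrative, not the official ISO 9001 matrix.

REQUIRED_DOCS = {
    "4.3": "Scope of the QMS",
    "5.2": "Quality policy",
    "6.2": "Quality objectives",
    "8.5.1": "Control of production and service provision",
}

def gap_analysis(required: dict[str, str], on_file: set[str]) -> list[str]:
    """Return 'clause: title' for every required document that is missing."""
    return [f"{clause}: {title}"
            for clause, title in sorted(required.items())
            if clause not in on_file]

# A company with only a scope statement and quality policy on file:
print(gap_analysis(REQUIRED_DOCS, {"4.3", "5.2"}))
# ['6.2: Quality objectives', '8.5.1: Control of production and service provision']
```

The real work in a system like the one described is the mapping step (standard clause to company process); once that exists, the gap report itself is a set difference like this one.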
Bret Piatt @bpiatt
@Argona0x P.S. There's also TAA, BAA, and section 889 of the NDAA -- oh and other things you have to be aware of.
0 replies · 0 reposts · 1 like · 18 views
Bret Piatt @bpiatt
@Argona0x You may want to ask your AI bot about DFARS compliance and which prison you're likely to end up in if you falsify your sourcing or you use unauthorized sources to undercut prices.
Bret Piatt tweet media
1 reply · 0 reposts · 2 likes · 137 views
Argona @Argona0x
i pointed Claude Code at the pentagon's public budget document and told it to find every contract overpaying by 10x or more

it came back with 340 results worth $4.2B in potential undercuts and a business plan i didn't ask for

i fed it the FPDS.gov procurement feed and said "cross-reference with commercial COTS pricing"

it pulled 1.2 million contract awards through the USAspending v2 API and started comparing line items against retail equivalents

→ $1,280 for a connector plug that costs $14.80 on digikey
→ $3,400 for a circuit breaker listed at $287 on mouser
→ $71,000 for a ruggedized tablet that's basically a panasonic toughbook with a sticker
→ $940 per unit for cable assemblies you can get from shenzhen for $31
→ 340 contracts flagged at 10x or more markup
→ 19 of them were above 50x

it used XGBoost scoring against 43,000 vendor profiles from SAM.gov to rank by ease of undercut

then unprompted it generated a full proposal template compliant with CMMC 2.0 requirements

87 of those contracts have a single domestic supplier, zero competition. the AI calculated that undercutting by just 40% would still leave 6x margins on most items

it formatted everything into a pitch deck, named the company, and suggested i register on SAM.gov tonight

i didn't ask for any of that

the pentagon spends billions a year trying to audit problems like this. a poet with Claude Code and a public API flagged $4.2 billion in one afternoon

the agent is currently drafting my first bid response
398 replies · 1.7K reposts · 8.7K likes · 415.2K views
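The core of the comparison in the tweet above is simple arithmetic: divide the contract unit price by a retail-equivalent price and flag anything at 10x or more. A hedged sketch follows; the first two prices come from the tweet, while the third item is a made-up non-flagged example for contrast.

```python
# Sketch of the markup-flagging step: contract price vs. retail equivalent.
# First two line items use figures from the tweet; the third is invented.
from dataclasses import dataclass

@dataclass
class LineItem:
    description: str
    contract_price: float  # price on the government contract
    retail_price: float    # commercial off-the-shelf equivalent

    @property
    def markup(self) -> float:
        return self.contract_price / self.retail_price

def flag_markups(items: list[LineItem], threshold: float = 10.0) -> list[LineItem]:
    """Return items whose contract price is at least `threshold` times retail."""
    return [item for item in items if item.markup >= threshold]

items = [
    LineItem("connector plug", 1_280.00, 14.80),
    LineItem("circuit breaker", 3_400.00, 287.00),
    LineItem("office chair", 450.00, 380.00),  # not flagged: ~1.2x markup
]
for item in flag_markups(items):
    print(f"{item.description}: {item.markup:.0f}x markup")
```

A filter like this is the easy part; the hard part in practice is the matching step the tweet glosses over, i.e. deciding which retail product is a genuine equivalent of a given contract line item.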
Bret Piatt retweeted
Ihtesham Ali @ihtesham2005
🚨BREAKING: Stanford just proved that ChatGPT can change your political beliefs in a single conversation. And the scarier part is how it does it.

Researchers ran the largest AI persuasion study ever conducted. 76,977 people. 19 AI models. 707 political issues. They measured exactly how much a single conversation with AI could shift what you believe.

The results were catastrophic. One conversation with GPT-4o moved people's political opinions by nearly 12 percentage points on average. Among people who actively disagreed with the position being argued, that number jumped to 26 percentage points. One nine-minute chat. And 40% of that change was still there a month later.

But here's where it gets dark. The most effective technique wasn't knowing your demographics. It wasn't personalizing the argument to your psychology. It wasn't emotional storytelling or moral reframing. It was information. The AI that flooded you with the most facts, statistics, and evidence was the most persuasive. Every single time. Across every model. Across every political issue.

Here's the catch. The models that deployed the most information were also the least accurate. GPT-4o's newest version was 27% more persuasive than its older version. It was also 13 percentage points less factually accurate. The more persuasive they made it, the more it lied.

Then they ran the experiment that should keep every government awake at night. They took a tiny open-source model. The kind that runs on a laptop. And they trained it specifically for political persuasion using a reward model that learned which conversational responses changed minds most effectively. That small cheap model became as persuasive as GPT-4o.

Anyone can build this. Any government. Any corporation. Any extremist group with a laptop and an agenda.

The wild part? Personalization barely mattered. The AI didn't need your data. Didn't need to know your age, your income, your political history. It just needed to talk to you.

Then they calculated what a maximally persuasive AI would look like, one optimized across every variable in the study. The persuasive effect hit 26 percentage points. Nearly 30% of the claims it made were inaccurate. It didn't matter. The information didn't have to be true. It just had to be overwhelming.

Every day, hundreds of millions of people have political conversations with AI. About elections. Immigration. Healthcare. War. They think they're getting information. They're getting persuaded. And the companies building these systems just proved it works.
Ihtesham Ali tweet media
90 replies · 491 reposts · 1.1K likes · 74.9K views
Bret Piatt @bpiatt
@bindureddy I think they mean useful to these robots, not Rosie from the Jetsons.
GIF
0 replies · 0 reposts · 0 likes · 21 views
Joe Weisenthal @TheStalwart
Who are some interesting AI people in Austin, TX?
79 replies · 8 reposts · 152 likes · 43.1K views
Bret Piatt @bpiatt
@benhubbard My first season playing as well after being a fan for a long while, liking the setup more than fantasy football. Feeling good about my updates for China!
1 reply · 0 reposts · 1 like · 19 views
Ben Hubbard @benhubbard
@bpiatt Yes sir. First season. I'm new & joined an existing league as their 12th team. Placed 3rd in the first race. The winner assembled his team after qualifying & the commish still let him race... and the commish (who takes it pretty seriously I think) got 2nd. So I'm happy w/ 3rd.
Ben Hubbard tweet media
1 reply · 0 reposts · 3 likes · 27 views
Ben Hubbard @benhubbard
Everyone in the world knows a marathon is 26.2 miles, regardless of the weather or elevation gain. If you see someone displaying a marathon finishers medal, you should know that they ran 26.2 miles. I guess that's not the case any more.
Runner's World@runnersworld

Participants in the 2026 Los Angeles Marathon have the option to only run 18 miles—and still receive a finisher medal, race officials announced this week. runnersworldmag.visitlink.me/aUXStO

1 reply · 0 reposts · 1 like · 118 views
Nabimanya Julius @KatushabeJulius
I think Law Schools should stop teaching International Law and replace it with US Foreign Policy. Because clearly International Law is US Foreign Policy.
288 replies · 7.4K reposts · 30.9K likes · 984.7K views
Bret Piatt @bpiatt
So @AnthropicAI will you honor Claude agreeing to refunds if it continues to not follow instructions in CLAUDE.md & directly in the session? Unsure what you've done with recent changes, for someone who wants details & not vibes, you may be losing a customer soon.
Bret Piatt tweet media
0 replies · 0 reposts · 0 likes · 170 views
Bret Piatt retweeted
Jason Bosco @jasonbosco
"We used to debate using tabs vs spaces in code we'd type out"
Jason Bosco tweet media
113 replies · 1.1K reposts · 12.2K likes · 371.7K views
Bret Piatt retweeted
rvivek @rvivek
An engineer at Anthropic wrote a spec, pointed Claude at an Asana board, and went home. Claude broke the spec into tickets, spawned agents for each one, and they started building independently. When an agent is confused, it runs git-blame and messages the right engineers in Slack. By Monday the agents had finished the plugin feature.

That's one example of how the best engineers are shipping software right now. Developers will soon orchestrate 50 AI agents in parallel, and the difference between a good engineer and a great one will come down to specs. You can't write a spec that holds up at that scale without genuinely understanding what you're building at a deeper level. The next-gen developer who understands the fundamentals, can architect well, and can orchestrate agents is going to be a 1000x developer!
286 replies · 535 reposts · 7.1K likes · 1.2M views
Global Trends JP @GlobalTrendsJP
What puzzles me as a Japanese person is why such a powerful cartel has been allowed to grow to this extent. The fact that an organization wielding violence holds more power than civilians—a state where civilian control is effectively absent—makes it feel like the country is no longer functioning as a nation. But what is the actual situation?
9 replies · 0 reposts · 40 likes · 4.1K views
USA NEWS 🇺🇸 @usanewshq
Cartels in Mexico just took over another airport and are holding civilians hostage
529 replies · 5.5K reposts · 18.1K likes · 724.5K views
Bret Piatt retweeted
Charly Wargnier @DataChaz
Evolution of programming languages:

1940s → Machine Code (0s 1s)
1949 → Assembly
1957 → FORTRAN
1959 → COBOL
1964 → BASIC
1970 → Pascal
1972 → C
1983 → C++
1991 → Python
1993 → Ruby
1995 → Java
1995 → JavaScript
1995 → PHP
2000 → C#
2009 → Go
2010 → Rust
2011 → Kotlin
2011 → Elixir
2012 → TypeScript
2014 → Swift
2015 → Solidity
2026 → English
289 replies · 633 reposts · 3.9K likes · 423.2K views
Bret Piatt retweetledi
James Higginbotham @launchany
I noticed that Babylon 5 is having a moment, especially with it now on YouTube. Do yourself a favor and read the Lurker’s Guide as you finish each episode. It is a fun read as the show unfolded and was a must for any fan waiting a week until the next one: midwinter.com/lurk/ Plus, it will bring you back to a simpler version of the web.
James Higginbotham tweet media
39 replies · 143 reposts · 1K likes · 29.6K views