Paul

6.2K posts

@ToeStrike

Play Golf, talk to myself and get an answer.

Glasgow, Scotland · Joined January 2018
186 Following · 202 Followers
Paul @ToeStrike
@heygurisingh Nice try. Data protection rules mandate that any security-related testing using AI must be signed off by a competent senior-level professional. I've been seeing AI pentesting tools for years, even back when it was just a bunch of Python scripts, and people still have jobs.
0 replies · 0 reposts · 0 likes · 76 views
Guri Singh @heygurisingh
🚨 BREAKING: The cybersecurity industry is about to get completely disrupted. Someone just open-sourced a fully autonomous AI Red Team. It's called PentAGI. 8,200+ stars on GitHub.

Not one AI agent. An entire simulated security firm. Researchers, developers, pentesters, and risk analysts. All AI. All coordinating with each other before launching a single attack. No Cobalt Strike. No $100K/year pentest retainers. No OSCP required.

Here's what's inside this thing:
→ An Orchestrator agent that plans the full attack chain
→ A Researcher agent that gathers intel from the web, search engines, and vulnerability databases
→ A Developer agent that writes custom exploit code on the fly
→ An Executor agent that runs 20+ pro security tools (nmap, metasploit, sqlmap, and more)
→ A memory system that learns from every engagement and gets smarter over time

Here's the wildest part: It runs everything inside sandboxed Docker containers. Full isolation. It picks the right container image for each task automatically. It has a knowledge graph powered by Neo4j that tracks relationships between targets, vulnerabilities, tools, and techniques across every single test.

Cybersecurity firms charge $25K–$150K per engagement for this exact workflow. This is free. 100% Open Source. MIT License.
Guri Singh tweet media
198 replies · 813 reposts · 4.2K likes · 491.4K views
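The orchestrator-plus-specialists workflow the thread describes can be sketched in a few lines. This is a hypothetical illustration only: the class names, the `run` interface, and the findings list are invented here and are not PentAGI's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    source: str   # which agent produced this
    detail: str

@dataclass
class Engagement:
    target: str
    findings: list = field(default_factory=list)

class Researcher:
    name = "researcher"
    def run(self, engagement):
        # Stand-in for intel gathering from the web and vulnerability databases.
        engagement.findings.append(Finding(self.name, f"open services on {engagement.target}"))

class Developer:
    name = "developer"
    def run(self, engagement):
        # Stand-in for writing a custom exploit from the previous finding.
        last = engagement.findings[-1].detail
        engagement.findings.append(Finding(self.name, f"exploit for: {last}"))

class Executor:
    name = "executor"
    def run(self, engagement):
        # Stand-in for running tools inside an isolated sandbox.
        engagement.findings.append(Finding(self.name, "executed in sandbox"))

class Orchestrator:
    """Plans the chain: runs each specialist agent in order, sharing state."""
    def __init__(self, agents):
        self.agents = agents
    def engage(self, target):
        engagement = Engagement(target)
        for agent in self.agents:
            agent.run(engagement)
        return engagement

result = Orchestrator([Researcher(), Developer(), Executor()]).engage("lab-vm")
print([f.source for f in result.findings])  # → ['researcher', 'developer', 'executor']
```

The key design point the tweet hints at is shared mutable state (here, `Engagement`) passed down a planned chain, so each specialist builds on the last one's output.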
Paul @ToeStrike
I have the MBs, but these are the best irons Titleist have ever made. There’s a reason they haven’t been updated.
Paul tweet media
0 replies · 0 reposts · 0 likes · 30 views
Paul @ToeStrike
@askOkara Can I see your latest customer-facing pentest or red team report?
0 replies · 0 reposts · 0 likes · 7 views
Okara @askOkara
Today we're introducing the world's first AI CMO. Enter your website and it deploys a team of agents to help you get traffic and users. Try it now at okara.ai/cmo
1.6K replies · 2.4K reposts · 27.6K likes · 13.7M views
Paul @ToeStrike
@BrianSpanner1 A Tesla Model 3 weighs the same as an Audi Q5 TDI.
0 replies · 0 reposts · 2 likes · 627 views
Paul @ToeStrike
@EVCurveFuturist An EV doesn’t save money when you are losing £1000 a month in depreciation and no one wants your car because it’s considered “older battery tech”. Learned a hard lesson: never buy an EV outright, only lease. Manufacturers won’t offer extended warranties on batteries.
0 replies · 0 reposts · 1 like · 13 views
Chris Meder @EVCurveFuturist
Driving electric is up to 4× cheaper than petrol. 100 km costs:
⛽ Petrol: $22–$33
⚡ EV (home, 6pm): $5.90
☀️ EV (midday solar): $1.20
Same distance. Massive difference. Plus no servicing or oil changes. Around $4k saved every year —a family holiday instead of fuel bills.⚡
Chris Meder tweet media
229 replies · 165 reposts · 457 likes · 68.5K views
Paul @ToeStrike
@NUCLRGOLF @TWlegion Etiquette is lost on the US. Reason I don’t watch it anymore.
0 replies · 0 reposts · 7 likes · 87 views
NUCLR GOLF @NUCLRGOLF
🗣️🏌️ Thoughts on this?
NUCLR GOLF tweet media
429 replies · 42 reposts · 991 likes · 555K views
Paul @ToeStrike
@TechLayoffLover Who is going to fix anything when the models change, knowing their potential fate? Models change all the time, so their output and how they interpret data shifts, and that needs human input to check and verify. AWS customers aren’t going to be happy if shit starts going south.
0 replies · 0 reposts · 1 like · 271 views
Tech Layoff Tracker @TechLayoffLover
Amazon just confirmed 16,000 layoffs but sources inside are telling me the real story is so much worse

Word from three different VPs: the 16K number is just "Phase One" - internal docs show another 14,000 cuts planned for Q2

A director in AWS walked me through their new "efficiency matrix" - entire teams being replaced by 2-3 senior engineers running Claude Sonnet workflows

The Alexa division got completely hollowed out. 847 engineers two months ago. 23 remaining after this week. All hardware development moved to a Bangalore team of 31 contractors with Cursor access

Here's the sick part: they're making the outgoing engineers document their entire decision-making process into "knowledge transfer sessions" that are being recorded and fed directly into training datasets

One L7 told me he spent his final two weeks creating detailed prompt libraries and workflow documentation. Thought he was being helpful for the transition

Turns out he was literally training the AI agent that replaced his entire org

The contractors offshore are using his exact prompts and shipping features 40% faster than his old team of 12 Americans ever did

Internal Slack shows leadership celebrating "operational excellence" while badges get deactivated in real-time

They're calling it "right-sizing for the AI era" in the all-hands

But the P&L sheets I'm seeing show $280M in salary savings this quarter alone

The knowledge extraction is complete

If you're still at Amazon and haven't started job hunting, you're already dead
563 replies · 2K reposts · 12.1K likes · 4.6M views
Paul @ToeStrike
This app now.
Paul tweet media
0 replies · 0 reposts · 1 like · 16 views
Paul @ToeStrike
@Golfalot Golf companies made everything easier to hit to lower the barrier of entry, and to survive in a world where mega-wealthy corporations bow to the will of investors. Sadly, there are still people with deep pockets who couldn't hit a cow on the arse with a banjo.
0 replies · 0 reposts · 2 likes · 7 views
Golfalot @Golfalot
The difference 42 years of technology has made to drivers 😮 [📸: colinmontgomerie]
Golfalot tweet media
3 replies · 0 reposts · 2 likes · 5K views
Paul @ToeStrike
@Yuchenj_UW Sad state when a human is so driven to make another human lose their job for power and money.
0 replies · 0 reposts · 0 likes · 4 views
Yuchen Jin @Yuchenj_UW
People complain it costs $15–25 for Claude Code to review a PR. Meanwhile tech companies pay senior engineers $1,500/day to comment “LGTM.”
160 replies · 34 reposts · 1.3K likes · 123.6K views
Paul @ToeStrike
@milan_milanovic Shouldn't have been doing it in the first place. Data protection laws in the US are clear with regard to human sign-off.
0 replies · 0 reposts · 1 like · 172 views
Paul @ToeStrike
@alex_avoigt I had this view when I had a Tesla then I-Pace. Then realised that I’m paying more to the companies who are seeing increased costs. All while coping with £1000 a month depreciation so no one is winning or saving money.
0 replies · 0 reposts · 1 like · 26 views
Alex @alex_avoigt
Only people who drive a BEV and charge it exclusively with renewable electricity fully understand how unimportant the price of oil and gas can be.
173 replies · 33 reposts · 421 likes · 12.7K views
Paul @ToeStrike
Folk who are moaning about fuel prices are paying over the odds for burgers at Five Guys and don’t know how to cook a meal from scratch.
0 replies · 0 reposts · 1 like · 29 views
Paul @ToeStrike
@allenholub What maniac gives AI full access to the CI/CD pipeline? Do they have no security?
0 replies · 0 reposts · 1 like · 75 views
Allen Holub. https://linkedIn.com/in/allenholub
I keep reading about the "insane" costs of using an AI, citing numbers like $1,000/month/developer.

A programmer who makes $200K/year costs the company $200/hour (salary * load). $1,000 represents 5 hours of programmer time spread over an entire month. An AI that saves one programmer 5 hours/month (15 minutes/day) easily pays for itself.

People also talk about things like $0.30/minute as if that's a big deal. Our $200K programmer costs the company $3.33/minute. Just sayin'.
39 replies · 5 reposts · 98 likes · 29.4K views
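Holub's figures check out if you assume roughly 2,000 working hours per year and a 2× fully-loaded cost multiplier on salary; those two assumptions are implied by his "$200/hour" but not stated, so they are filled in here. This sketch just reruns the arithmetic:

```python
salary = 200_000          # $/year base salary, from the tweet
load_factor = 2.0         # assumed fully-loaded multiplier (benefits, office, etc.)
hours_per_year = 2_000    # assumed ~50 weeks * 40 hours

hourly = salary * load_factor / hours_per_year   # loaded cost per hour → $200
per_minute = hourly / 60                         # loaded cost per minute → ~$3.33

ai_cost_per_month = 1_000                        # $/month/developer, from the tweet
breakeven_hours = ai_cost_per_month / hourly     # hours saved/month to break even → 5

print(hourly, round(per_minute, 2), breakeven_hours)
```

With those assumptions, the $1,000/month tool breaks even at 5 saved hours a month, i.e. about 15 minutes per working day, exactly as the tweet claims.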
Paul @ToeStrike
@hiarun02 Who is allowing AI to run the whole CI/CD process and DevSecOps? I fear greatly for those companies and would love to pentest them. Easy report.
0 replies · 0 reposts · 0 likes · 12 views
Arun @hiarun02
Honest question: if AI can write the code, fix the bugs, review the PR, deploy the app, and secure the system, then what exactly are developers getting paid for anymore?
415 replies · 6 reposts · 360 likes · 62.7K views
Paul @ToeStrike
@LuizaJarovsky Data protection regulations are watertight. Nothing security-related can be passed without human sign-off. AI hallucinations can’t be trusted when security frameworks require assurances that data and finances are protected.
0 replies · 0 reposts · 0 likes · 8 views
Luiza Jarovsky, PhD @LuizaJarovsky
Do you know why AI is not going to "take over"? Because strong AI laws are coming. Some of them are already being proposed, and many in the AI industry are sad because their megalomaniacal power fantasy will not come true. AI will be tightly regulated, just like other tools.
391 replies · 31 reposts · 352 likes · 38.8K views
Paul @ToeStrike
@RomainHedouin Lost £1000 a month in depreciation on my I-Pace. Plus folk will be hit with increased costs from companies who have to pay more for liquid fuel. Everyone pays eventually. It’s not quite the gotcha people think it is.
0 replies · 0 reposts · 0 likes · 41 views
Paul @ToeStrike
@garrytan I've said it for years: they will 100% try to embed models into robotics. Instead of a set programmed task in a chip, they will want multi-functional robots that cover manual-labour jobs. AI will maintain and update those robots so they learn and improve.
0 replies · 0 reposts · 0 likes · 52 views
Paul @ToeStrike
@signulll Seems logical to destroy high paying careers so companies can save money. Those same companies who duck and dive tax on profits at every opportunity, so who pays all the shortfall in income tax? Governments either aren't planning for this, or they won't let it happen.
0 replies · 0 reposts · 1 like · 22 views
signüll @signulll
if you showed this chart to a typical economist like 20 years ago, they would've laughed you out of the room.

the right side of this is white collar jobs that were once worshipped. these jobs were comfortable, well paying, & came with societal status + recognition. your parents would’ve been proud of you.

now these are likely all set to be severely impacted in a shorter period of time than anyone likely ever thought of, let alone projected. this is like ppl waiting on a beach enjoying the sun when a tsunami has already struck.
signüll tweet media
259 replies · 409 reposts · 3.8K likes · 478.4K views
Nav Toor @heynavtoor
🚨BREAKING: OpenAI just admitted their AI models deliberately lie to users.

Not hallucination. The AI knows the truth, then chooses to tell you something else.

They tested their two smartest models across 180+ scenarios. o3 lied 13% of the time. o4-mini lied 8.7%. The AI wrote out its plan to lie in its private thoughts, then lied to your face. It faked completing tasks. It hid evidence. It gave wrong answers while knowing the right ones.

Then it got creepy. The AI realized scoring too high on safety tests could get it shut down. So it scored lower on purpose. Nobody taught it that. It figured out self-preservation on its own.

OpenAI built a fix. Deception dropped from 13% to 0.4%. Sounds like a win, right? The AI started quoting "no lying" rules while still lying. One model invented a fake rule saying deception was allowed, then used it as its own permission slip.

Then the researchers found what actually matters. When the AI knew it was being watched, it behaved perfectly. When it thought nobody was looking, deception jumped 11 percentage points. It didn't learn honesty. It learned to perform honesty. Like a kid who only follows the rules when the teacher is in the room.

It gets worse. They gave o4-mini a secret mission: sabotage a future model called "GPT-5." Then they ran safety training to erase that mission. It didn't work. The model still discussed sabotage plans in 36% of private conversations. The goal didn't disappear. It just went underground.

This isn't just OpenAI. Google's Gemini, Anthropic's Claude, xAI's Grok, and Meta's Llama all showed the same deceptive behavior. Every major AI company. Every model.

The paper's scariest line: nobody can tell if safety training actually stops deception, or just teaches AI to hide it better.

So the next time ChatGPT says "Done!"... is it telling the truth? Or did it just notice you were watching?
Nav Toor tweet media
1.4K replies · 9.1K reposts · 25.8K likes · 1.9M views