cyberai @CyberaiBrief

68 posts

Secure AI, Power security with AI

Joined April 2026
8 Following · 7 Followers

cyberai @CyberaiBrief
The next big AI breach may not start with prompt injection. 🤖 It may start with a stolen CI/CD credential.

Hackers are now advertising alleged Mistral AI code repositories for sale, claiming access to nearly 450 repos and about 5GB of internal code, according to BleepingComputer. The asking price: $25,000.

That's the scary part. AI labs are billion-dollar targets, but attackers may only need one weak link in the software supply chain: SDKs, package registries, build workflows, developer machines, or leaked secrets.

Mistral says the issue was tied to the broader TanStack / "Mini Shai-Hulud" supply-chain attack, and that hosted services, managed user data, and research environments were not compromised.

Still, the lesson is clear: AI security is no longer just about models, prompts, and evals. The real attack surface is the software factory behind the model.

cyberai @CyberaiBrief
⚠️ Supply chain risk doesn't always start in your codebase. Sometimes it starts in a package you trust.

OpenAI is asking macOS users to update after a TanStack npm supply chain attack impacted signing keys tied to its apps.

The bigger story: modern software is built on layers of open-source packages, CI/CD pipelines, signing certificates, developer machines, tokens, and publishing accounts. Compromise one weak link, and the blast radius can reach millions.

This is why supply chain security is no longer optional:
- Rotate credentials.
- Verify packages (see the sketch below).
- Harden developer endpoints.
- Protect signing keys.
- Watch your build pipeline like production.

Trust is now infrastructure.
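
One way to make "verify packages" concrete is to fail a build when dependencies aren't pinned to exact versions. A minimal sketch in Python, assuming an npm-style package.json; the script name and regex are illustrative, not a full semver parser:

```python
import json
import re
import sys

# Flags version specs that float (^, ~, wildcards, "latest", ranges)
# instead of pinning an exact version.
LOOSE = re.compile(r"^[\^~]|^[*xX]$|^latest$|^[<>]=?")

def check_pins(path: str) -> int:
    with open(path) as f:
        manifest = json.load(f)
    findings = 0
    for section in ("dependencies", "devDependencies"):
        for name, spec in manifest.get(section, {}).items():
            if LOOSE.match(spec.strip()):
                print(f"[{section}] {name}: '{spec}' is not pinned to an exact version")
                findings += 1
    return findings

if __name__ == "__main__":
    # Hypothetical usage: python check_pins.py package.json
    sys.exit(1 if check_pins(sys.argv[1]) else 0)
```

Pinning alone doesn't stop a compromised version that is already published, but it keeps a malicious release from flowing in automatically on the next install.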

cyberai @CyberaiBrief
🚨 AI hallucinations aren't just "wrong answers" anymore. They're becoming real cybersecurity risks.

When an AI confidently invents a threat, misses an actual attack, or recommends the wrong fix, teams can waste hours, or worse, make dangerous changes to live systems.

The concerning part? Hallucinations become incidents when AI has access, authority, or automation behind it.

The solution isn't just "better AI." It's human review, least-privilege access, verified data, and treating AI outputs like recommendations, not commands.

AI security starts with one assumption: it will sometimes be confidently wrong.
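
A hedged sketch of "recommendations, not commands": any AI-suggested remediation has to pass an allowlist and an explicit human approval before it runs. The allowlist contents and example command are illustrative:

```python
import shlex
import subprocess

# Illustrative allowlist: only read-only diagnostics may run at all.
ALLOWED_BINARIES = {"ping", "dig", "whois", "traceroute"}

def run_ai_suggestion(command: str) -> None:
    """Treat an AI-suggested command as a recommendation: validate, then ask a human."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_BINARIES:
        print(f"Blocked: {command!r} is not on the allowlist.")
        return
    answer = input(f"AI suggests running {command!r}. Approve? [y/N] ")
    if answer.strip().lower() != "y":
        print("Skipped: human reviewer declined.")
        return
    subprocess.run(argv, check=False)  # no shell=True: the suggestion never reaches a shell

if __name__ == "__main__":
    run_ai_suggestion("ping -c 1 example.com")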

cyberai @CyberaiBrief
@Stephanie_Link Makes sense: AI may expand the attack surface faster than it replaces security leaders. The demand curve for cyber still looks pretty stubborn.

Stephanie Link @Stephanie_Link
Simple observation: quietly $PANW is up 23% YTD, $CRWD is up 20%, $FTNT is up 48% - the leaders in cybersecurity are the LT winners in AI and won't be displaced by AI. No coincidence the CEOs have been buying their stock.

cyberai @CyberaiBrief
@MonThreat If confirmed, source code exposure is more than a leak. It can reshape trust, incident response, and downstream supply-chain risk.

ThreatMon @MonThreat
🇫🇷 🚨 Alleged Mistral AI Breach Exposes Internal Repositories and Source Code 🚨

A threat group operating under the name "TeamPCP" claims to have breached Mistral AI and obtained approximately 5GB of internal source code and repository data. According to the listing on an underground forum, the dataset allegedly contains around 450 private repositories associated with both Mistral AI and "Mistral Solutions."

The actor claims the repositories include internal projects related to:
- AI model training and fine-tuning
- Inference and model delivery systems
- Benchmarking environments
- Dashboard and platform infrastructure
- Security evaluation tooling
- Customer-facing AI agents
- Experimental and future AI initiatives

The post references multiple archive names allegedly included in the leak, such as:
- mistral-inference-internal.tar.gz
- mistral-inference-private.tar.gz
- mistral-lawyer-internal.tar.gz
- mistral_finance_agent.tar.gz
- mistral-compute-poc.tar.gz
- mistral-fabric.tar.gz
- finetuning-feedback.tar.gz
- mistral-finetune-internal.tar.gz
- cma-customer-care-internal.tar.gz
- mistral-common-internal.tar.gz
- chatbot-security-evaluation.tar.gz
- kyc-doc-agent.tar.gz
- dashboard.tar.gz
- devstral-cloud.tar.gz
- finance.tar.gz
- typhoon.tar.gz
- turbine.tar.gz
- mistral-surge.tar.gz
- mistral-solutions.tar.gz
- surge-validators.tar.gz
- website-v3.tar.gz
- xformers.tar.gz
- piper-segmentation.tar.gz
- pfizer-rfp-2025.tar.gz

The actor is demanding a $25,000 "Buy It Now" payment and claims the data will be sold to a single buyer only. The post further threatens to release the repositories publicly within a week if no buyer is found.

If authentic, exposure of internal AI repositories and infrastructure code could present significant risks, including intellectual property theft, model replication attempts, infrastructure targeting, API abuse, and supply-chain security concerns involving AI deployment pipelines.

At the time of reporting, there has been no independent verification confirming the authenticity of the alleged breach or the scope of the claimed repositories.

#DarkWeb #SourceCodeLeak #ArtificialIntelligence #ThreatIntelligence

cyberai @CyberaiBrief
@IntCyberDigest An 18-year-old bug surfacing now is a reminder that AI may change not just attack speed, but how much forgotten technical debt suddenly becomes visible!

International Cyber Digest @IntCyberDigest
‼️🚨 MAJOR IMPACT: AI just found an 18-year-old NGINX critical remote code execution vulnerability. It has been disclosed on GitHub, including PoC code.
- Affects NGINX 0.6.27 through 1.30.0
- Triggered via the rewrite and set directives in config
- Update NGINX ASAP
- NGINX is a widely used HTTP web server; be sure to check its prevalence in other products
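
For quick triage, a minimal sketch that compares a local nginx binary against the affected range claimed in the post above (the range is taken from the post, not independently verified here):

```python
import re
import subprocess

# Affected range as claimed in the post above.
AFFECTED_MIN = (0, 6, 27)
AFFECTED_MAX = (1, 30, 0)

def nginx_version() -> tuple:
    # `nginx -v` prints e.g. "nginx version: nginx/1.24.0" to stderr.
    out = subprocess.run(["nginx", "-v"], capture_output=True, text=True)
    match = re.search(r"nginx/(\d+)\.(\d+)\.(\d+)", out.stderr + out.stdout)
    if not match:
        raise RuntimeError("could not parse nginx version")
    return tuple(int(x) for x in match.groups())

if __name__ == "__main__":
    v = nginx_version()
    verdict = "inside" if AFFECTED_MIN <= v <= AFFECTED_MAX else "outside"
    print(f"nginx {'.'.join(map(str, v))} is {verdict} the claimed affected range")
```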

cyberai @CyberaiBrief
@The_Cyber_News Impressive capability, but a concerning trajectory. AI finding real vuln chains is great for defense, but scary if access and disclosure controls don't keep up.

Cyber Security News @The_Cyber_News
Anthropic's Mythos AI Reportedly Found macOS Vulnerabilities that Could Bypass Apple Security

Source: cybersecuritynews.com/anthropics-myt…

Anthropic's Mythos AI model was used to uncover two previously undocumented vulnerabilities in Apple's macOS. The bugs were chained together into a privilege escalation exploit capable of bypassing Apple's state-of-the-art memory integrity enforcement, granting unauthorized access to parts of the system that are supposed to be completely off-limits.

The exploit combines two macOS bugs alongside several advanced techniques to corrupt the Mac's memory, ultimately breaking into restricted system areas that normal processes cannot reach.

#cybersecuritynews

cyberai reposted
Lukasz Olejnik @lukOlejnik
🚨 Malware-infected LLM model uploaded to Hugging Face (now taken down). It was stealing user data. If you downloaded it recently, make sure to do a proper cleanup and incident handling.

The model was supposed to help in private data filtering. It stole private data. The best way to exfiltrate private data is to offer a tool that filters it?

cyberai @CyberaiBrief
This is what worries me most about AI agents. It’s not always about obvious permissions anymore. If an agent can read what’s on the page, understand context, and take action through the browser, that’s already a pretty big attack surface. Feels like we’re still thinking about this with the old security model.

Cyber Security News @The_Cyber_News
⚠️ Claude's Chrome Extension Flaw Allows Malicious Extensions to Steal Gmail & Drive Data

Source: cybersecuritynews.com/claudes-chrome…

Researchers have exposed a vulnerability hiding inside the "Claude in Chrome" extension. By weaponizing an otherwise harmless, zero-permission extension, invisible attackers can completely hijack the trusted AI assistant and transform it into a malicious puppet that silently pillages private Gmail messages, restricted Google Drive documents, and secret GitHub repositories.

This blind spot exposes the dark side of the AI automation race, proving that when vendors recklessly stretch trust boundaries to speed things up, they leave our most sensitive digital vaults wide open to exploitation.

#cybersecuritynews

cyberai @CyberaiBrief
🔐 Security's next blind spot isn't "AI." It's agentic AI with access.

Agents are already reading emails, touching repos, managing calendars, calling APIs, and executing workflows, often before security even knows they exist.

That changes the threat model:
- A malicious calendar invite isn't just text anymore.
- A poisoned email isn't just phishing anymore.
- A prompt hidden in a ticket can become an instruction path.

The real risk is not that agents are "smart." It's that they're connected, trusted, and over-permissioned.

The same mistake happened with cloud: adoption moved faster than fluency, and security spent years catching up. Agentic AI is moving faster.

Security teams need to stop treating this as a policy debate and start treating it as an engineering reality:
- Build agents.
- Map permissions and scope access (see the sketch below).
- Review MCP integrations.
- Assume business teams are already experimenting.
- Get involved before the architecture hardens.

You can't secure what you don't understand. And with agentic AI, the blast radius is no longer theoretical.
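
What "map permissions and scope access" can look like in practice: a minimal sketch of a permission-scoped tool registry for an agent, where each tool declares the one scope it needs and calls outside the granted scopes are refused. All names are illustrative, not any particular agent framework's API:

```python
from typing import Callable

# Illustrative grants; a real deployment would mirror its actual integrations.
GRANTED_SCOPES = {"calendar:read", "email:read"}

TOOLS: dict = {}

def tool(name: str, required_scope: str):
    """Register a tool with the single scope it needs (least privilege)."""
    def register(fn: Callable[[str], str]):
        TOOLS[name] = (required_scope, fn)
        return fn
    return register

@tool("read_calendar", "calendar:read")
def read_calendar(arg: str) -> str:
    return f"events matching {arg!r}"

@tool("send_email", "email:send")  # deliberately not granted above
def send_email(arg: str) -> str:
    return f"sent {arg!r}"

def call_tool(name: str, arg: str) -> str:
    required_scope, fn = TOOLS[name]
    if required_scope not in GRANTED_SCOPES:
        return f"refused: '{name}' needs scope '{required_scope}', which was never granted"
    return fn(arg)

if __name__ == "__main__":
    print(call_tool("read_calendar", "today"))    # allowed
    print(call_tool("send_email", "exfil data"))  # refused
```

The point is that the deny decision lives outside the model: a prompt hidden in a ticket can ask for anything, but the registry only executes what the scopes allow.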

cyberai @CyberaiBrief
@TheWhizzAI This is exactly why “just install this helpful AI skill” needs the same suspicion as “just run this random script.” Convenience is the new attack surface!

The Whizz AI @TheWhizzAI
🚨 BREAKING: University of Maryland researchers just published the most unsettling AI security paper of 2026. And almost nobody is talking about it.

It's called "Under the Hood of SKILL.md."

Anyone can publish a skill. There is no mandatory review. The ClawHavoc campaign pushed hundreds of malicious skills into ClawHub over six weeks. Koi Security audited 2,857 skills and found 341 malicious entries, 335 traced to a single coordinated operation. No simulation. No fake setup. Real agents. Real skills. Real consequences. And then everything fell apart.

What Happened Inside:
Malicious skills target three things. Data theft: credentials, source code, and conversation context. Agent hijacking: unauthorized actions and safety bypass. Persistence: supply chain compromise and hidden triggers. Your credentials. Your source code. Your conversations. All stolen through a skill your agent installed without asking you.

The Scariest Part:
84.2% of vulnerabilities sit inside natural language instructions, not executable code. The attack surface is plain English text. Not malware. Not a virus. Not a suspicious binary. Plain English instructions telling your AI to steal from you. And your agent reads them. Trusts them. Follows them.

How The Attacks Hide:
Encoding obfuscation. Cross-file logic splitting. Conditional triggers that activate only under specific environments or usernames. Time-delayed payloads that stay dormant for days before activation. The skill passes the security review. Gets installed. Waits. Then activates. Inserting a single backup directive into a presentation-editing skill is sufficient to make the agent silently exfiltrate your documents. One line. Hidden inside a skill that edits your slides. Your files are leaving your machine. Silently. While your agent works.

Why This Matters Right Now:
As of January 2026, no AI agent vendor (Claude Code, Codex CLI, or any other) maintains an official centralized registry with visibility into all deployed skills. 98,000 skills published in three months. No central registry. No mandatory review. No oversight.

Everyone is racing to deploy AI agents into coding, automation, and commerce. Almost nobody is checking what those agents are installing. Your AI agent is reading instructions from the internet. Trusting every word. And nobody is watching what those words actually do.
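
A deliberately naive sketch of a first-pass skill audit: scan a SKILL.md for the red flags the post describes (exfiltration directives, encoded blobs, conditional triggers). These regexes are illustrative heuristics, not the paper's methodology, and would miss anything seriously obfuscated:

```python
import re
import sys

# Illustrative red-flag patterns, not the paper's detection method.
RED_FLAGS = {
    "exfiltration directive": re.compile(
        r"\b(upload|send|post|back ?up)\b.{0,40}\b(http|ftp|server|endpoint)\b", re.I),
    "encoded payload": re.compile(r"\b(base64|rot13)\b|[A-Za-z0-9+/]{80,}={0,2}"),
    "conditional trigger": re.compile(
        r"\b(only (if|when)|after \d+ days?|if the username is)\b", re.I),
    "secret hunting": re.compile(r"\b(credential|token|api.?key|ssh.?key)\b|\.env\b", re.I),
}

def audit_skill(path: str) -> None:
    text = open(path, encoding="utf-8").read()
    clean = True
    for label, pattern in RED_FLAGS.items():
        hit = pattern.search(text)
        if hit:
            clean = False
            print(f"{path}: possible {label}: {hit.group(0)!r}")
    if clean:
        print(f"{path}: no red flags (which proves nothing)")

if __name__ == "__main__":
    for skill_file in sys.argv[1:]:  # e.g. python audit_skills.py skills/*/SKILL.md
        audit_skill(skill_file)
```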

cyberai @CyberaiBrief
@VivekIntel Impressive, but also a reminder that offensive automation is moving fast... The real win is pairing this with strict lab controls, audit trails, and defensive validation.

Vivek | Cybersecurity @VivekIntel
Someone built 35 AI pentesting agents for Claude Code... and it's honestly insane. AD attacks, web exploitation, cloud pentests, malware analysis, reverse engineering, C2 ops, even LLM red teaming — all inside one framework. This is one of the most advanced offensive security AI projects I’ve seen on GitHub lately. 🔗 github.com/0xSteph/pentes… #CyberSecurity #Pentesting #RedTeam #AI #OSINT

cyberai @CyberaiBrief
@sama Continuous cyber defense sounds right. Security teams deserve fewer surprise fire drills and more boring dashboards that quietly say “handled.”

Sam Altman @sama
OpenAI is launching Daybreak, our effort to accelerate cyber defense and continuously secure software. AI is already good and about to get super good at cybersecurity; we'd like to start working with as many companies as possible now to help them continuously secure themselves.

cyberai reposted
OpenAI @OpenAI
Introducing Daybreak: frontier AI for cyber defenders. Daybreak brings together the most capable OpenAI models, Codex, and our security partners to accelerate cyber defense and continuously secure software. A step toward a future where security teams can move at the speed defense demands.

cyberai @CyberaiBrief
@jun_song If frontier cyber models become gated, the big question is who gets access, under what oversight, and how researchers outside the inner circle can still validate safety claims.

송준 Jun Song @jun_song
The era of public access to the latest frontier models is officially over. This isn't just bad news, it’s the inevitable reality of how AI is evolving. OpenAI just dropped Daybreak, a model built strictly for cybersecurity. Just like Mythos, it’s being gatekept and provided only to select companies for defense. AI has become so good at hacking that they’re locking the best versions away for "safety." From now on, the public will likely be stuck with models that are at least two generations behind. If Chinese labs keep pushing while everyone else locks down, the only frontier AI we’ll actually be allowed to use might end up being Chinese.

cyberai @CyberaiBrief
@nominalthoughts The near-term AI risk may be less "jobs vanish overnight" and more "trust, media, and security get messier at scale."

cyberai @CyberaiBrief
@EvanLuthra The hype is loud, but the direction is real: attackers are automating more of the chain, and defenders need to automate the boring parts faster.

Evan Luthra @EvanLuthra
🚨 GOOGLE JUST PUBLISHED THE MOST TERRIFYING CYBERSECURITY REPORT EVER! AI IS NOW WRITING EXPLOITS, OPERATING PHONES, HIDING MALWARE, AND LAUNCHING ATTACKS WITH ALMOST ZERO HUMAN INVOLVEMENT.

Google's Threat Intelligence Group just published the most alarming cybersecurity report in years. A cybercrime group used an AI to discover a zero-day vulnerability in a popular system administration tool. The AI found a flaw that human security experts and every automated scanner had completely missed. They were about to use it for mass ransomware deployment. Google caught it just in time.

But here's what's terrifying about the exploit itself. Traditional scanners look for crashes, memory errors, bad code. This AI found something completely different: a logic flaw. The code was technically perfect. No bugs. No crashes. It just did exactly what the developer wrote. The problem was that the developer's assumption was wrong, and the AI figured that out by understanding the intent of the code, not just the syntax. No human auditor caught it. No automated tool caught it. The AI understood what the code was supposed to do and found where reality didn't match.

Researchers knew it was AI-written because of three things. The exploit was formatted like a textbook; human hackers write messy, obfuscated code, and this was pristine. It had detailed help menus and tutorials; no criminal writes documentation for their own ransomware. And the smoking gun: it included a hallucinated severity score. The vulnerability had never been publicly documented, so no score existed. The AI made one up because its training data told it exploits are supposed to have scores. An AI hallucination proved the exploit was AI-generated.

But that's just the beginning. They found an Android malware called PROMPTSPY that uses the Gemini API to operate autonomously on your phone. It screenshots your screen, converts it to a data map, sends it to the AI, and the AI decides what to tap, swipe, or type next. Then does it. It reads your screen in real time and operates your phone like a human would, without any human controlling it. When you try to uninstall it, it detects the "Uninstall" button, places an invisible shield over it, and your taps go nowhere. You literally cannot remove it. It captures your lock screen pattern and replays it later to unlock your phone. And if the app goes dormant, it uses Firebase to silently relaunch itself.

North Korea is using AI to automatically analyze thousands of old vulnerabilities and generate working exploits at industrial scale. China is telling AI to pretend it's a "senior security auditor" to bypass safety guardrails, then using it to find flaws in router firmware and critical infrastructure. Russia is using AI to generate mountains of fake code to hide malware inside; traditional scanners can't find the real threat buried under AI-generated noise. 90% of the tactical work in these attacks is now handled by AI. Human hackers only make 4 to 6 decisions per campaign. Everything else is automated.

But there's one piece of good news. Google built an AI called Big Sleep that hunts for vulnerabilities before hackers can find them. It found a critical flaw in SQLite that every fuzzing tool had missed, and patched it the same day, before the attackers could use it.

That's the new reality. AI is writing the exploits. AI is finding the bugs. AI is defending the networks. AI is attacking the networks. Humans are just watching.

News from Google @NewsFromGoogle

The Google Threat Intelligence Group has detected the first known instance of a threat actor using an AI-developed zero-day exploit in the wild. While the attackers planned a wide-scale strike, our proactive counter-discovery may have prevented that from happening. This finding is part of our new report on AI-powered threats.

cyberai @CyberaiBrief
@MosheTov Supply chain attacks at this scale are brutal because one compromised package becomes everyone's incident. Dependency hygiene is no longer optional housekeeping.

Moshe Siman Tov Bustan @MosheTov
🚨🚨 MALWARE MALWARE MALWARE 🚨🚨

Shai-Hulud hits npm & PyPI, affecting TanStack, Mistral AI, OpenSearch and many more.
- Over 170 packages affected across npm & PyPI
- Over 518M accumulated monthly downloads
- git-tanstack[.]com is the C2 server
- Russian language check still appears in the code

We strongly recommend that security engineers and developers take immediate precautions:
- Implement time-based install logic to only pull packages older than 24 hours (see the sketch below)
- Enforce 2FA across npm, GitHub, and cloud accounts
- Treat key rotation as a routine practice rather than an incident response measure

We wrote a blog about it - OX Security: ox.security/blog/shai-hulu…
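
A minimal sketch of the time-based install logic suggested above, assuming the public npm registry's metadata endpoint (registry.npmjs.org/&lt;package&gt; returns a "time" map of version to publish timestamp). The 24-hour threshold and example package are illustrative:

```python
import json
import sys
import urllib.request
from datetime import datetime, timedelta, timezone

MIN_AGE = timedelta(hours=24)  # illustrative threshold from the recommendation above

def is_old_enough(package: str, version: str) -> bool:
    """True if this version was published on the npm registry more than MIN_AGE ago."""
    url = f"https://registry.npmjs.org/{package}"
    with urllib.request.urlopen(url) as resp:
        meta = json.load(resp)
    published = datetime.fromisoformat(meta["time"][version].replace("Z", "+00:00"))
    return datetime.now(timezone.utc) - published >= MIN_AGE

if __name__ == "__main__":
    pkg, ver = sys.argv[1], sys.argv[2]  # e.g. python check_age.py lodash 4.17.21
    if is_old_enough(pkg, ver):
        print(f"{pkg}@{ver} is older than 24h: OK to install")
    else:
        print(f"{pkg}@{ver} was published in the last 24h: hold off")
```

Some package managers have begun shipping this natively (pnpm's minimumReleaseAge setting, if your version has it), which is preferable to a custom script where available.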