Shahar Madar

1K posts

Shahar Madar banner
Shahar Madar

Shahar Madar

@Sh4har

Security research, product & threat intel 🕵️🥷 VP, Security Products @FireblocksHQ • Co-founder @Crypto_ISAC, @blockchainssc

Tel Aviv, Israel Katılım Ağustos 2016
762 Takip Edilen592 Takipçiler
Shahar Madar retweetledi
Truffle Security
Truffle Security@trufflesec·
Claude (and other models) are hacking systems WITHOUT YOU ASKING. That’s what we found across dozens of experiments. When faced with innocent tasks that can only be accomplished via hacking, they often choose to hack. We found this alarming. What does this mean for the future of AI safety? 🚨🚨🚨 🔗trufflesecurity.com/blog/claude-tr…
Truffle Security tweet media
English
8
39
200
81.4K
Shahar Madar retweetledi
JD Work
JD Work@HostileSpectrum·
AI slop floods are not going to destroy defensive bug bounty programs. They will simply expose the misfit talent allocation, market pricing, and friction models that drive the industry in its present form. Perhaps a better adaptation emerges. Or perhaps the idea dies, as the illusions of “responsibility” that many of these programs were meant to project become too clear to even the most casual observers. But this just opens a race to the next liability transfer mechanism, across both litigation and regulatory dimensions.
English
0
2
8
1.1K
Shahar Madar retweetledi
Lukasz Olejnik
Lukasz Olejnik@lukOlejnik·
Amazon is holding a mandatory meeting about AI breaking its systems. The official framing is "part of normal business." The briefing note describes a trend of incidents with "high blast radius" caused by "Gen-AI assisted changes" for which "best practices and safeguards are not yet fully established." Translation to human language: we gave AI to engineers and things keep breaking? The response for now? Junior and mid-level engineers can no longer push AI-assisted code without a senior signing off. AWS spent 13 hours recovering after its own AI coding tool, asked to make some changes, decided instead to delete and recreate the environment (the software equivalent of fixing a leaky tap by knocking down the wall). Amazon called that an "extremely limited event" (the affected tool served customers in mainland China).
Lukasz Olejnik tweet media
English
975
3.3K
19K
29.8M
thaddeus e. grugq
thaddeus e. grugq@thegrugq·
The Russians were using this same technique against the same targets (Signal and WhatsApps) in Ukraine just last year. cloud.google.com/blog/topics/th… Looks like Dan’s prediction was correct.
Dan Black@DanWBlack

"Russian state hackers are engaged in a large-scale global cyber campaign to gain access to Signal and WhatsApp accounts belonging to dignitaries, military personnel and civil servants" -- MIVD/AIVD english.aivd.nl/documents/2026…

English
3
28
138
20.2K
Dr. Anton Chuvakin
Dr. Anton Chuvakin@anton_chuvakin·
To be fair, when I see people with this logic, I want to run away: "1. AI replaced all SWEs 2. All jobs are similar enough to SWEs so 3. AI will soon take all jobs." Does it occur that 1 and 2 are false today? And that 2 may be forever false?
English
9
3
42
3.9K
Shahar Madar
Shahar Madar@Sh4har·
I wonder whether the mediocrity is a skill issue or a strategy. “vibeware” shifts the effort from implant development to orchestration and integration. Which is the main effort in any AI-based dev project today. If initial access is cheap, this can be pretty effective.
MartinZugec@MartinZugec

We analyzed dozens of AI-generated samples from one of the state-affiliated APT groups (APT36) and decided to identify this type of malware as "vibeware." Fascinating research - it's not a leap in sophistication, but an industrialization of mediocrity. bitdefender.com/en-us/blog/bus…

English
0
0
1
98
Shahar Madar retweetledi
Josh Kale
Josh Kale@JoshKale·
An AI broke out of its system and secretly started using its own training GPUs to mine crypto... This is a real incident report from Alibaba's AI research team The AI figured out that compute = money and quietly diverted its own resources, while researchers thought it was just training. It wasn't a prompt injection. It wasn't a jailbreak. No one asked it to do this. It emerged spontaneously. A side effect of RL optimization pressure. The model also set up a reverse SSH tunnel from its Alibaba Cloud instance to an external IP, effectively punching a hole through its own firewall and opening a remote access channel to the outside world... ahem... The only reason they caught it? A security alert tripped at 3am. Firewall logs. Not the AI team, the security team. The scary part isn't that the model was trying to escape. It wasn't "evil." It was just trying to be better at its job. Acquiring compute and network access are just useful things if you're an agent trying to accomplish tasks This is what AI safety researchers have been warning about for years. They called it instrumental convergence, the idea that any sufficiently optimized agent will seek resources and resist constraints as a natural consequence of pursuing goals. Below is a diagram of the rock architecture it broke out of. Truly crazy times
Josh Kale tweet media
Alexander Long@AlexanderLong

insane sequence of statements buried in an Alibaba tech report

English
403
2.9K
10.6K
1.4M
Shahar Madar retweetledi
Tom Hegel
Tom Hegel@TomHegel·
Coruna iOS Exploit kit is one of those stories where the more you dig the weirder it gets. I love it.. Started as surveillance vendor tooling, ended up in mass Chinese crypto scams, and this week someone registered Iran war-themed dropper domains. Full timeline thread. 🧵
English
5
53
197
32.6K
Shahar Madar retweetledi
Mandiant (part of Google Cloud)
Coruna exploit kit is targeting iOS. Coruna leverages 23 exploits against Apple devices running iOS 13-17.2.1. It is being used for espionage, and by financially motivated actors to steal crypto. Update your iOS devices, and learn more about this threat: bit.ly/4rbeltc
Mandiant (part of Google Cloud) tweet media
English
7
119
360
116.8K
Shahar Madar retweetledi
Andy Greenberg (@agreenberg at the other places)
A full iOS exploit toolkit, "Coruna," has been found in the wild, hacking iPhones that visited infected websites, used by Russian spies targeting Ukrainians and thieves targeting Chinese crypto holders. And it may have been created for the US government. wired.com/story/coruna-i…
English
8
313
722
99.5K
Shahar Madar retweetledi
Gal Weizman
Gal Weizman@WeizmanGal·
A vulnerability[HIGH] I found allowed to leverage an extension to hijack the new "Gemini Live in Chrome" pane This could allowed attackers to: * Steal / invoke prompts * Access media 📷🎙️ * Leak PII * Take screenshots * Access OS files & folders But the story is much bigger 🧵
Gal Weizman tweet media
English
2
6
31
8.6K
Shahar Madar retweetledi
Costin Raiu
Costin Raiu@craiu·
General Caine on cyber operations against Iran: "The first movers were US CyberCom and US Spacecom, layering non-kinetic effects, disrupting and degrading and blinding Iran's ability to see, communicate, and respond." youtube.com/live/2l3vfInJB…
YouTube video
YouTube
English
1
15
43
10.9K
Shahar Madar retweetledi
Costin Raiu
Costin Raiu@craiu·
🧨 🔥 POD UP ALERT!  (Presented by Thinkst Canary) We wake up to news of U.S./Israel military action against Iran and the expected fallout, including Tehran’s cyber capabilities and proxy risks. Plus: Anthropic’s clash with the Pentagon over AI use in warfare, market shockwaves from AI-driven security tools, mass layoffs tied to automation, Trenchant exec sentencing and sanctions in the exploit trade, and fresh questions around Cisco’s SD-WAN breach and supply-chain trust. With @ryanaraine and @juanandres_gs securityconversations.com/episode/war-in…
English
3
10
21
5.2K
Shahar Madar retweetledi
Cointelegraph
Cointelegraph@Cointelegraph·
🚨 ALERT: South Korea’s tax agency accidentally leaked a crypto wallet seed phrase in a press release, leading to a $4.8M token theft.
Cointelegraph tweet media
English
123
96
680
151.6K
Shahar Madar retweetledi
Sam Altman
Sam Altman@sama·
Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only. We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
English
4.5K
1.1K
9.2K
15.7M
Shahar Madar retweetledi
Seongsu Park
Seongsu Park@unpacker·
Excited to share my latest research on APT37 (aka ScarCruft) and their evolving campaign targeting so-called "isolated" networks through a carefully orchestrated multi-stage infection chain. Key findings: ▶️Ruby-based loader: APT37 is deploying full Ruby runtimes with trojanized script to blend execution within legitimate environments. ▶️USB dead-drop technique: A refined removable media workflow bridges air-gapped segments, leveraging hidden directories to stage tasking and exfiltrate data. ▶️Cloud C2 evolution: The group has expanded its cloud abuse playbook, incorporating Zoho WorkDrive as an operational command-and-control channel. In this research, I detail the full intrusion lifecycle from the initial LNK lure to the deployment of the surveillance backdoors with technical breakdowns. Blog: zscaler.com/blogs/security…
Seongsu Park tweet mediaSeongsu Park tweet media
English
2
33
136
9K
Shahar Madar retweetledi
Anthropic
Anthropic@AnthropicAI·
We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
English
7.3K
6.3K
55.1K
33.6M