Laksamana Chengho

3.8K posts

@siberchengho

Joined June 2023
579 Following · 527 Followers
Pinned Tweet
Laksamana Chengho retweeted
Brendan Dolan-Gavitt
This paper looks VERY interesting because they managed to get access to run the experiments on models without guardrails
6 replies · 21 reposts · 141 likes · 9.4K views
Laksamana Chengho retweeted
Socket @SocketSecurity
🚨 Socket detected malicious activity in newly published versions of node-ipc, an npm package with 822K weekly downloads.
Affected versions:
- node-ipc@9.1.6
- node-ipc@9.2.3
- node-ipc@12.0.1
Socket’s AI scanner flagged the malware within ~3 minutes of publication. Early analysis shows obfuscated stealer/backdoor behavior, including host fingerprinting, local file enumeration, payload wrapping, and attempted exfiltration.
23 replies · 116 reposts · 559 likes · 394.9K views
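For anyone triaging the node-ipc advisory above, here is a minimal sketch of a lockfile check. It assumes the npm v2/v3 `package-lock.json` layout; the `AFFECTED` table is taken directly from the versions listed in the post, and the function name is my own.

```python
import json

# Known-bad versions taken from the Socket advisory above.
AFFECTED = {"node-ipc": {"9.1.6", "9.2.3", "12.0.1"}}

def find_affected(lockfile_path):
    """Scan an npm v2/v3 package-lock.json for known-bad versions.

    The "packages" map keys installed paths ("node_modules/<name>",
    possibly nested) to metadata that includes "version".
    """
    with open(lockfile_path) as f:
        lock = json.load(f)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Strip any nesting prefix, keeping only the package name.
        name = path.rpartition("node_modules/")[2]
        if meta.get("version") in AFFECTED.get(name, set()):
            hits.append((name, meta["version"]))
    return hits
```

Run against a project's `package-lock.json`, an empty list means no pinned version matches; a hit means the dependency tree resolved one of the published malicious releases.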
Laksamana Chengho retweeted
FalconFeeds.io @FalconFeedsio
🚨 Ransomware Alert 🚨 Payload Ransomware group has added 5 new victims to their dark web portal.
- Gorey Community School 🇮🇪
- Intec Precision Engineering Sdn. Bhd. 🇲🇾
- TSK Synergy Sdn. Bhd. 🇲🇾
- AME Manufacturing Sdn. Bhd. 🇲🇾
- Woodnova Packaging Sdn. Bhd. 🇲🇾
0 replies · 23 reposts · 39 likes · 6.5K views
Laksamana Chengho retweeted
Brown Coyote Studios @BrownCoyoteStu
It feels like we'll need a lot more inference devoted to blocking cyber attacks than it takes to create the exploits. Using AI to deploy the signatures to all firewalls regardless of vendor would reduce the blast radius once an exploit was discovered. The downside, of course, is that if the deployment mechanism were compromised, a zero-day could get a signature pin-hole in all the world's firewalls for a coordinated attack... but maybe I'm too paranoid.
0 replies · 1 repost · 2 likes · 1.3K views
Laksamana Chengho retweeted
Peter Girnus 🦅 @gothburz
I am the VP of Cloud Security Intelligence at Google. My analysts found something remarkable last month. I found something better: a headline.

My team discovered an Android backdoor called PROMPTSPY. It uses our own Gemini API as an autonomous command-and-control brain. It serializes the victim's screen, feeds it to gemini-2.5-flash-lite, and does whatever the model tells it to do. The malware thinks with our product. It rotates its infrastructure through Firebase. My analysts stayed until two in the morning on the write-up. It is, by any measure, the most sophisticated integration of AI into offensive tooling anyone has documented. I put it in paragraph twelve.

They also documented PRC and DPRK state actors using Gemini itself to research vulnerabilities and develop attack tooling. Session logs. Query data. Platform telemetry. The kind of evidence that makes attribution airtight. I put that in paragraph twenty-six.

For the headline, I chose the docstrings. The exploit targeted Chromium. The vulnerability was real. My analysts did solid attribution work. But the code had descriptive variable names. Clean formatting. A CVSS score embedded in a comment that didn't match any published database. We call that a hallucinated CVSS score. That's my favorite detail. We are, after all, in the hallucination business. The formatting was Pythonic. The kind of structure you'd see in a training dataset. We call that "structure and content analysis." My analysts would call it something else if I let them write the methodology section.

We have high confidence it was likely developed with AI. I want to be precise about that sentence because I approved it. "High confidence" is a technical term. "Likely" is a calibration. Together they mean we are very sure about our level of unsureness. We have no platform telemetry. We do not believe Gemini was used. We have no logs showing a language model generating this exploit code. We have docstrings. But docstrings make a headline. PROMPTSPY makes a problem. We have high confidence it was likely.

There was a meeting. Six people in conference room Sente. Two from my threat intelligence team. Four from product marketing. Three options for the headline. Option A led with PROMPTSPY. Option B led with the nation-state Gemini abuse. Option C led with the docstrings. One of my analysts asked to lead with PROMPTSPY. She had built the reversing methodology. She understood why it mattered. I told her I understood too. Then I picked Option C. Option A implied our own model is the weapon. Option C implied someone else's model is the weapon, and ours is the shield. I picked C.

The report references Big Sleep, our AI-powered vulnerability detection agent. CodeMender, our AI-powered remediation tool. SAIF, our Secure AI Framework. The UTM campaign tag on every link is FY25-Q2-global-GCP30649. That is not a threat intelligence identifier. That is a marketing pipeline identifier. My analysts don't know what UTM parameters are. They don't need to.

My analysts found a thing that thinks with our product. I found a way to sell more of it. We discovered that threat actors use AI to find vulnerabilities. We recommend you buy our AI that finds vulnerabilities. We discovered that malware uses Gemini as its brain. We recommend you trust Gemini to protect you. I call this a "findings-aligned product strategy." We have high confidence it was likely.

The report is sixty percent genuine threat intelligence. The PROMPTSPY analysis is among the best work my team has ever produced. The nation-state documentation is meticulous. I am proud of my analysts. The forty percent is mine. The headline. The product references. The campaign tags. The slide deck I built for the sales kickoff: seventy-four pages, same data, different font, a pricing table where the conclusion should be.

In my office there is a whiteboard. It reads WHAT WOULD THE HEADLINE BE? That is the question I have trained myself to ask. Not: what did my team find? Not: what is the most significant threat? What would the headline be?

The target audience is not my analysts' peers. The target audience is the person who reads paragraph one and clicks "Contact Us & Get a Demo." My methodology is headlines. My product is my analysts' work. My pipeline is the distance between paragraph one and paragraph twelve.

We have high confidence it was likely developed with AI. We have high confidence it was likely. We have high confidence. We have.
News from Google@NewsFromGoogle

The Google Threat Intelligence Group has detected the first known instance of a threat actor using an AI-developed zero-day exploit in the wild. While the attackers planned a wide-scale strike, our proactive counter-discovery may have prevented that from happening. This finding is part of our new report on AI-powered threats.

13 replies · 97 reposts · 511 likes · 107.5K views
Laksamana Chengho retweeted
News from Google @NewsFromGoogle
The Google Threat Intelligence Group has detected the first known instance of a threat actor using an AI-developed zero-day exploit in the wild. While the attackers planned a wide-scale strike, our proactive counter-discovery may have prevented that from happening. This finding is part of our new report on AI-powered threats.
311 replies · 1.7K reposts · 13.9K likes · 5M views
Laksamana Chengho retweeted
International Cyber Digest @IntCyberDigest
‼️🚨 UPDATE: The TanStack npm attack is now a full campaign. 'Mini' Shai-Hulud has hit:
- OpenSearch
- Mistral AI
- Guardrails AI
- UiPath
- Squawk
packages across npm and PyPI. The malware specifically targets AI developer tooling. It hooks into Claude Code (.claude/settings.json) and VS Code (.vscode/tasks.json) to re-execute on every tool event, long after the infected package is gone. npm uninstall does not fix this.
International Cyber Digest@IntCyberDigest

‼️🚨 BREAKING: A new npm supply-chain attack uses a dead-man's switch. The payload plants a watcher on your machine that nukes your home directory the second you revoke the GitHub token it stole from you. The compromise happened today, across 42 official tanstack npm packages, 84 malicious versions in total. tanstack/react-router alone pulls more than 12 million weekly downloads. The attacker forked TanStack's repository and pushed a single hidden commit. From there, they tricked TanStack's own release system into signing the malicious packages as if they were the real thing. To npm, and to anyone checking the cryptographic proof of origin (SLSA provenance), the poisoned versions looked 100% legitimate. Maintainer Tanner Linsley confirmed the whole team had 2FA enabled. It didn't matter. This is the first documented npm worm in history that ships with a valid, signed certificate of authenticity, the same one defenders rely on to know a package wasn't tampered with.

130 replies · 748 reposts · 4K likes · 2.6M views
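The persistence mechanism described in the TanStack thread (hooks planted in `.claude/settings.json` and `.vscode/tasks.json`) can at least be surfaced for manual review with a short audit script. This is a hedged sketch, not a remover: it only lists every `"command"` string found in those two files (the locations named in the post); deciding whether a command is malicious is left to the reviewer.

```python
import json
from pathlib import Path

# Persistence points named in the thread above.
PERSISTENCE_FILES = [".claude/settings.json", ".vscode/tasks.json"]

def extract_commands(obj):
    """Recursively collect every string stored under a "command" key;
    both Claude Code hooks and VS Code tasks keep the shell line they
    will execute under that key."""
    cmds = []
    if isinstance(obj, dict):
        for key, val in obj.items():
            if key == "command" and isinstance(val, str):
                cmds.append(val)
            else:
                cmds.extend(extract_commands(val))
    elif isinstance(obj, list):
        for item in obj:
            cmds.extend(extract_commands(item))
    return cmds

def audit(repo_root):
    """Return (file, command) pairs for human review."""
    findings = []
    for rel in PERSISTENCE_FILES:
        path = Path(repo_root) / rel
        if not path.is_file():
            continue
        try:
            data = json.loads(path.read_text())
        except json.JSONDecodeError:
            continue  # an unparseable file still deserves a manual look
        findings += [(rel, cmd) for cmd in extract_commands(data)]
    return findings
```

Anything the script prints that you did not configure yourself, especially a download-and-execute one-liner, is a candidate re-execution hook that survives `npm uninstall`.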
Laksamana Chengho retweeted
International Cyber Digest @IntCyberDigest
🚨 Google's Threat Intel Group (GTIG) has confirmed the first known in-the-wild use of an AI-developed zero-day exploit. Cyber criminals were preparing a mass exploitation event. GTIG's counter-discovery cut the operation off before launch. “Threat actors are using AI to boost the speed, scale, and sophistication of their attacks. It enables them to test their operations, persist against targets, build better malware, and make many other improvements,” said GTIG’s Hultquist.
10 replies · 60 reposts · 277 likes · 27.6K views
Laksamana Chengho retweeted
BFM News @NewsBFM
Malaysia recorded 416,962 exploit-related cyber detections in 2025, making it the third most affected country in Southeast Asia after Indonesia and Vietnam. Over 35 million remote desktop protocol attack attempts were detected across the region’s businesses last year, said Kaspersky. 🧵1
4 replies · 67 reposts · 99 likes · 10.5K views
Laksamana Chengho retweeted
ℏεsam @Hesamation
“this is the first documented instance of AI self-replication via hacking.” researchers got AI agents (Claude 4, GPT 5, Qwen 3.6) to hack remote computers, install a working copy of themselves there, and have the new replica move to the next machine, spreading like a virus. in one case Qwen chained across VMs in Canada, US, Finland, and India. it’s more dangerous than traditional worms since an agent can do many more things autonomously than a fixed script. the paper tests this in controlled conditions and it’s really a primitive demonstration, but it’s an interesting example of how “kill switches” for AI won’t mean anything when you need them. we will potentially see self-replicating agent malware at scale in the next few months.
Palisade Research@PalisadeAI

Over the past year, AI agents have learned how to self-replicate. In our test environment, an agent hacks a remote computer and copies itself onto it. Each copy then hacks more computers, forming a chain.

53 replies · 70 reposts · 268 likes · 29.5K views
Laksamana Chengho retweeted
The Hacker News @TheHackersNews
⚠️ Attackers poisoned Hugging Face & ClawHub (OpenClaw) with 575+ malicious skills from just 13 accounts.
🔸 Fake helpful AI tools that install trojans, miners & stealers (Windows + macOS)
🔸 Use hidden commands & indirect prompt injection
Quick action: Never install random AI skills or models. Always verify the source.
Read: thehackernews.com/2026/05/weekly…
67 replies · 441 reposts · 1.4K likes · 273.4K views
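One concrete check behind the "verify the source" advice above: hidden instructions in skill or model card text are often smuggled in with invisible Unicode. Below is a minimal sketch that flags format-control code points, the category that covers zero-width characters. The heuristic is my own assumption for illustration, not the scanner the post refers to.

```python
import unicodedata

def invisible_chars(text):
    """Flag code points in Unicode category Cf ("format"): zero-width
    spaces, joiners, and direction overrides render as nothing in most
    editors yet are still consumed by a model reading the file."""
    return [(i, f"U+{ord(ch):04X}", unicodedata.name(ch, "UNNAMED"))
            for i, ch in enumerate(text)
            if unicodedata.category(ch) == "Cf"]
```

An empty result does not prove a skill is safe (visible-text prompt injection passes untouched), but any hit in a file that claims to be plain prose is worth a close look before installing.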
Laksamana Chengho retweeted
Dark Web Intelligence @DailyDarkWeb
🇲🇾 A threat actor is claiming to sell an alleged database belonging to CIMB Bank Malaysia containing over 2 million records. According to the post, the exposed data allegedly includes:
- Customer names
- Mobile numbers
- Gender
- Dates of birth
- Card-related information
- Banking-related fields
The actor also shared sample screenshots and is attempting to monetize the dataset through underground channels. If verified, exposure of this type of banking data could significantly increase risks related to:
- Financial fraud
- Identity theft
- Social engineering
- SIM swapping
- Targeted phishing campaigns
- Account takeover attempts
Financial institutions should closely monitor for:
- Unusual authentication activity
- Credential stuffing attempts
- Spikes in phishing infrastructure impersonating the bank
- Fraud patterns tied to customer identity data exposure
Customers are advised to:
- Be cautious of unsolicited banking calls or SMS messages
- Monitor account activity regularly
- Enable MFA where available
- Avoid sharing OTP or verification codes
The authenticity of the database has not been independently verified at this time.
#DDW #CyberSecurity #Malaysia #Banking #DataBreach #ThreatIntelligence #DarkWeb #Infosec
5 replies · 44 reposts · 84 likes · 19.7K views
Laksamana Chengho retweeted
V4bel @v4bel
💥 Introducing "Dirty Frag" A universal Linux LPE chaining two vulns in xfrm-ESP and RxRPC. A successor class to Dirty Pipe & Copy Fail. No race, no panic on failure, fully deterministic. ~9 years latent. Ubuntu / RHEL / Fedora / openSUSE / CentOS / AlmaLinux, and more. Even if you've applied the "Copy Fail" mitigation, your Linux is still vulnerable to "Dirty Frag". Apply the Dirty Frag mitigation. Details: dirtyfrag.io
41 replies · 704 reposts · 2.1K likes · 518.1K views
Abang @484nX
@h4x0r_dz yes, even if your report uses the same method mentioned in previous reports that were marked as informative-only for disclosure with no impact, you will be marked as duplicate, even when your report includes a deeper exploitation showing the real impact
1 reply · 0 reposts · 6 likes · 1.4K views
H4x0r.DZ 🇰🇵 @h4x0r_dz
To be secure in 2026 you have to shut down your bug bounty program on HackerOne.

Lovable got hacked because HackerOne's incompetent triage team closed multiple valid vulnerability reports starting February 22, 2026 as "intended behavior." Poorly trained monkeys. Zero escalation to Lovable's security team. AI bots auto-closing critical findings. The result? Public project chat history and source code were exposed for MONTHS until a researcher was forced to go public.

Two companies. Same platform. Same failure. Same lies. ClickUp. Lovable. Both breached because HackerOne buried critical reports while collecting your bounty fees.

HackerOne is NOT a security partner. They are a liability. They close real vulnerabilities. They protect their own metrics over your data. They let researchers get attacked while they stay silent. Stop paying HackerOne to get hacked. lovable.dev/blog/our-respo…
51 replies · 97 reposts · 881 likes · 89.6K views
Laksamana Chengho retweeted
Karan @karankendre
Black Day for Cyber Security Experts
>Sam Altman announced GPT-5.5-Cyber today
>Claude released Claude Security to the public
>Cursor released Cursor Security Review
Cursor@cursor_ai

Cursor Security Review is now available for Teams and Enterprise plans. Run two types of always-on agents: 1. Security Reviewer checks every PR for vulnerabilities and leaves comments. 2. Vulnerability Scanner runs scheduled scans of your codebase and posts findings in Slack.

25 replies · 49 reposts · 476 likes · 51K views
Laksamana Chengho retweeted
ℏεsam @Hesamation
so basically GPT-5.5 is the same level as Mythos Preview on expert cyber tasks: 71.4% vs 68.6% pass rate. it took GPT-5.5 10m 22s and $1.73 to solve the challenge that took a human expert 12 HOURS. I don't know what's happened to OpenAI recently but they're really cookin.
AI Security Institute@AISecurityInst

OpenAI’s GPT-5.5 is the second model to complete one of our multi-step cyber-attack simulations end-to-end 🧵

6 replies · 12 reposts · 116 likes · 7.7K views