Matthew Green 🌻

2.8K posts

@mgreen27

#DFIR and research.

Sydney, Australia · Joined August 2010
1.4K Following · 2.2K Followers
Matthew Green 🌻 retweeted
vogel @ryanvogel
everyone is trying to build async agents that work when they sleep but all they really need are Australians
Matthew Green 🌻 retweeted
YungBinary @YungBinary
New blog! We found an open directory attributed to #MuddyWater Iranian APT and found vulnerabilities/victims they've been targeting, red-team tools, and a loader that deploys a persistent variant of #Tsundere botnet - a MaaS sold by a Russian threat actor that is known for using #EtherHiding to store C2 addresses on the Ethereum blockchain. esentire.com/blog/muddywate…
Matthew Green 🌻 retweeted
Brian Roemmele @BrianRoemmele
The transmission is being sent. I take it seriously.
Matthew Green 🌻 retweeted
Kostas @Kostastsale
Today I'm launching Threat Hunting Labs.

Over the years I've analyzed many real-world intrusions. One thing became obvious: most training platforms don't resemble how investigations actually happen. So I built something different.

Threat Hunting Labs focuses on investigation-driven learning using real telemetry and structured investigative paths. If you want to get better at investigating breaches, you should practice investigating breaches.

More details here: threathuntinglabs.com/blog/introduci…
Matthew Green 🌻 retweeted
Curtis @cybershtuff
🚨 Possible first Iranian wiper activity since the start of the war. Handala (MOIS-linked) claims targeting Stryker Corporation, reportedly pushing a wiper to Intune-managed endpoints. Now, who's got samples for analysis?
Matthew Green 🌻 retweeted
Guri Singh @heygurisingh
Holy shit... Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU. It's called BitNet. And it does what was supposed to be impossible. No GPU. No cloud. No $10K hardware setup. Just your laptop running a 100-billion parameter model at human reading speed.

Here's how it works: every other LLM stores weights in 32-bit or 16-bit floats. BitNet uses 1.58 bits. Weights are ternary: just -1, 0, or +1. That's it. No floats. No expensive matrix math. Pure integer operations your CPU was already built for.

The result:
- 100B model runs on a single CPU at 5-7 tokens/second
- 2.37x to 6.17x faster than llama.cpp on x86
- 82% lower energy consumption on x86 CPUs
- 1.37x to 5.07x speedup on ARM (your MacBook)
- Memory drops 16-32x vs full-precision models

The wildest part: accuracy barely moves. BitNet b1.58 2B4T, their flagship model, was trained on 4 trillion tokens and benchmarks competitively against full-precision models of the same size. The quantization isn't destroying quality; it's just removing the bloat.

What this actually means:
- Run AI completely offline; your data never leaves your machine
- Deploy LLMs on phones, IoT devices, edge hardware
- No more cloud API bills for inference
- AI in regions with no reliable internet

The model supports ARM and x86. Works on your MacBook, your Linux box, your Windows machine. 27.4K GitHub stars. 2.2K forks. Built by Microsoft Research. 100% open source, MIT license.
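The ternary scheme the tweet describes can be sketched in a few lines. Below is a toy Python illustration of the "absmean" quantization from the BitNet b1.58 paper; the function names are mine, not part of Microsoft's actual API, and a real implementation works tensor-wise with packed integer kernels rather than Python lists.

```python
# Toy sketch of BitNet b1.58's "absmean" ternary quantization.
# Function names are illustrative, not Microsoft's API.

def absmean_quantize(weights):
    """Map float weights to {-1, 0, +1} plus one per-tensor scale."""
    # Per-tensor scale: mean absolute value of the weights.
    gamma = sum(abs(w) for w in weights) / len(weights)
    # Scale, round to the nearest integer, clip into the ternary set.
    quantized = [max(-1, min(1, round(w / gamma))) for w in weights]
    return quantized, gamma

def dequantize(quantized, gamma):
    # Reconstruct approximate weights: ternary values times the scale.
    return [q * gamma for q in quantized]

w = [0.42, -0.05, -0.88, 0.31, 0.02, -0.47]
q, gamma = absmean_quantize(w)
print(q)  # q == [1, 0, -1, 1, 0, -1]
```

Because every weight ends up as -1, 0, or +1, a matrix-vector product reduces to additions and subtractions with a single rescale by gamma at the end, which is why inference can stay on plain integer CPU instructions.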
Matthew Green 🌻 retweeted
Mr.Z @zux0x3a
I am releasing a new toolkit I built for IIS-based lateral movement and code execution within the memory of IIS worker-pool processes. Phantom ASPX Loader & PhantomLink: a two-part toolkit for reflectively loading native DLLs into IIS w3wp.exe worker processes via ASPX. github.com/zux0x3a/Phanto…
Matthew Green 🌻 retweeted
Josh Stroschein | The Cyber Yeti
⏪ Dynamic analysis has come a long way. Time-travel debugging (TTD) is a great example: it lets you query execution information instead of relying on break/resume to find what you're looking for!
- Full OS interaction
- Forwards/backwards navigation from the trace
- Scriptable (JS/LINQ)
- That WinDbg UI everyone loves!
🤔 But how can TTD help with .NET malware? Easy: it lets you trace the transition from managed code to unmanaged API calls. We broke down these techniques to unravel .NET process hollowing in a recent blog post: cloud.google.com/blog/topics/th… This is just one example of the content we're bringing to #BlackHatAsia! 🇸🇬 blackhat.com/asia-26/traini… @BlackHatEvents
Matthew Green 🌻 @mgreen27
@HackingDave Looks awesome. AI is lowering barriers to entry so much; so much good stuff can now be developed quickly.
Dave Kennedy @HackingDave
Here's a demo of a project I've been developing for the past 9 months, called NightBeacon. Using it now in production; getting released fully this week.
- Our own internally trained models on our own infrastructure (no third party).
- Trained on our analysts' knowledge and behavior (TPs/FPs retrain the model to be smarter with context).
- Handles emails (including tonality), attachments, and various malicious filetypes (DLL/exe/svg/lnk/etc.). Can send it full evtx exports, packet dumps, zip files, whatever.
- Universal log handler can parse any log from any source: EDR, SIEM, etc.
- Deep-scan / sandbox detonation + shellcode emulation with automatic IOC extraction.
- Automatic playbook generation; full AI-based recommendations custom to the attack.
- Synthetic training data layer: when it trains on a specific attack at a customer, it generates training data based on the customer's data but never includes any of the actual data or information about the customer. No customer information.
- For areas it's weak in, it bubbles them up and automatically kicks off research to become smarter on that topic.
- Supports GenAI-based rulesets (to improve confidence), 900+ YARA rules, full MITRE ATT&CK integration.
- Integrated into our SOAR: enriches data, creates playbooks for analysts; MTTR reduced substantially, false positives reduced, true positives escalated.
- Not using our MDR service? Can integrate into your EDR or SIEM for automatic enrichment and escalation of attacks.
Built to respond faster, more accurately, and intelligently, based on our analysts' intelligence. Stop attackers much, much faster. Coming soon. #BinaryDefense
Matthew Green 🌻 retweeted
Tom Hegel @TomHegel
NEW DROP: A look at using LLMs to turn CTI narratives into structured knowledge graphs, complete with empirical evaluation across GPT-4.1, GPT-5, Claude Sonnet/Opus and more. If you're building or evaluating AI-augmented CTI pipelines, this one's worth your time. Great work from @milenkowski & Razvan Gabriel Cirstea: sentinelone.com/labs/from-narr…
Matthew Green 🌻 retweeted
Yogesh Londhe @suyog41
emp3r0r: a post-exploitation framework for Linux/Windows.
Sample: agent_windows_2023-3-20_15-15-24.exe (hash: 345f176e287b708b4cae6d9e98742e80)
#emp3r0r #GO #IOC
Matthew Green 🌻 retweeted
Andrej Karpathy @karpathy
I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:
- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)

The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc. github.com/karpathy/autor… Part code, part sci-fi, and a pinch of psychosis :)
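The loop the tweet describes (propose a change, run a fixed-budget training, keep the change only if validation loss drops, commit) can be sketched with a stand-in objective. Everything below is illustrative, not the actual repo code: evaluate() fakes a 5-minute training run with a toy loss surface, and propose() stands in for the coding agent editing hyperparameters.

```python
import random

# Toy sketch of a keep-if-better optimization loop: propose a change to the
# "training settings", evaluate it, and keep it only if validation loss
# improved (the real repo commits each kept change to a git feature branch).

random.seed(0)

def evaluate(settings):
    # Stand-in for a full training run returning validation loss.
    # Pretend the optimum is lr=0.01, width=512.
    return (settings["lr"] - 0.01) ** 2 + ((settings["width"] - 512) / 512) ** 2

def propose(settings):
    # Agent stand-in: perturb one hyperparameter at random.
    new = dict(settings)
    if random.random() < 0.5:
        new["lr"] *= random.choice([0.5, 2.0])
    else:
        new["width"] = max(64, new["width"] + random.choice([-64, 64]))
    return new

best = {"lr": 0.08, "width": 256}
best_loss = evaluate(best)
for step in range(200):            # each "step" = one complete training run
    candidate = propose(best)
    loss = evaluate(candidate)
    if loss < best_loss:           # keep only improvements ("commit")
        best, best_loss = candidate, loss

print(f"best settings: {best}, loss {best_loss:.4f}")
```

Swapping the stand-ins for a real training script and an LLM agent that edits it gives the shape of the autonomous loop; the interesting engineering is in the prompt that steers propose().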
Matthew Green 🌻 retweeted
Jamie Levy🦉 @gleeda
🧵 We recently had an incident that involved a MuddyWater hands-on attacker who couldn't spell "administrators" Full timeline breakdown below. 1/
Matthew Green 🌻 @mgreen27
@Steph3nSims As a natural builder I've mostly been positive and have used AI as a huge time saver. I think experience and the ability to tinker make technical cyber people super valuable in the age of AI.
Matthew Green 🌻 retweeted
Stephen Sims @Steph3nSims
I want to share a quick thought for people in cyber security. This will be my longest tweet ever.

I've spoken to many lately who are having an existential crisis from the constant posts about "the end of cybersecurity jobs." Yes, things are changing quickly. This is a significant moment for the tech industry. Change can be uncomfortable. But we've seen cycles like this before.
• When GitHub and open source took off, people said software engineers would disappear because code was free.
• When AWS and cloud computing emerged, people said infrastructure jobs would vanish.
• When fuzzing and SAST tools improved, people said vulnerability research would disappear.
• Virtualization would eliminate infrastructure jobs.
• Mobile computing was going to end desktop dev.
• Exploit mitigations would end exploitability.
It didn't. Each time automation improved, the amount of software grew faster than the automation. It does feel "different" this time, as it's explosive.

Some roles will shrink:
• repetitive pentesting
• basic vulnerability scanning
• tier-1 SOC monitoring

But other areas are expanding rapidly:
• AI system security
• supply chain security
• identity architecture
• autonomous agent security
• critical infrastructure protection

Historically, every time we eliminate one class of bugs, new classes emerge. Right now people are vibe-coding entire systems, giving AI access to their machines, crossing trust boundaries, and deploying autonomous agents with excessive permissions. The legal and regulatory world is nowhere close to ready. There will absolutely be new failure modes. Humans are amazing and always adapt, finding new ways to do things.

The worst thing you can do right now is fall into a doom loop. And I'll be honest: I too have felt the "psychological paralysis" a few times, thinking, "Is this time different?" It's especially impactful when it comes from someone I respect in the community. There are certainly unknowns, in an industry where we've become accustomed to predictability. But the majority of those reactions are usually driven by social media, not reality. Platforms like X reward engagement, and sensational doom posts spread faster than measured thinking. If you see something like "Holy #$%^! Opus 66.6 just found every bug in Chrome and replaced 50 startups!" …mute it and move on.

Instead: stay curious. Learn the new technology. Adapt your skillsets. Build things. We'll get through this transition the same way we always have. If I'm wrong, then Sam Altman better be right about UBI! :)

I'm sure that if this tweet gets any engagement I'll get some heat for it, but a good friend of mine reminds me often to focus on what you have control over. I'll revisit this tweet at DEF CON 40!