
Taylor Hornby 🛡❤️
20.4K posts

Taylor Hornby 🛡❤️
@DefuseSec
Security research (https://t.co/xrmvhFVPtv), EDM (https://t.co/Ynq2DNWQa1), & board member @ Zcash Foundation.
Calgary, Canada Katılım Şubat 2012
1.4K Takip Edilen6.8K Takipçiler
Taylor Hornby 🛡❤️ retweetledi

- Drafted a blog post
- Used an LLM to meticulously improve the argument over 4 hours.
- Wow, feeling great, it’s so convincing!
- Fun idea let’s ask it to argue the opposite.
- LLM demolishes the entire argument and convinces me that the opposite is in fact true.
- lol
The LLMs may elicit an opinion when asked but are extremely competent in arguing almost any direction. This is actually super useful as a tool for forming your own opinions, just make sure to ask different directions and be careful with the sycophancy.
English
Taylor Hornby 🛡❤️ retweetledi
Taylor Hornby 🛡❤️ retweetledi
Taylor Hornby 🛡❤️ retweetledi

Software horror: litellm PyPI supply chain attack.
Simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.
LiteLLM itself has 97 million downloads per month which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwnd. Same for any other large project that depended on litellm.
Afaict the poisoned version was up for only less than ~1 hour. The attack had a bug which led to its discovery - Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker didn't vibe code this attack it could have been undetected for many days or weeks.
Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any depedency you could be pulling in a poisoned package anywhere deep inside its entire depedency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages.
Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've been so growingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Daniel Hnyk@hnykda
LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM pypi release 1.82.8. It has been compromised, it contains litellm_init.pth with base64 encoded instructions to send all the credentials it can find to remote server + self-replicate. link below
English
Taylor Hornby 🛡❤️ retweetledi

@daveaitel All technical debt maturing at once is a nice analogy.
English
Taylor Hornby 🛡❤️ retweetledi
Taylor Hornby 🛡❤️ retweetledi
Taylor Hornby 🛡❤️ retweetledi

We at Symbolic Software just finished running 1,296 tests across 15 ML-KEM and ML-DSA implementations in search of vulnerabilities.
Our results, alongside Crucible, our open-source ML-KEM/ML-DSA testing harness, are documented in today's blog post: symbolic.software/blog/2026-03-2…
English
Taylor Hornby 🛡❤️ retweetledi
Taylor Hornby 🛡❤️ retweetledi
Taylor Hornby 🛡❤️ retweetledi

the real lesson isn't which init system to use - it's that any component deeply embedded in every linux distro becomes a policy enforcement point. today it's age verification in userdb. tomorrow it's whatever the next mandate requires. the architecture of your system determines what the state can demand from it.
English
Taylor Hornby 🛡❤️ retweetledi
Taylor Hornby 🛡❤️ retweetledi

The way I remember it is that Red on the Right wing would be two R’s & easy to remember, so it’s definitely not that.
Untold Secrets@RealGemsfinder
English
Taylor Hornby 🛡❤️ retweetledi

🚨 Hackers are using large language models to scan EVM contracts at scale, finding vulnerabilities in code deployed years ago.
The attack vector: tokens you approved to DeFi contracts 6+ years ago.
When you approved that contract back then, you gave it permanent access to move your tokens. That approval still exists.
If the contract has a vulnerability, attackers drain your tokens without triggering any warning or requiring a new signature.
Hardware wallets protect your private keys, but not against contracts you already authorized.
The fix takes 2 minutes:
- Visit Revoke.cash or @Rabby_io
- Check active approvals
- Revoke unused ones
This is happening right now.
Multiple exploits in the past month alone.
Stay sharp 🫡
deebeez@deeberiroz
A hacker (likely LLM assisted) is exploiting old contracts on Ethereum mainnet that have signature verification logic 🧵
English
Taylor Hornby 🛡❤️ retweetledi

Prompt injection is fundamentally a LangSec problem. Determining what portions of a single input stream are data and what portions are instructions in completely freeform text is a parsing problem, and the inputs here aren’t context free or some other easily parsed language, so the AI inevitably is going to make errors. A permanent fix requires a mechanism to provide strong separation.
Humans don’t have problems with this because we can distinguish different input streams. I might be able to impersonate your boss’s voice, but I can’t convince you that what you’re reading in a book is something that you’re hearing on a telephone from your boss. We can tell from our environment where input is coming from, and so we can separate the streams.
An LLM has only a single linear token input stream, and so the same security problems you get with in-band transmission of commands with data, which we’ve faced over and over in computer science, apply here, with the same bad results.
LangSec is one of the least appreciated developments in computer security, and the issue here is classic LangSec and requires the usual LangSec tools to fix.
(And if you don’t know what LangSec is, ask your robot friend for an explanation.)
By the way, I will note that this is a problem that absolutely could not have been anticipated before people sat down and grappled with AI systems in the real world; it is retrospectively obvious, but so
many things are retrospectively obvious. We didn’t even know that we would have LLM systems doing any of the things that they do now when people say down to first try to building them.
We do not perfect technologies by staring at our navels, we perfect them by building, discovering issues, and repeating.
English
Taylor Hornby 🛡❤️ retweetledi
Taylor Hornby 🛡❤️ retweetledi
Taylor Hornby 🛡❤️ retweetledi

I’m going to keep harping on this. Meta’s push to remove encryption and lobby for age verification laws feel like two sides of a very ominous coin.
Matthew Green@matthew_d_green
Meta appears to be reversing its strong stance on encryption. The first obvious casualty is that they’re abandoning and disabling end-to-end encryption in Instagram DMs.
English
Taylor Hornby 🛡❤️ retweetledi







