Taylor Hornby 🛡❤️

20.4K posts

Taylor Hornby 🛡❤️ banner
Taylor Hornby 🛡❤️

Taylor Hornby 🛡❤️

@DefuseSec

Security research (https://t.co/xrmvhFVPtv), EDM (https://t.co/Ynq2DNWQa1), & board member @ Zcash Foundation.

Calgary, Canada Katılım Şubat 2012
1.4K Takip Edilen6.8K Takipçiler
Taylor Hornby 🛡❤️ retweetledi
Andrej Karpathy
Andrej Karpathy@karpathy·
- Drafted a blog post - Used an LLM to meticulously improve the argument over 4 hours. - Wow, feeling great, it’s so convincing! - Fun idea let’s ask it to argue the opposite. - LLM demolishes the entire argument and convinces me that the opposite is in fact true. - lol The LLMs may elicit an opinion when asked but are extremely competent in arguing almost any direction. This is actually super useful as a tool for forming your own opinions, just make sure to ask different directions and be careful with the sycophancy.
English
1.2K
1.6K
20.9K
1.5M
Taylor Hornby 🛡❤️ retweetledi
Juliano Rizzo
Juliano Rizzo@julianor·
Treat your development environment as disposable: sandbox it, and keep secrets out. Between supply chain attacks and agents, you have to assume compromise. Ironically, despite having the most powerful hardware ever, it is safer to use it as a dumb terminal.
English
0
2
4
652
Taylor Hornby 🛡❤️ retweetledi
Trail of Bits
Trail of Bits@trailofbits·
93% recall vs 50% for baseline prompts. Our new dimensional-analysis plugin for Claude Code doesn't ask it to find bugs. It annotates your codebase with dimensional types, then flags mismatches mechanically. 🧵
English
4
19
158
40.7K
Taylor Hornby 🛡❤️ retweetledi
Andrej Karpathy
Andrej Karpathy@karpathy·
Software horror: litellm PyPI supply chain attack. Simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords. LiteLLM itself has 97 million downloads per month which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwnd. Same for any other large project that depended on litellm. Afaict the poisoned version was up for only less than ~1 hour. The attack had a bug which led to its discovery - Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker didn't vibe code this attack it could have been undetected for many days or weeks. Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any depedency you could be pulling in a poisoned package anywhere deep inside its entire depedency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages. Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've been so growingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Daniel Hnyk@hnykda

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM pypi release 1.82.8. It has been compromised, it contains litellm_init.pth with base64 encoded instructions to send all the credentials it can find to remote server + self-replicate. link below

English
1.3K
5.4K
28K
65.8M
Taylor Hornby 🛡❤️ retweetledi
Halvar Flake
Halvar Flake@halvarflake·
@daveaitel All technical debt maturing at once is a nice analogy.
English
3
4
57
4.5K
Taylor Hornby 🛡❤️ retweetledi
Dave Aitel
Dave Aitel@daveaitel·
Fwiw the problem was never that AI slop was going to overwhelm security teams: the problem was that having their hidden technical debt all called in at once was going to overwhelm them. Chrome having as many bugs as it still does is the perfect case example.
English
9
32
177
15.3K
Taylor Hornby 🛡❤️ retweetledi
Science girl
Science girl@sciencegirl·
Artist Keisuke Teshima paints the body of a dragon in a single stroke, a traditional technique called called ippitsuryu
English
111
978
14.1K
1.4M
Taylor Hornby 🛡❤️ retweetledi
Nadim Kobeissi
Nadim Kobeissi@kaepora·
We at Symbolic Software just finished running 1,296 tests across 15 ML-KEM and ML-DSA implementations in search of vulnerabilities. Our results, alongside Crucible, our open-source ML-KEM/ML-DSA testing harness, are documented in today's blog post: symbolic.software/blog/2026-03-2…
English
4
12
44
2.9K
Taylor Hornby 🛡❤️ retweetledi
DarkFi Squad
DarkFi Squad@DarkFiSquad·
Anonymity allows you to donate to a cause before it becomes popular. Before it's safe. When it actually matters.
English
2
9
75
1.7K
Taylor Hornby 🛡❤️ retweetledi
Zchat | Shielded messenger
Zchat | Shielded messenger@zchat_app·
the real lesson isn't which init system to use - it's that any component deeply embedded in every linux distro becomes a policy enforcement point. today it's age verification in userdb. tomorrow it's whatever the next mandate requires. the architecture of your system determines what the state can demand from it.
English
2
5
30
2.6K
Taylor Hornby 🛡❤️ retweetledi
Matthew Green
Matthew Green@matthew_d_green·
A lot of people think the solution to “private AIs” is to just TEEs. This is already the approach being deployed by Meta, Apple and Google. I think that’s important, but not really a solution. The problem is that for agentic AI, agents need to interact with the real world.
English
18
15
100
11.3K
Taylor Hornby 🛡❤️ retweetledi
Keystone Hardware Wallet
Keystone Hardware Wallet@KeystoneWallet·
🚨 Hackers are using large language models to scan EVM contracts at scale, finding vulnerabilities in code deployed years ago. The attack vector: tokens you approved to DeFi contracts 6+ years ago. When you approved that contract back then, you gave it permanent access to move your tokens. That approval still exists. If the contract has a vulnerability, attackers drain your tokens without triggering any warning or requiring a new signature. Hardware wallets protect your private keys, but not against contracts you already authorized. The fix takes 2 minutes: - Visit Revoke.cash or @Rabby_io - Check active approvals - Revoke unused ones This is happening right now. Multiple exploits in the past month alone. Stay sharp 🫡
deebeez@deeberiroz

A hacker (likely LLM assisted) is exploiting old contracts on Ethereum mainnet that have signature verification logic 🧵

English
11
73
313
66.4K
Taylor Hornby 🛡❤️ retweetledi
Perry E. Metzger
Perry E. Metzger@perrymetzger·
Prompt injection is fundamentally a LangSec problem. Determining what portions of a single input stream are data and what portions are instructions in completely freeform text is a parsing problem, and the inputs here aren’t context free or some other easily parsed language, so the AI inevitably is going to make errors. A permanent fix requires a mechanism to provide strong separation. Humans don’t have problems with this because we can distinguish different input streams. I might be able to impersonate your boss’s voice, but I can’t convince you that what you’re reading in a book is something that you’re hearing on a telephone from your boss. We can tell from our environment where input is coming from, and so we can separate the streams. An LLM has only a single linear token input stream, and so the same security problems you get with in-band transmission of commands with data, which we’ve faced over and over in computer science, apply here, with the same bad results. LangSec is one of the least appreciated developments in computer security, and the issue here is classic LangSec and requires the usual LangSec tools to fix. (And if you don’t know what LangSec is, ask your robot friend for an explanation.) By the way, I will note that this is a problem that absolutely could not have been anticipated before people sat down and grappled with AI systems in the real world; it is retrospectively obvious, but so many things are retrospectively obvious. We didn’t even know that we would have LLM systems doing any of the things that they do now when people say down to first try to building them. We do not perfect technologies by staring at our navels, we perfect them by building, discovering issues, and repeating.
English
11
9
62
3.9K
Taylor Hornby 🛡❤️ retweetledi
daine
daine@notdaine·
most underrated music video of 2026 i really can’t get over how sick it is
English
19
127
1.4K
42.2K
Taylor Hornby 🛡❤️ retweetledi
samczsun
samczsun@samczsun·
crypto will have truly matured when we can stop using telegram
English
38
15
305
25.2K
Taylor Hornby 🛡❤️ retweetledi
DarkFi Squad
DarkFi Squad@DarkFiSquad·
Intimacy requires privacy. Always has.
English
9
9
68
2K