Antonio Viggiano
@aviggiano

security @monad

4.5K posts · Joined June 2017 · 1.3K Following · 2.8K Followers

Pinned Tweet
Antonio Viggiano @aviggiano:
I’m happy to share that I’ve joined the security team at the Monad Foundation (@monad)! I’ve been really impressed by the team, their technical depth, and their ability to execute, and am excited to contribute to making the ecosystem more secure.
[image]
nisedo @nisedo_:
I vibe-clauded a fully functional Medusa harness for a 10k+ LOC Solidity codebase. What a time to be alive.
Antonio Viggiano @aviggiano · replying to @lonelysloth_sec:
> 6.b. Force it to PoC and retry repeatedly, enforcing success conditions. This turns it into a (very expensive and slow) fuzzer!

I think this can be a very effective methodology, and quite profitable, as long as you pick the right target: just loop while EV(target) > LLM cost.
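The "loop while EV(target) > LLM cost" rule can be sketched as a tiny expected-value check. Everything here is illustrative: the function names and the bounty/hit-rate/cost numbers are assumptions for the sketch, not figures from any real engagement.

```python
# Hypothetical sketch of the decision rule "just loop while EV(target) > LLM cost".
# All names and numbers below are illustrative assumptions.

def expected_value(bounty_usd: float, hit_rate: float) -> float:
    """Expected payout of one more LLM-driven attempt on a target."""
    return bounty_usd * hit_rate

def should_keep_looping(bounty_usd: float, hit_rate: float,
                        cost_per_attempt_usd: float) -> bool:
    """Keep spending tokens only while the expected payout exceeds the cost."""
    return expected_value(bounty_usd, hit_rate) > cost_per_attempt_usd

# e.g. a $50k bounty, a 0.1% success chance per attempt, $20 of tokens per attempt:
# EV = 50_000 * 0.001 = $50 > $20, so another attempt is still worth it.
```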
LonelySloth @lonelysloth_sec:
After many tests around LLM use in bug hunting, and taking into consideration all my experience and study of AI in the past few months, I arrived at some conclusions, and I'll make some predictions:

1. Every new model will be followed by a wave of new bug findings in a short time that will get people very excited, followed by a period of very few findings.
2. Those waves will get smaller and smaller, until basically there's no improvement.
3. The reason isn't that the code is becoming bug-free -- it's that the percentage of bugs that **can be found** by LLMs is quite small.

Why?

1. The model has no idea how the code works -- you can catch it making ridiculous statements about the code all the time.
2. It has no idea how the EVM works either -- it misrepresents basic facts about the EVM all the time.
3. The way it finds bugs is basically hallucinating credible-sounding exploits. If there is a bug and it is typical enough, sometimes the hallucination matches reality.
4. Even very easy, very typical bugs can be missed if slightly obscured.
5. Matching the actual threat model is hard, so the severity is basically a random guess most of the time.
6. You can improve all of the above in two ways:
   6.a. Write extensive prompts/skills telling it exactly what it should look for. You just turned the supposedly generic auditor into a (very expensive and slow) static analyzer!
   6.b. Force it to PoC and retry repeatedly, enforcing success conditions. This turns it into a (very expensive and slow) fuzzer!
   You can combine both for better results.
7. It's useful, but it is still just a static analyzer + fuzzer -- an incremental expansion on the existing state-of-the-art tooling. When you don't know what tools to use, or don't have time to find out, it will be very useful -- and that's maybe a lot of value -- but it doesn't change the nature of what's going on.
8. People telling you it's doing what an auditor does, replaces humans, yadda yadda yadda -- they are either clueless, deluded, or deliberately misleading.
9. BTW, humans hunting for bugs don't just look for known bug patterns -- the known bug patterns are compiled from findings by humans **who actually understood how the code works** and found the bug without anyone telling them what they should be looking for. That's the "research" part in Security Researcher.
10. Most of the known patterns were discovered independently by multiple SRs, sometimes years before becoming public knowledge. Sometimes it becomes public knowledge after a black hat discovers it and steals millions (you probably don't want to be the target of that research!).
11. Any human or machine that just keeps trying to match known patterns against codebases will miss **A LOT** of bugs.
12. Finding bugs is crazy hard. Writing bug-free code is even harder. There is no silver bullet. AI isn't magical, nor is it "automating human cognition".
13. Life is always unfair. More so in a bear market.
14. If you think someone will hand you a solution on X so you can find bugs easily, OR so you don't have to spend a lot of effort/money on securing your code... well, things are not gonna work great for you.
V @kxrd36:
I found a chain-halting bug on a network carrying $120M in stablecoins and $1.85B in market value. Anyway, good morning! @HackenProof #HackenProof
[image]
Antonio Viggiano @aviggiano:
I’ve been told this book is good
[image]
Antonio Viggiano @aviggiano · replying to @33audits:
I don’t think so. Even at $1k/mo, audit speed is a valuable differentiator
Lee | 33Audits @33audits:
Wait... what happens to our AI audit agents when Claude raises the max price from $200/mo to $1,000/mo? Do we start shilling manual reviews again?