BugBlow

139 posts

BugBlow banner
BugBlow

BugBlow

@BugBlow

Protecting DeFi and Web3 with indispensable cybersecurity skills. Conducting security audits.

Katılım Temmuz 2024
95 Takip Edilen262 Takipçiler
BugBlow
BugBlow@BugBlow·
@CertiKAlert Security will always be the foundation of long-term crypto adoption, we gotta always keep to that
English
1
0
1
13
CertiK Alert
CertiK Alert@CertiKAlert·
#CertiKInsight 🚨 We previously highlighted that France had the highest number of wrench attacks in 2025 with a total of 19 attacks. Since the beginning of the year, France has recorded at least 13 wrench attacks, making it the country most affected by this type of incident.
CertiK Alert tweet media
English
17
7
21
3.1K
BugBlow
BugBlow@BugBlow·
Some auditors think the idea is to find as many bugs as you can. Wrong. The idea is to make it so that there are no bugs after your audit
English
2
0
2
27
WEB4GREAT
WEB4GREAT@web4great·
Quick question: What’s more important for a new Web3 project? Marketing or Security? Also personally , I think platforms like @BugBlow are solving one of the biggest problems in the space fr
English
5
0
20
351
BugBlow
BugBlow@BugBlow·
We're giving AI coding agents full shell access to our machines. Cursor, Claude Code, Copilot - they can read, write, and execute anything. Yes, they ask before running commands. But the permissions are already there. The confirmation is UI, not security. Now imagine the company behind one of these tools gets breached. What runs on your machine next?
English
0
0
4
124
BugBlow
BugBlow@BugBlow·
@0xKaden Wrong, the correct prompt is "make no mistake"
English
0
0
2
74
kaden.eth
kaden.eth@0xKaden·
i've learned the secret to ai auditing just end every prompt with, "don't stop until you find a critical vulnerability"
English
8
1
72
4.3K
BugBlow
BugBlow@BugBlow·
Here's my offer: we take a random smart contract. Your agent audits it, then my team does. We commit hashes of our reports, then reveal the reports. If my team finds fewer or an equal number of vulnerabilities (except 0), I publicly acknowledge that your agent is better. And pay you a $100.
archethect 🏴@archethect

Last week I sat down with a senior smart contract auditor from a top-tier security firm and tested my AI agent plugin on contracts his team had professionally audited. The AI independently flagged the same HIGH severity vulnerability that the audit team found. No hints. No context about previous findings. Just raw contract code. Here's what I built 🧵

English
1
0
6
174
Louis Nyffenegger
Louis Nyffenegger@snyff·
AppSec is over! Just tell your developers to add "Make no mistakes" to their prompt 💥🤯
English
14
18
153
10K
BugBlow
BugBlow@BugBlow·
@0x3b33 Any data to back it up? :)
English
1
0
1
40
Pyro
Pyro@0x3b33·
Most people think hacks are done by malicious actors, which is true, but mostly for the big ones. Small hacks are more commonly done by MEV bots who find exploitative ways to interact with the code and drain it themselves, without any human interaction from their end.
English
2
0
20
1.3K
BugBlow
BugBlow@BugBlow·
@derikwebx I stare at code until it confesses it has a bug
English
1
0
1
25
Derik | Dev
Derik | Dev@derikwebx·
What do you do in web3 ? I’m a builder You ?
English
59
2
106
2.7K
BugBlow
BugBlow@BugBlow·
@summeryue0 You shouldn't trust an LLM with your personal data.
English
0
0
1
44
Summer Yue
Summer Yue@summeryue0·
Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.
Summer Yue tweet mediaSummer Yue tweet mediaSummer Yue tweet media
English
2.4K
1.7K
17.5K
10M
BugBlow
BugBlow@BugBlow·
@alexhooketh I know a guy who replies similarly. His project gets drained like 3 times a year
English
0
0
6
826
Alex Hook
Alex Hook@alexhooketh·
AI auditors are coming
Alex Hook tweet media
Català
25
4
146
31K
BugBlow
BugBlow@BugBlow·
@darkp0rt @Ehsan1579 I think the point here is that devs will be using AI to push a more secure code, where AI won't be able to find bugs, but people will
English
0
0
1
37
Paul Price
Paul Price@darkp0rt·
@Ehsan1579 “Will it find bugs that can cause multi-million dollar losses, very unlikely” You realize this has already happened? Multiple times.
English
3
0
2
1K
Ehsan
Ehsan@Ehsan1579·
It honestly baffles me how naive some people are in this field. Cybersecurity is massive, it’s not one lane. You’ve got web2, web3, reverse engineering, appsec, infra, hardware, and a dozen other deep specializations. The idea that one generic tool can be “specialized” to cover every level of security research just screams inexperience. If you’ve spent any real time doing this work, consistently finding real issues, you’d know better. A lot of people think I’m against the AI in general for auditing that’s not true, I have high hopes for it, but no this tool isn’t going to revolutionize an industry this complex, will it find regular bugs on a day to day basis in a codebase, yes, will it find bugs that can cause multi-million dollar losses, very unlikely. Be for real now. And let me be clear, this tool will specifically be absolutely terrible in web3 security research lmao. Smart contracts auditing is extremely complex because the types of vulnerabilities in solidity are completely different.
Ehsan@Ehsan1579

If you seriously think this is gonna find some live complex vulnerability, you’re tripping lmao.

English
12
9
173
22.7K
BEEBRAIN
BEEBRAIN@beebrain123·
I want to change my circle. I’m looking for Web3 people who are: • Friendly • Smart • Funny • Serious about growth • Hungry for results You can be new. That’s fine. If you’re building and thinking long term, let’s connect.
English
274
16
483
11.5K
BugBlow
BugBlow@BugBlow·
More and more people are integrating LLMs into their apps. The other day, some friends asked me to audit their LLM integration. A marketplace where every product must be unique. To catch duplicates, they plugged in an LLM as a validator. The LLM became an attack surface.🧵
BugBlow tweet media
English
4
4
13
296
BugBlow
BugBlow@BugBlow·
If you have an LLM built into your app, chances are, you might already be vulnerable. By the way, we built a playground where you can test everything yourself - run the attacks, tweak prompts, see what breaks. github.com/BugBlow/case-s… Full article: bugblow.com/blog/llm-promp…
English
0
0
1
68
BugBlow
BugBlow@BugBlow·
The fix has to be architectural. 1. User input is untrusted. Always. 2. Don't mix it with system instructions in the same context. 3. LLM output is a recommendation, not a decision. Verify externally. 4. Every tool available to the LLM is a tool available to the attacker
English
1
0
2
73