
gh0xt
@Taridoku
Web3 Security Researcher | Hacking solidity | Intern @burraSec GitHub: https://t.co/aLTo0sY3UD

It seems a @tradingprotocol vault (YieldCore-3rd-deal) was exploited for a ~$398k loss. The withdrawal path is missing a caller-authorization check, which the attacker used to drain all funds from the vault. Here is the related tx: etherscan.io/tx/0x6b04344d5…
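A minimal sketch of this bug class, in Python rather than Solidity for brevity. All names here (Vault, withdraw_vulnerable, withdraw_fixed) are hypothetical and are not taken from the actual @tradingprotocol contract; the point is only the missing caller check.

```python
# Illustration of the bug class: a vault whose withdraw path never
# checks that the caller is the vault owner.

class Vault:
    def __init__(self, owner: str, balance: int):
        self.owner = owner
        self.balance = balance

    def withdraw_vulnerable(self, caller: str, amount: int) -> int:
        # BUG: no check that caller == self.owner, so anyone can drain funds.
        assert amount <= self.balance, "insufficient funds"
        self.balance -= amount
        return amount

    def withdraw_fixed(self, caller: str, amount: int) -> int:
        # FIX: authorize the caller before moving funds.
        if caller != self.owner:
            raise PermissionError("caller is not the vault owner")
        assert amount <= self.balance, "insufficient funds"
        self.balance -= amount
        return amount


vault = Vault(owner="alice", balance=398_000)
stolen = vault.withdraw_vulnerable("attacker", 398_000)  # drains the vault
print(stolen, vault.balance)  # 398000 0
```

In Solidity terms, the fix is the usual `onlyOwner`-style modifier (or an equivalent access-control check) on the withdrawal function.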

One of the toughest months Web3 has faced.

April 2026:
• 30+ security incidents
• ~$630m drained

This chart shows the hacked projects, estimated losses, and the cause behind each incident.

@adeolRxxxx @AftermathFi Secondly, even for those that do have bounties, bad experiences on bounties keep scaring everyone off; plus, because of AI slop, the barrier to entry has been raised. So not a lot of people are looking.

Most security firms are quietly moving away from audit competitions. This is one of the biggest mistakes happening in crypto security right now.

There is a simple way to think about audit value: what does it cost to find a critical vulnerability? We looked at the actual data on what it costs to find critical bugs in crypto, and the numbers are not surprising. Finding a critical vulnerability in an audit competition costs $6,548 on average. The same severity of bug through a bug bounty program costs $114,000. That is roughly 17x more expensive for the same result.

Now look at the traditional audit model. Some top firms charge $100 per line of code. Others charge as much as $25,000 per auditor per week. A single engagement can easily run $200k to $500k+, and you are getting maybe 2 to 4 people looking at your code.

But cost per critical is not even the most interesting part. The interesting part is the structure of who is looking at your code. When you hire a firm, you get 2 to 4 auditors. Maybe they are great. Maybe one of them is having a bad week. You are making a concentrated bet on a small number of people.

An audit competition attracts hundreds of security researchers. These are some of the best hackers, people who have found real vulnerabilities in major protocols. These hundreds of researchers are now armed with AI tools. They understand codebases faster. They write PoCs faster. They find bugs that would have taken DAYS in just hours.

Think about what that means. You are not just getting hundreds of humans. You are getting hundreds of AI-augmented humans, each running their own workflow, each with their own intuition about where bugs hide. The scaling dynamics are extraordinary.

The firms moving away from competitions are optimizing for predictable revenue, not for their clients' best outcomes. That is understandable from a business perspective. But if you are a project choosing where to spend your security budget, you should optimize for bugs found per dollar spent.

Audit competitions now also have scaling pots: the prize pool grows with the scope of the codebase. This aligns incentives in a way that fixed-fee engagements never can.

But what about AI spam, low-quality submissions, and the time it takes to triage them all? Immunefi is addressing these with mechanisms like pay-to-submit, managed triage, and AI triaging agents, which are already showing very strong promise.

The best security strategy is not either/or. But if you have a limited budget and you want the most eyes, the most diverse skill sets, and the best cost-per-finding ratio, audit competitions are still the obvious choice.
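A quick sanity check on the numbers in this thread. The per-critical figures are the ones quoted above; the codebase size and engagement length are illustrative assumptions, not a pricing model.

```python
# Sanity-check the cost figures quoted in the thread.
competition_cost_per_critical = 6_548    # avg cost per critical, audit competition
bounty_cost_per_critical = 114_000       # avg cost per critical, bug bounty

ratio = bounty_cost_per_critical / competition_cost_per_critical
print(f"bounty / competition: {ratio:.1f}x")  # bounty / competition: 17.4x

# Rough traditional-audit costs, using the rates from the thread.
loc = 4_000                              # hypothetical codebase size
per_loc_rate = 100                       # $100 per line of code
print(f"per-LOC pricing: ${loc * per_loc_rate:,}")  # per-LOC pricing: $400,000

auditors, weeks, weekly_rate = 4, 3, 25_000  # hypothetical engagement shape
print(f"per-week pricing: ${auditors * weeks * weekly_rate:,}")  # per-week pricing: $300,000
```

Both traditional-model estimates land inside the $200k to $500k+ range the thread describes, and the bounty-to-competition ratio works out to about 17x.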
