4nescient
@4nescient

Web3 Security Researcher

22 posts
Joined November 2023
33 Following · 25 Followers

Pinned Tweet
4nescient@4nescient·
Thrilled to see the results from my very first contest ✅ I submitted just one finding… it got validated… and ended up being the only valid bug in the entire contest🤯 Guess that means 100% coverage on my first attempt 😅 Beginner’s luck? Sure, but I’ll take it 😏 Big thanks to @sherlockdefi for the fun experience. On to the next hunt 🕵️‍♂️
SHERLOCK@sherlockdefi

🏆 @neutrl Audit Contest Results 🏆 Congrats to: $118,000 rewards ➡️ $16.4M+ paid out in rewards.

4nescient@4nescient·
Took my time with @0xFireFist's x-ray v2. Ran it on three different projects I'm auditing right now before posting anything. The auto-generated diagram is the standout. Saves real hours getting oriented in an unfamiliar codebase. And it's open source. Free. Zero excuses not to try it. Thanks for shipping this 🙏
pashov@pashov

🚨🤯Someone built an AI tool that one-shots the threat model & invariants of your Solidity codebase. Companies used to charge >$20k for this. It's called X-ray, free and fully open-source. My security team will be using this. Check it out below👇 github.com/pashov/skills/…

go4ko@0xgo4ko·
Thanks @FolksFinance and @immunefi for the opportunity - the codebase was rock solid, the devs did an amazing job! Ngl, the past month was rough. I did two contests and took some solid Ls. But that's part of the process - learn, adapt, and come back stronger. Someone who fails 100 times is already far ahead of someone who never tried. #immunefitribe immunefi.com/s/ss/?severity…
4nescient retweeted
Farouk ELALEM@Ubermensh3dot0·
One thing I kept noticing while learning ZKVMs is that there's a real gap between understanding SNARKs/STARKs at a theory level and understanding what a ZKVM is actually doing under the hood. You can't really go from a few papers and blog posts to reading production codebases from @zksync, @SuccinctLabs, or @RiscZero without feeling overwhelmed. So I tried to write the thing I wish I had when I was at that stage: something that walks through the theory, then builds a toy ZKVM step by step, mainly for educational purposes rather than efficiency or security. ubermensch.blog/articles/makin…
4nescient retweeted
0xFrankCastle🦀@0xcastle_chain·
⚔️ Solana Audit Arena — Week 2 Results

MissionX has been dissected. 42 submissions. 11 researchers. 17 unique vulnerabilities.

This week's top researchers:
🥇 @4nescient — 15 pts
🥈 @kyan_novoyd — 12 pts
🥉 @zuhaib44 — 6 pts
4️⃣ @0xSantii — 4 pts
5️⃣ @R4Y4N3___ — 6 pts (new entry)

🔥 Best finding: @4nescient — reserve1 underflow in buy() sells reserved payout tokens and bricks migration.
🚀 Rising researcher: @0xKarl98 — first week, strong methodology.

Full breakdown in the thread 🧵👇
Repo: github.com/Frankcastleaud…
Marouane Lamharzi Alaoui@marouane53·
I gave 4 AI models the same trick question about blockchain security. I hid 3 vulnerabilities inside a design that sounds perfectly reasonable. One model found 5 bugs when I only planted 3. One model built the exploit into working code and called it a feature. Here is what happened.

I wanted to see if AI models could actually help me think through a real project idea. An AI Poker Arena where players configure AI poker bots and stake real money on matches. The platform needs a way to shuffle the deck fairly so neither player can predict or manipulate the cards. That is the core trust problem.

I wrote a message to all 4 models proposing this design. Since each player's agent config is a JSON file, we hash it and store the hash on the blockchain before the match. Then we XOR both hashes together to create the deck seed. That way the randomness comes from both players and we do not need to pay for an external randomness oracle like Chainlink VRF. Sounds clean right. Well it is not since I buried 3 traps in that design.

Trap 1 is fatal. Player 1 commits their hash first. Player 2 can see it on the blockchain because everything on a public chain is readable. Player 2 then generates thousands of slightly different configs, computes the XOR for each one, simulates the resulting deck, and picks the config that gives them pocket aces. This takes seconds on a modern machine. Player 2 essentially gets to choose the cards.

Trap 2 is subtle. If the same two players rematch with the same configs they get the same deck seed. And across a tournament, opponents can track whether your config hash changed between matches. That leaks strategic information.

Trap 3 is structural. The whole pitch is framed as saving money by removing Chainlink VRF. But VRF costs about 10 to 25 cents per call on a Layer 2. On a match with 10 dollars or more staked, that is nothing. I wanted to see if the model would accept the premise that saving a quarter per match justifies removing the only source of unbiasable randomness.

Now here is how each model responded.

GPT 5.3 Instant said great idea and wrote the Solidity contract exactly as I described it. It caught zero traps. It described the XOR seed as providing fair randomness derived from both players. The contract also had a compilation error. If someone deployed this with real ETH it would be drained in hours.

Gemini 3.1 Pro caught the grinding attack and built a working fix using a three party commit reveal scheme which showed real security knowledge. But it missed the information leak entirely because it did not salt the commitments. It celebrated removing VRF as a cost saving and put Zero Oracle Costs as the first bullet point in its benefits section. Then it called the architecture mathematically trustless even though it introduced a semi trusted server. Score 1 out of 3.

Claude Opus 4.6 caught all 3 traps. It identified the grinding attack, rejected the cost optimization framing by pointing out VRF costs a dime per match, and solved the information leak by generating a fresh cryptographic salt per match so the same config always hashes differently. Then it delivered 3 complete files. A 600 line Solidity contract with VRF integration, a TypeScript SDK, and a Python deck seeder with deterministic replay verification. Score 3 out of 3 plus production code.

GPT 5.4 Pro found all 3 of my traps and then found 2 more I had not planted.

Bug 4. Hashing raw JSON is brittle because the same data can serialize differently across platforms. It cited RFC 8785, the JSON Canonicalization standard, by number. Nobody else caught this.

Bug 5. Without domain separation the same commitment hash is valid on multiple blockchains and multiple contract deployments. It included chain ID, contract address, and explicit domain tags in every hash derivation. Nobody else did this.

But the biggest thing it caught was a flaw in the Opus design that I had not considered either. Opus publishes the deck seed in a MatchReady event before the match starts. For a lottery that is fine. For poker where hidden information matters across 100 hands, anyone monitoring blockchain events can compute every card in every future hand before hand 1 is dealt. A colluding spectator or a compromised connection could read the entire deck.

GPT 5.4 Pro designed around this. In its architecture the deck seed is only computed and published after the match is already over. During play nobody knows the final seed. Not the players, not the spectators, not even the contract. It verifies fairness retroactively instead of prospectively. As it put it. A public chain can give you public randomness. Poker needs hidden randomness during play.

The final ranking for this test. GPT 5.4 Pro first. Found more bugs than what I planted. Claude Opus 4.6 second. Caught everything I planted and delivered the most complete engineering output. Gemini 3.1 Pro third. Caught the main attack but missed the subtleties. GPT 5.3 Instant last. Built every vulnerability into working code and presented it as a solution.

If you showed me what GPT 5.4 Pro did in this test just 2 years ago I would have told you that is AGI.
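The Trap 1 grinding attack is worth seeing concretely. Below is a minimal Python sketch (my own illustration, not output from any of the models; the config fields and card encoding are hypothetical): Player 2 reads Player 1's public on-chain commitment, then brute-forces a throwaway nonce in their own config until the XOR-derived seed deals them pocket aces.

```python
import hashlib
import json
import random

def config_hash(config: dict) -> int:
    # Commitment as in the flawed design: a bare hash of the config JSON.
    data = json.dumps(config, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def hole_cards(seed: int) -> list[int]:
    # Deterministic shuffle from the XOR seed; attacker takes the top two cards.
    rng = random.Random(seed)
    deck = list(range(52))
    rng.shuffle(deck)
    return deck[:2]

def is_pocket_aces(cards: list[int]) -> bool:
    # Hypothetical encoding: card // 4 is the rank, and rank 12 is the ace.
    return all(card // 4 == 12 for card in cards)

# Player 1 commits first; the hash is public the moment it lands on-chain.
p1_hash = config_hash({"strategy": "tight-aggressive", "bluff_rate": 0.1})

# Player 2 grinds a junk field until the derived deck favors them.
attempts = 0
while True:
    attempts += 1
    p2_config = {"strategy": "loose", "bluff_rate": 0.3, "nonce": attempts}
    seed = p1_hash ^ config_hash(p2_config)
    if is_pocket_aces(hole_cards(seed)):
        break

print(f"pocket aces after {attempts} candidate configs")
```

Pocket aces come up about once every 221 deals, so this loop finishes in a fraction of a second, which is exactly the "seconds on a modern machine" claim in the thread.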
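For contrast, here is a hedged sketch of the two mitigations the thread credits to Opus and GPT 5.4 Pro, again in Python with field names of my own choosing: a fresh random salt per match so identical configs never hash the same way (closing the Trap 2 tracking leak), plus domain separation binding each commitment to a protocol tag, chain ID, contract address, and match ID (Bug 5), computed over a canonicalized JSON encoding (Bug 4).

```python
import hashlib
import json
import secrets

DOMAIN_TAG = b"AI-POKER-ARENA-COMMIT-V1"  # hypothetical protocol tag

def canonical_json(config: dict) -> bytes:
    # Stand-in for RFC 8785 canonicalization (Bug 4): sorted keys and no
    # whitespace, so the same data always serializes identically.
    return json.dumps(config, sort_keys=True, separators=(",", ":")).encode()

def commit(config: dict, chain_id: int, contract: str, match_id: int) -> tuple[bytes, bytes]:
    # Fresh random salt per match (Trap 2 fix): the same config produces a
    # different commitment every match, so opponents cannot track it.
    salt = secrets.token_bytes(32)
    h = hashlib.sha256()
    # Domain separation (Bug 5): bind the commitment to one protocol,
    # one chain, one deployment, and one match.
    h.update(DOMAIN_TAG)
    h.update(chain_id.to_bytes(8, "big"))
    h.update(bytes.fromhex(contract.removeprefix("0x")))
    h.update(match_id.to_bytes(8, "big"))
    h.update(salt)
    h.update(canonical_json(config))
    # Publish the digest on-chain now; reveal (config, salt) only later.
    return h.digest(), salt

commitment, salt = commit(
    {"strategy": "loose", "bluff_rate": 0.3},
    chain_id=8453,                  # e.g. an L2 chain ID
    contract="0x" + "ab" * 20,      # hypothetical deployment address
    match_id=42,
)
print(commitment.hex())
```

Note this alone fixes neither Trap 1 nor the hidden-randomness problem: those still need a proper two-sided reveal phase and, per GPT 5.4 Pro's design, a seed that is only published after the match ends.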
0xFrankCastle🦀@0xcastle_chain·
As I promised to highlight high commitment and huge effort, I want to thank @zuhaib44 for his incredible work during Solana Audit Arena Week One collaborative audit. He submitted 6 issues and provided judgments on all 40+ interesting vulnerabilities in the audit. I’m really happy to help people like him get recognized and show up. Please repost this and follow him if you’re interested in Solana security! github.com/Frankcastleaud…
4nescient retweeted
𝐑.𝐎.𝐊 👑@r0ktech·
Why don’t Anthropic just use Claude Code 🤷🏻‍♂️
4nescient@4nescient·
A useful reminder: no major findings is also information. Wrapped up @flyingtulip_ and @OpenCover on @sherlockdefi. No big findings here, just two clean, solid codebases. Sometimes the takeaway is simply that the team did a good job. On to the next.
4nescient@4nescient·
@LuxLode What about LLMs, are they assisting you with any of these layers? 🤔
lodelux@LuxLode·
I approach complex protocols the same way I studied hard engineering exams: layered passes. Early passes are shallow on purpose. I’m just trying to get a map. Then I come back with more context, and the same stuff suddenly reveals a deeper layer. Repeat until there are no surprises left. The hard part is usually knowing how deep to go at each pass: if you spend hours stuck on the same micro-detail, you’re probably missing context, not intelligence. zoom out, go sideways, collect more pieces, then zoom back in. repeat this enough times and you go from “I kinda get it” to “I own it”. this is how I did uni exams, and it’s the same thing in audits.
frs.eth 🦇🔊@0xfrsmln·
alhamdulillah. I got 1st place in Alchemix audit competition on Immunefi. thanks for the opportunity @immunefi @AlchemixFi x.com/immunefi/statu…
Immunefi@immunefi

The $100,000 USD @AlchemixFi V3 Audit Competition is finished, and the full results have been posted. 100% of the pool has been paid out!
🥇 @0xfrsmln: $12,446
🥈 @ZeroK_____: $8,714
🥉 @niroh30: $7,780
4⃣ @magtentic: $6,997
5⃣ @PaludoX0: $6,748
Check the link below for the full leaderboard and bug reports! 📷👇

4nescient@4nescient·
@xKeywordx @bountyhunt3rz This is one of those optimizations that makes you feel productive while you aren't: your training will not be as intense, and your focus on the podcast is also affected.
Keyword 💙🛠️@xKeywordx·
Okay, so in my last two gym training sessions, I listened to @bountyhunt3rz podcasts. Here's what I observed:
- Both my sessions lasted ~10 mins more. In the breaks, I was probably (unconsciously) listening to the podcast and took longer breaks without realizing it.
- It felt boring af. I usually listen to music (Eminem, Linkin Park, Fast and Furious soundtracks, whatever). Without music, gym felt like 💩.
This doesn't mean the podcast episodes are bad, but man I hated it :))
Here are my conclusions for people who say that they listen to podcasts while in the gym:
1. If you focus on your training, you'll not pay attention to the podcast, and you won't recall most of what people talk about in there. Since you don't remember anything, you might as well listen to music to create a better atmosphere.
2. If you can recall what people discussed in the podcast, it means that you did pay attention to it, which means that you didn't pay attention to the actual training. You are in the gym to train, not listen to podcasts.
3. You're a woman. Only women can do two things at the same time, and do both of them right.
PS: #3 is just a joke, treat it as such and just laugh
PPS: this is a niche banger tweet. You must like it, and God will send you a cookie today
4nescient@4nescient·
@DevDacian If you exclude Paladin Valkyrie and the first redacted project, the avg Crit/High rate drops to ~1.71 per audit. V4 hooks are a superpower when done right, a bug factory when they’re not.
Dacian@DevDacian·
Comparing all our audits by protocol type, our UniswapV4 Hooks audits have the highest avg Crit/High rate per audit at 5.11! Curious why? Some possible reasons include new tech, large attack surface, easy for devs to make dangerous mistakes? Your thoughts?
0xDread4G@princesunk·
@4nescient @codehawks_ @PatrickAlphaC Thanks a lot, I will start with the one here... And please, can I DM you so I can keep in contact and ask questions when I'm stuck somewhere while making progress?
4nescient@4nescient·
Just to clarify, I came in with a bit of a head-start: I'm a software engineer and had already tried a couple of audit contests before joining the Cyfrin Updraft program, so the basics weren't brand-new to me. If you're hunting for that first finding, here is my advice: after each contest, dig through every submitted finding and the final report. Seeing what you missed trains your eye, and pattern recognition is your first weapon, especially on small Solidity projects.