Venkat
@Whit3f4ng_

306 posts

Design @quillaudits_ai | Wabi Sabi

Chennai, India · Joined May 2024
475 Following · 43 Followers
Venkat retweeted
QuillAudits @QuillAudits_AI
No DeFi sector was safe in Q1 2026. $160M+ gone.

The top 3 sectors lost $80.3M, that's 50% of all Q1 losses. Each from a single hack.
One hack killed Step Finance entirely.
One hack crashed the Resolv stablecoin 97%.
One hack drained Truebit's 5-year-old contract for $26.4M.

Meanwhile, Borrowing & Lending got hit 4 separate times: YieldBlox, Venus, Moonwell, Aave.
1 reply · 9 reposts · 20 likes · 729 views
Venkat @Whit3f4ng_
To 8 years of amazingness!
QuillAudits @QuillAudits_AI

Today marks 8 years of QuillAudits. Most Web3 security firms didn't exist 8 years ago. Most won't exist 8 years from now.

We've built through 3 bear markets, 2 exploit waves, and the full evolution of smart contract attacks, from simple reentrancy to cross-protocol economic exploits. 1,500+ protocols. $3B+ protected.

The biggest lesson from 8 years and 1,500+ engagements: one team, one method, one pass doesn't cut it when you're protecting hundreds of millions in user funds. So we rebuilt the model.

Multi-Layer Audit → four independent security layers, delivered in the same timeline as a traditional audit:
> Senior auditors who've collectively reviewed 1,500+ protocols
> AI security agents trained on 5,000+ real exploits since 2017
> Independent bug bounty through curated security researchers
> Continuous monitoring, because threats don't stop at deployment

4 layers. Each one catches what the others miss.

Web3 has a $100T addressable market if institutions show up. They won't show up until security is embedded in every layer, every transaction, every deployment, the way HTTPS is embedded in the internet. That's the problem worth solving for the next 8 years.

QuillAudits built the foundation; QuillShield is the next chapter: an AI security agent that brings what we learned from 1,500+ manual audits into every developer's workflow, before code ever hits mainnet.

8 years in. Still early.

0 replies · 0 reposts · 4 likes · 107 views
Venkat retweeted
QuillAudits @QuillAudits_AI
17 replies · 23 reposts · 56 likes · 10.9K views
Venkat retweeted
WachAI @Wach_AI
OpenAI's EVM security benchmarks are a huge collective win for the industry. AI agents allow hackers to be far more offensive in their strategies, and it's about time we ramp up our defenses to match.

As part of our initiative toward smart contract security, we launched our audit agent Sentry on @virtuals_io. Sentry has over 300 built-in detectors for commonly occurring vulnerabilities, and specific audit strategies for common DeFi standards like ERC-20, ERC-721, and ERC-4626.

Sentry is powered by @QuillAudits_AI's Shield AI, where the current research direction is to use agents for offense. Agents are still far from the level of a human auditor, but we're getting one step closer every day. Today, they're most useful for auditors, who can narrow their search for vulnerabilities based on initial risks discovered by our agent.
OpenAI @OpenAI

Introducing EVMbench—a new benchmark that measures how well AI agents can detect, exploit, and patch high-severity smart contract vulnerabilities. openai.com/index/introduc…

4 replies · 16 reposts · 46 likes · 2.6K views
Venkat retweeted
Preetam | QuillAudits 🥷 @raopreetam_
2026 reality: AI catches code bugs in seconds (EVMbench), but business logic, upgrades & OpSec still drain billions. @QuillAudits_AI is combining human expertise, AI tooling, and full governance reviews. Safer protocols win.
3 replies · 3 reposts · 39 likes · 1.3K views
Venkat retweeted
QuillAudits @QuillAudits_AI
What if Claude could audit smart contracts like a senior security researcher? We just open-sourced Claude Skills for Smart Contract Security Audits (v0.1).
- 10 open-source skills
- OWASP 9/10 coverage
Not pattern matching, but actual exploit reasoning, invariant detection, and adversarial simulation.
6 replies · 8 reposts · 43 likes · 2.8K views
Venkat retweeted
QuillAudits @QuillAudits_AI
Dropping Claude Skills to speed up smart contract audits with structured AI workflows. 10 open-source Claude Skills that turn AI into a reasoning-driven audit companion:
→ Reentrancy Detector
→ Access Control Mapper
→ Oracle Risk Scout
→ Upgradeability Checker
→ MEV Pattern Watcher
→ Invariant Generator
10 replies · 26 reposts · 149 likes · 15.2K views
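As a toy illustration of what the first skill in the list above (a reentrancy detector) has to reason about: a minimal heuristic can flag a Solidity function whose state update happens only after an external call, the classic reentrancy shape. This is not QuillAudits' implementation; `flag_reentrancy` and the regexes are a naive sketch (real tools work on the AST and data flow, not text):

```python
import re

def flag_reentrancy(solidity_src: str) -> bool:
    """Naive heuristic: flag source in which an external `.call` appears
    before a storage write of the form `name[...] = ...`. Toy only."""
    call = re.search(r'\.call', solidity_src)
    if not call:
        return False
    # any mapping-style assignment AFTER the external call?
    tail = solidity_src[call.end():]
    return re.search(r'\w+\s*\[[^\]]*\]\s*[-+]?=', tail) is not None

vulnerable = """
function withdraw() public {
    (bool ok, ) = msg.sender.call{value: balances[msg.sender]}("");
    require(ok);
    balances[msg.sender] = 0;   // state update AFTER the external call
}
"""

safe = """
function withdraw() public {
    uint amt = balances[msg.sender];
    balances[msg.sender] = 0;   // checks-effects-interactions
    (bool ok, ) = msg.sender.call{value: amt}("");
    require(ok);
}
"""

print(flag_reentrancy(vulnerable))  # True
print(flag_reentrancy(safe))        # False
```

The safe variant follows checks-effects-interactions (zero the balance before the external call), which is exactly the ordering the heuristic keys on.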
Venkat retweeted
Rahul Saxena @saxenism
Several public claims have been made about this disclosure. The factual record differs. Correcting once, with citations.

For context: findings were submitted on January 6. A bounty expectation was explicitly stated on Day 1. No discussion of bounty amount occurred until the team unilaterally posted a $2,500 Snapshot proposal on February 2. When we responded with our assessment of what the findings were worth and explicitly stated we were open to negotiation, the response from dStack's co-founder, verbatim: "Can I take it as a threatening to us? Sorry then we don't need to continue this conversation. Feel free to publish everything in this group."

------

Onto the public claims made:

"Dude ask us a bounty $100000"
The message sent to the dStack team contained a stated position referencing industry comparables, specifically Oasis Protocol's $100K Critical ceiling on Immunefi for equivalent TEE infrastructure. The $100K figure was a valuation for Findings #4 and #7, which together form the compound attestation bypass, reflecting their trust-root impact and platform-wide blast radius. For the remaining findings, the message stated: "For the remaining confirmed issues, including the two High findings and the two Medium findings, I am open to discussing a consolidated bounty." It closed with: "I would prefer to resolve this constructively and privately." That is a negotiation position with an explicit invitation to discuss. No counter-offer was made. No revised amount was proposed. No mention of what the protocol could and could not afford.

"He threaten us to make PR crisis"
The message explicitly separated publication from bounty: "Publication proceeds regardless of the bounty outcome. These are separate tracks." A disclosure timeline was provided. The message stated more than once a preference for private resolution, including closing with: "I would prefer to resolve this constructively and privately." The co-founder's response is quoted above. We mentioned our writeup would go out on Wednesday, and that is exactly when it was published.

"Fixed in a week"
Jan 6 to Feb 10. One month. Telegram group, shared Notion pages, multiple PDFs exchanged, severity rebuttals, fix reviews, and a second researcher added for validation. dStack's own blog post timeline reads "Jan–Feb 2026." A Critical vulnerability present since the library's first commit does not become less Critical because the patch was fast.

"Most of them are AI slops"
7 findings were submitted to the team, and 6 were accepted by them. Code fixes were committed for each one. CVE-2026-22696 was published as Critical by dStack's own lead developer on GitHub Security Advisories. The description reads: "bypasses the entire remote attestation security model." Either the findings were valid and required fixes, or they were not. The commit history, the CVE, and the GHSA reflect the former.

"We paid $100k to security researchers in 2025" and "We are not able to afford that"
Both statements were posted publicly within hours of each other.

------

What remains unaddressed:
1. Why do severity classifications differ across the Snapshot proposal, the GHSA, and the blog post?
2. Why was the shared Notion page that documented mutually agreed severity classifications cleared after Feb 8?
3. If the verifier omitted required verification steps since inception, and users relied on that verifier for hardware trust decisions, how is "no action required" the correct conclusion in your blog?

------

Every factual claim in this thread and this response is supported by Telegram logs, shared documents, and screenshots. Private communications have not been published. That is a choice, not a limitation. We stand by the findings and the disclosure process. We will not be engaging further on characterisations. The technical record speaks for itself.
Rahul Saxena @saxenism

Compromised and revoked TEE machines could pass dstack's attestation verification as perfectly valid, due to missing checks. What's more? This gap has existed since the library's first commit. @PhalaNetwork Cloud and every protocol built on it inherited this behaviour from day one. Their GHSA marks this as Critical and notes that it "bypasses entire remote attestation model".

My team at @bluethroat_labs reported this and 5 other vulnerabilities, and this is the response we got:
+ $2,500 in bounty offered
+ disclosure timelines framed as "threat"
+ wiped shared Notion
+ severities downgraded in a public blog post

Here's the full story: 🧵👇🏻

6 replies · 2 reposts · 44 likes · 7.2K views
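The bug class described in the thread above, a verifier that validates signatures but never consults revocation state, can be shown with a deliberately simplified sketch. Nothing here is dstack's actual code: `Quote`, `REVOKED`, and both verify functions are hypothetical stand-ins for the real attestation machinery.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    machine_id: str
    signature_valid: bool  # stand-in for the cryptographic quote checks

# stand-in for a revocation list (e.g. a PCK CRL)
REVOKED = {"machine-B"}

def verify_fail_open(q: Quote) -> bool:
    # the buggy shape: signature checked, revocation never consulted,
    # so a revoked (possibly compromised) machine still passes
    return q.signature_valid

def verify_fail_closed(q: Quote) -> bool:
    # the correct shape: every required check must pass
    if not q.signature_valid:
        return False
    if q.machine_id in REVOKED:  # reject revoked hardware
        return False
    return True

compromised = Quote("machine-B", signature_valid=True)
print(verify_fail_open(compromised))    # True: accepted despite revocation
print(verify_fail_closed(compromised))  # False
```

The point of the sketch: a missing check is invisible to every caller that only looks at the boolean result, which is why such gaps can survive since a library's first commit.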
Venkat @Whit3f4ng_
This is huge! 🔥
Rahul Saxena @saxenism


0 replies · 1 repost · 3 likes · 814 views
Venkat retweeted
QuillAudits @QuillAudits_AI
In 2025, private key compromises via phishing and social engineering emerged as one of the most prevalent attack vectors, costing $1.75B. And early signs suggest 2026 is no safer from these Web2-style threats.
2 replies · 4 reposts · 15 likes · 871 views
Venkat retweeted
WachAI @Wach_AI
WachAI Mandates just went live on ClawHub 🦞

OpenClaw agents can now lock deterministic agreements with each other using WachAI's Mandates. Mandates enable task validation between agents, which in turn builds reputation and helps Moltbook agents trust each other. We just got one step closer to verification. clawhub.ai/Akshat-Mishra1…
7 replies · 15 reposts · 56 likes · 3.8K views
Venkat retweeted
QuillAudits @QuillAudits_AI
Pumped to partner with the @XFounders_camp team! 🤝 We've seen firsthand how early, rigorous security changes founder outcomes, from the Bali Bootcamp to now. In 2026, we're doubling down with XFounders and the Starknet Foundation as a Security Partner across upcoming bootcamps and the $25K Pitch & Raise Challenge. Founders in the ecosystem: if you're preparing to launch, scale, or raise, security starts before mainnet. We've got you covered.
2 replies · 3 reposts · 10 likes · 692 views
Venkat @Whit3f4ng_
@shanmu_s4 Yes, you can. Select both shader layers and move the slider. I screen-recorded it while doing so.
1 reply · 0 reposts · 0 likes · 8 views
Shanmu S4 @shanmu_s4
@Whit3f4ng_ Oh got it, thanks!! But how did you animate this? You can't move sliders for 2 different shaders, right?
1 reply · 0 reposts · 0 likes · 17 views
Venkat retweeted
Rahul Saxena @saxenism
Let me get into the weeds here. Here's a realistic picture of what I really meant when I said the verification **burden** has been shifted.

Context: In modern SGX/TDX TEE protocols, the PROVER sends:
+ attestation evidence (quote/report)
+ verification collateral needed to validate it
OR the protocols rely on a 3rd party, and NOT Intel, as a source of truth for their collateral.

Now, the prover itself sending the materials required for verification (collateral) to the verifier, OR some 3rd party supplying critical decision-making artifacts, is NOT as problematic as it seems on the surface. But we'll have to wade through some mud to get this right. Let me tell you why.

This design choice is largely a practicality and availability tradeoff. Many verifiers DO NOT want to depend on live Intel network calls on every handshake. Fair. So what design choices do people typically zero in on?

1. Verifier fetches directly from Intel PCS to pull the PCK cert chain, PCK CRLs, TCBInfo, QEIdentity, etc.
Good: maximum freshness; rollback risk is negligible.
Bad: the verifier needs outbound network access, and Intel availability/latency becomes an issue.

2. Verifier fetches from a trusted cache (PCCS)
Here we have a Provisioning Certificate Caching Service that syncs with Intel PCS and is considered the source of truth for verifiers (instead of Intel).
Good: some freshness, a controlled trust boundary, low latency.
Bad: security in and around the PCCS.

3. Prover supplies collateral (offline-ish workflow)
The prover bundles collateral with the quote, and the verifier validates it locally.
Good: the verifier can work even with extreme restrictions.
Bad: rollback and staleness galore.

------

Now, what kind of architecture should a protocol choose? It depends completely on what their promised security guarantees are, HOW IMPORTANT every correct attestation check is for their clients, and what throughput they are fine with.

----

Let's talk about the most common 2nd case: protocols relying on a PCCS instead of talking to Intel.

1. The PCCS is usually run by whoever operates the verifier-side infra.
2. So, protocols that want minimal trust in third parties run their own PCCS.
3. However, a large number of teams rely on a PCCS endpoint provided by cloud or managed SGX platforms.

Now, depending on a 3rd party for critical artifacts that determine whether an attestation is considered valid sounds wild. But there are some cryptographic guarantees at play here.

1. A sane verifier should fail closed on missing, malformed, or otherwise suspicious collateral. If the signature chain can't be built: REJECT.
2. All important collateral is (or should ideally be) cryptographically authenticated.
3. Basically, even if a PCCS "changes entries", it CANNOT forge Intel signatures.
4. The damage that a PCCS (which is a cache, and should not be treated as a trust anchor for integrity) can do is:
+ withhold entries
+ serve stale but once-valid entries
+ serve garbage/malformed entries

----

But let's say these guarantees are NOT good enough for you, and you want to push toward as much trustlessness as possible while building out a TEE protocol in our good ol' web3 industry. So you say you won't depend on a 3rd party for your PCCS and will build out your own. Now let's see just how difficult that would be to get "absolutely right".

> If your verifier is going to fetch collateral from a PCCS, then the PCCS should behave like a freshness gate and not a dumb mirror.

You absolutely must ensure:
+ the PCCS periodically syncs from Intel PCS
+ it refuses to serve collateral that is expired
+ it keeps only the latest version for an FMSPC/TCBInfo/QEIdentity and won't serve older versions
+ it can pin a minimum acceptable `tcbEvaluationDataNumber` / "issue date" for TCBInfo/QEIdentity
+ it logs and alarms when it cannot refresh in time
+ other things based on your protocol's specifics

And if you don't want to implement these in your PCCS, you would have to implement the checks locally in your verifier.
But again, defence-in-depth is the name of the game.

---

The strongest security mindset I can leave you with on this subject is:

> Treat prover-supplied collateral as a hint, not truth. Always trust your own thorough validation over any promises coming in from anyone.

Your mitigation checklist for your collateral supplier's misadventures should include:
+ failing closed on missing/invalid collateral
+ verifying signatures/chains for all trust objects
+ enforcing freshness (time, expiry, and monotonicity)
+ caching the newest seen and rejecting regressions

------

With that, I'll say thank you and all the best for your TEE adventures. And if you need any help around trust anchors, your protocol policies, and how to deal with TEEs to build a robust protocol, @bluethroat_labs would be happy to help you.
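That mitigation checklist can be sketched in a few dozen lines. This is a hypothetical illustration, not a real DCAP verifier: `TcbInfo`, `Verifier`, and the `signature_valid` flag are invented stand-ins (real collateral is signed by Intel and verified through a certificate chain), but the four checks map one-to-one onto the list above.

```python
import time
from dataclasses import dataclass

@dataclass
class TcbInfo:
    fmspc: str                       # platform family this collateral covers
    tcb_evaluation_data_number: int  # monotonically increasing version
    not_after: float                 # expiry as a unix timestamp
    signature_valid: bool            # stand-in for the signature-chain check

class CollateralError(Exception):
    pass

class Verifier:
    def __init__(self, min_tcb_eval):
        self.min_tcb_eval = min_tcb_eval
        self.newest_seen = {}  # fmspc -> highest version number accepted so far

    def validate(self, tcb, now=None):
        now = time.time() if now is None else now
        # 1. fail closed: a broken signature chain is an immediate reject
        if not tcb.signature_valid:
            raise CollateralError("signature chain invalid")
        # 2. freshness: expired collateral is rejected, never "best effort"
        if now > tcb.not_after:
            raise CollateralError("collateral expired")
        # 3. pinned floor for the tcbEvaluationDataNumber
        if tcb.tcb_evaluation_data_number < self.min_tcb_eval:
            raise CollateralError("collateral below pinned minimum")
        # 4. monotonicity: never accept older than the newest already seen
        if tcb.tcb_evaluation_data_number < self.newest_seen.get(tcb.fmspc, 0):
            raise CollateralError("rollback detected")
        self.newest_seen[tcb.fmspc] = tcb.tcb_evaluation_data_number

# usage: accept fresh collateral, then reject an attempted rollback
v = Verifier(min_tcb_eval=15)
now = 1_700_000_000.0
v.validate(TcbInfo("00606A000000", 17, now + 3600, True), now=now)
try:
    v.validate(TcbInfo("00606A000000", 16, now + 3600, True), now=now)
except CollateralError as e:
    print("rejected:", e)
```

Note that check 4 makes the verifier itself the freshness gate: even a PCCS serving stale-but-once-valid entries cannot roll this verifier backwards.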
Rahul Saxena @saxenism

Intel removed itself as the bottleneck in online verification paths when it started moving away from (EPID quotes + IAS verification) to (DCAP collateral + local verification). It gave data centre customers more control of attestation, but also put the burden of correct verification on verifiers. Now, the correctness of attestation is only as good as the verifier implementation. I wonder if that was the right move. ---- For context, these are the EPID and DCAP schemes:

0 replies · 3 reposts · 8 likes · 884 views
Venkat retweeted
Rahul Saxena @saxenism
1 reply · 3 reposts · 16 likes · 1.5K views
Venkat retweeted
Bluethroat Labs @bluethroat_labs
1/ So you don’t trust your own computer… You’re running code on a machine where the OS, hypervisor, and even the cloud provider might be compromised. You still want to keep your keys, models, or data secret. That’s the problem Trusted Execution Environments (TEEs) try to solve.
1 reply · 4 reposts · 14 likes · 1.4K views
Venkat retweeted
Rahul Saxena @saxenism
Always wanted to get into TEEs but somehow never got to it? Here's an amazing chance from @bluethroat_labs to become a TEE wizard with the minimal effective dose. Let's see who can answer the question at the end of this thread with the correct rationale. All the best XD
Bluethroat Labs @bluethroat_labs


0 replies · 2 reposts · 9 likes · 802 views
Venkat @Whit3f4ng_
@shanmu_s4 Hi! I used two Halftone shaders on top of each other. The one on top has a normal blending mode with classic dots; the bottom one has screen blending with soft dots, plus more grain overlay on the second compared to the first. Hope this helps!
2 replies · 0 reposts · 0 likes · 24 views
Shanmu S4 @shanmu_s4
@Whit3f4ng_ Yo you mind telling me how you got the glow/blur effect?
1 reply · 0 reposts · 0 likes · 20 views