
Constantine | dRPC.ORG
2K posts

Constantine | dRPC.ORG
@constantine_rm
CEO at @dRPCorg - The most performant & reliable Web3 infrastructure
Worldwide · Joined October 2017
243 Following · 9.7K Followers
Pinned Tweet

In dRPC you can run a quorum of data providers, including internal nodes, with custom quorum rules. We made it in 2023: drpc.org/docs/gettingst…. For a mission-critical application like a bridge or oracle, there's no excuse not to set it up. But they didn't.
The framing of the recent KelpDAO and LayerZero incidents as some novel attack vector, or the work of meaningfully smarter attackers, is mostly wrong. The actual failure mode - applications trusting a single RPC endpoint to return honest data - has been discussed openly for years, by @VitalikButerin, @lomashuk, @MicahZoltu, @wagmiAlexander, @ChainLinkGod, @banteg, and many others. It is neither new nor subtle. A closely related failure happened in 2022 with the Ankr DNS hijack on Polygon and Fantom: x.com/Mudit__Gupta/s…
The point here isn't ideological. In a 24/7 market where automated systems act on RPC responses in real time, assuming one provider will always return correct data is a system-level risk. There is no T+2 window in which a human notices the error and reverses it.
When we launched dRPC, cross-verification across a permissioned set of RPC providers was the core idea. The original repo and docs are still up (although outdated since then):
- drpc.org/docs/gettingst…
- github.com/drpcorg/drpc-s…
We used a simple quorum rather than zk-based verification, partly to test real demand before overbuilding. Two observations from that period:
1. The demand was not there. In public, everyone agreed with the thesis. In private, the responses were "we are not ready to pay more for quorum," or "yes, we could apply it to sensitive paths only, but it's not a priority."
2. The risk was real. The market is now discovering this at a cost of roughly $250M.
Because full cross-verification on every request is overkill for most workloads, we eventually shifted toward shadow checks — randomized background comparisons across providers that detect and eject unhealthy nodes before they serve meaningful traffic. This is a reasonable compromise for general workloads. It is not a substitute for quorum on sensitive paths.
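The shadow-check idea can be sketched roughly as follows. This is an illustrative Python sketch, not dRPC's actual implementation; all names (`shadow_check`, `providers`, `sample_rate`) are hypothetical:

```python
import random

def shadow_check(providers, method, params, sample_rate=0.05):
    """Randomly cross-check one provider's answer against another's.

    `providers` maps a provider name to a callable performing the RPC
    call (hypothetical interface, not dRPC's API). Assumes at least
    3 providers so a third one can break ties. Returns the set of
    provider names that disagreed with the tiebreaker (candidates for
    ejection), or an empty set.
    """
    if random.random() > sample_rate:
        return set()  # most requests skip the check entirely
    a, b = random.sample(list(providers), 2)
    ra, rb = providers[a](method, params), providers[b](method, params)
    if ra == rb:
        return set()
    # A third provider breaks the tie: the odd one out gets ejected.
    c = random.choice([p for p in providers if p not in (a, b)])
    rc = providers[c](method, params)
    return {a} if ra != rc else {b}
```

Because only a small random sample of traffic is compared, the per-request overhead stays near zero, which is exactly why this works for general workloads but gives no guarantee on any individual sensitive request.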
So the practical rule, for anyone building infrastructure whose failure mode is user funds:
1. Use at least 3–5 independent, reliable RPC providers.
2. Do not build your load balancer on training wheels. Something like drpc.org/nodecore-open-… is open source, free, and almost certainly better than what you would build in-house. Contributing to it is a better use of time than reinventing it.
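Rule 1 can be sketched as a simple majority read across independent providers. A minimal Python sketch, assuming each provider is just a callable; the names and error handling are illustrative, not any specific library's API:

```python
from collections import Counter

def quorum_call(providers, method, params, quorum=3):
    """Fan a read out to several independent RPC providers and accept
    a value only if at least `quorum` of them agree on it.

    A provider that times out or errors simply loses its vote. If no
    value reaches quorum, fail loudly instead of acting on a possibly
    poisoned response.
    """
    results = []
    for call in providers:
        try:
            results.append(call(method, params))
        except Exception:
            continue  # a down provider loses its vote, nothing more
    if not results:
        raise RuntimeError("all providers failed")
    winner, votes = Counter(results).most_common(1)[0]
    if votes < quorum:
        raise RuntimeError(f"quorum not reached: best answer has {votes} votes")
    return winner
```

With 3-5 independent providers and quorum=3, an attacker has to poison a majority simultaneously; DDoSing the honest ones only produces errors, never a wrong answer.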
You cannot defend against every possible attack. But this particular class is avoidable at low cost, if you are willing to treat RPC as a system-level dependency rather than a commodity input. That is a reasonable bar for anything meant to serve more than a narrow circle of users.
We will update dRPC NodeCore (drpc.org/nodecore-open-…) with strict quorum rules on your side in the near future - stay tuned. If you have more sophisticated security requirements, we are fully open to your requests - feel free to reach out to me via DM here or by email at kz@drpc.org
LayerZero@LayerZero_Core


1. Yes, because it's just a default lb, where your local nodes are always better and prioritized.
2. If someone catches you on the street with a gun, your 16-character password will not help you save your money if the guy with the gun knows it - correct. But LZ said the balancer was not hacked, only the RPCs ;)
2.1. What is "popular"? dRPC is popular; we serve the majority of well-known web3 projects. If you're asking about Alchemy and QN specifically, because only those two are more "popular" right now - we don't have them in the pool, so currently you can't use them for such a quorum. But I believe it's a good moment in time to discuss this with them as well. Ultimately it's not about competition, but collaboration for the common good.
Btw, write me a DM - always happy to speak with fans 🫶

1. Agree that more nodes would have been better, but does that invalidate their approach/architecture?
2. If the machine that runs the signature verification gets hacked (like in this case), this doesn't help you, no? Btw, can you share more here? I was not aware that popular RPC node providers provide signatures of their responses
3. Agree, that's what I mean by pathological


We got a lot of requests to bring this back to life, and as promised, it's live now! drpc.org/docs/gettingst…
If you build a mission-critical dApp, or if part of your functionality is super fragile to RPC poisoning, please use the Verification feature from dRPC via NodeCloud or NodeCore; there is no excuse not to use it, and you can't say, after yet another hack, that you were not aware of this.
Constantine | dRPC.ORG@constantine_rm

It's a good question, and we can't say "use dRPC's NodeCloud or NodeCore, and you will be 100% SAFU", I'm not "that" CZ :D
But the possibility of such an attack will be much lower.
Based on their message, they used 2 self-hosted RPCs (poisoned) and 1 third-party RPC (DDoSed). With the feature I mentioned (drpc.org/docs/gettingst…), it would be impossible to hack.
Why:
1. Not just 3 nodes, with 2 of them under the control of a single DevOps.
2. Each response is signed by the provider, with a key held on the provider's side.
3. If quorum isn't reached (imagine some node was poisoned or DDoSed), you get an error, not a wrong response.
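Points 2 and 3 above can be sketched together as sign-then-verify quorum. The scheme here uses HMAC purely to keep the example stdlib-only (a real deployment would use asymmetric signatures), and every name and the tuple shape are assumptions, not dRPC's actual wire format:

```python
import hmac, hashlib

def sign_response(provider_key: bytes, payload: bytes) -> str:
    """Provider side: sign the raw response bytes with a key only the
    provider holds."""
    return hmac.new(provider_key, payload, hashlib.sha256).hexdigest()

def verified_quorum(responses, known_keys, quorum):
    """Gateway side: drop responses whose signature does not verify,
    then require `quorum` identical payloads among what remains.

    `responses` is a list of (provider_name, payload, signature)
    tuples; `known_keys` maps provider name to its signing key.
    """
    valid = [
        payload
        for name, payload, sig in responses
        if name in known_keys
        and hmac.compare_digest(sig, sign_response(known_keys[name], payload))
    ]
    for payload in set(valid):
        if valid.count(payload) >= quorum:
            return payload
    # Poisoned or DDoSed nodes can only push you here: an explicit
    # error, never a silently wrong response.
    raise RuntimeError("quorum not reached")
```

A forged payload fails signature verification (the attacker does not hold the provider's key), and knocking honest providers offline just drops the vote count below quorum, which surfaces as an error rather than a wrong answer.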

Long time fan of dRPC - I'm curious how you assess the specific situation here? My understanding is that even if L0 had used NodeCore (and it sounds like they had their own version of it), this compromise would likely still have happened, since the machine running NodeCore could have just been swapped out? And then with NodeCloud, the trust of "properly running quorums" would have moved from their infra to your infra? From what I can tell, there was a pathological case in the quorum logic which excluded "down" nodes from the quorum?

@toxzique @banteg @Quicknode "Send enough traffic" via eth_call? And block not just the particular requests that hit the limit, but the entire account? God bless the users of that service in that case :)

@constantine_rm @banteg @Quicknode You don't have to DDoS - just send enough traffic that the legitimate app starts hitting the rate limit

looks very similar to @quicknode, any confirmations from them yet?
banteg@banteg
anyone recognizes this "external rpc"

Not really - I don't even know how exactly KelpDAO is related here. This post is about a technical design issue, based on the official message from LayerZero. And this kind of issue is not unique. It's not pointing at anyone in particular for poor design; it's highlighting a generally poor approach, where for years people refused to spend time and money on RPC reliability.

@constantine_rm Feels like it's a bit of a blame game going on right now.
Let's see what KelpDAO says. Regardless, DeFi is currently in shambles.

Constantine | dRPC.ORG@constantine_rm

I'm not really understanding how the "other providers" were DDoSed by <20M eth_calls over a couple of hours, based on the provided screenshot.
And as @ChainLinkGod mentioned below, there are no clear statements on who was compromised. I believe it was most likely the in-house nodes, because it's quite logical from a typical lb logic perspective:
1. the lb estimates the fastest nodes
2. the lb sends requests to them
So you don't need to DDoS anybody; you can just compromise the in-house nodes, which are closest to the Gateway and considered "fastest" - profit.
We initially built our system to cover such RPC-centralization issues, because this attack vector is not new at all, and it was just a question of time before it hit. Will make a post with my thoughts on all of this today.
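The routing logic described above can be sketched in a couple of lines; node names and latencies here are made up for illustration:

```python
def pick_node(nodes):
    """A typical latency-first balancer: route each request to
    whichever node currently reports the lowest measured latency.

    `nodes` maps node name -> latency in ms (illustrative values).
    In-house nodes sit next to the gateway, so they win this race on
    essentially every request - which is why compromising them is
    enough, with no need to DDoS third-party providers.
    """
    return min(nodes, key=nodes.get)
```

For example, with `{"inhouse-1": 1.2, "inhouse-2": 1.5, "thirdparty": 48.0}` every single request lands on `inhouse-1`, so whoever controls the in-house boxes controls essentially all of the responses.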

The attack was
1. North Korea figured out which RPC providers LZ was using
2. They compromised two of the providers to make them return fake data
3. DDoSed other providers to shut them down, forcing LZ to use the bad ones
AFAIK I was the only one who actually called it

LayerZero@LayerZero_Core

went through layerzero gasolina aws deployment repo + extracted app source.
tl;dr concerning
the reference deployment is public by design. and the sample providers.json ships with rpc quorum: 1 on every mainnet chain.
1. the recommended cdk stack puts a public api gateway in front of a private alb in front of fargate in private subnets. publicLoadBalancer: false, taskSubnets: PRIVATE_WITH_NAT, and an HttpApi with HttpAlbIntegration. the readme literally tells operators to send the resulting ApiGatewayUrl to layerzero labs.
2. no authorizer, no iam auth mode, no ip allowlist, no waf, no route-level policy anywhere in the repo. the app itself (bootstrap.ts) registers /provider-health, which leaks configured rpcs. server.listen(port) without host arg binds to public ip.
3. cdk/gasolina/config/providers/mainnet/providers.json sets quorum: 1 for ethereum, bsc, polygon, arbitrum, optimism, fantom, and the rest. multiple rpc urls are configured as failover, not consensus. the multiprovider code only enforces quorum when quorum > 1 and explicitly bypasses the wrapper when it's 1. rpcs are mostly public endpoints (llamarpc, publicnode, ankr).
4. provider config lives in an s3 bucket that the cdk stack creates, uploads to, and passes via env vars (PROVIDER_CONFIG_TYPE, CONFIG_BUCKET_NAME). so the trust boundary is the app + the mutable config plane + the upstream rpc tier + whatever's in front of api gateway.
5. operators are told to validate by curling the public url for /available-chains, /signer-info?chainName=ethereum, /provider-health (again, leaks rpc). external reachability is an encouraged documented requirement.
caveats: this is the public repo and extracted non-public source. it doesn't prove the config they had for kelp bridge. but the public info and the defaults the operators are pointed at look concerning.
read more here: gist.github.com/banteg/2fde29d…
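The quorum-bypass behavior described in point 3 can be illustrated with a toy multiprovider. This is a reconstruction of the failure mode, not the actual gasolina code, and all names are made up:

```python
from collections import Counter

def make_provider(rpcs, quorum):
    """Toy multiprovider illustrating the trap: consensus is enforced
    only when quorum > 1, so a shipped default of quorum: 1 silently
    degrades several configured RPC urls into plain failover."""

    def consensus(method, params):
        answers = []
        for rpc in rpcs:
            try:
                answers.append(rpc(method, params))
            except Exception:
                continue
        if not answers:
            raise RuntimeError("all providers failed")
        value, votes = Counter(answers).most_common(1)[0]
        if votes < quorum:
            raise RuntimeError("no consensus")
        return value

    def failover(method, params):
        # quorum == 1: the first provider that answers wins, so one
        # poisoned endpoint fully controls the result.
        for rpc in rpcs:
            try:
                return rpc(method, params)
            except Exception:
                continue
        raise RuntimeError("all providers failed")

    return consensus if quorum > 1 else failover
```

With a poisoned endpoint first in the list, `make_provider(rpcs, 1)` returns the attacker's value while `make_provider(rpcs, 2)` returns the honest majority's - same configured URLs, completely different trust model.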

@big_duca @0xArdent @Quicknode @Helius Better to add @dRPCorg in your balancer. Not just +1 provider, but dozens with auto routing under the hood.
Plus for load balancing itself on your infra you can use drpc.org/nodecore-open-…

@0xArdent @Quicknode @Helius We use helius!
We have like 2-3 providers for every integration tho for reliability.

Anyone know someone from @Quicknode?
We are burning through credits and need to upgrade to a higher plan asap.

Finally, you got what your lovely agent always asked for - please meet the dRPC Agent Skills!
dRPC // AI-powered RPC Infrastructure@dRPCorg
Why are you still writing RPC calls in 2026? What if your AI agent could just ask for blockchain data and get it instantly? Learn like a PRO on the thread 👇
Constantine | dRPC.ORG retweeted

@vasily_sumanov @wagmiAlexander @wmougayar @aerodrome @CurveFinance For sure just yet another vibe code genius 😅

@wagmiAlexander @wmougayar Whoever made this dashboard has wrong numbers in it.
Revenue for @aerodrome @CurveFinance and some other projects is much bigger.
This data is wrong, and whoever built this dashboard isn't deep in the subject and didn't even verify the numbers

Someone ranked 50 DeFi protocols by Revenue Per Employee with an interesting dashboard:
defi-efficiency-indexx.onrender.com
Spoiler: The top 3 are GMX, Uniswap, Aave.
by Akpan Daniel medium.com/p/we-ranked-50…


@wagmiAlexander Glad to hear that people are starting to understand this, finally
Constantine | dRPC.ORG retweeted

@tempo Mainnet is live 🚀
A new chain purpose-built for real-world payments, not general-purpose experimentation.
Builders can now start using Tempo via public RPC endpoints 👇
drpc.org/chainlist/temp…

Looking for a reliable and performant RPC for @tempo?
You should know where to go -> dRPC
tempo-mainnet.drpc.org




