
DCG714104
went through layerzero's gasolina aws deployment repo + the extracted app source. tl;dr: concerning. the reference deployment is public by design, and the sample providers.json ships with rpc quorum: 1 on every mainnet chain.

1. the recommended cdk stack puts a public api gateway in front of a private alb in front of fargate tasks in private subnets: publicLoadBalancer: false, taskSubnets: PRIVATE_WITH_NAT, and an HttpApi with an HttpAlbIntegration. the readme literally tells operators to send the resulting ApiGatewayUrl to layerzero labs.

2. no authorizer, no iam auth mode, no ip allowlist, no waf, no route-level policy anywhere in the repo. the app itself (bootstrap.ts) registers /provider-health, which leaks the configured rpcs, and calls server.listen(port) without a host argument, so it binds on all interfaces.

3. cdk/gasolina/config/providers/mainnet/providers.json sets quorum: 1 for ethereum, bsc, polygon, arbitrum, optimism, fantom, and the rest. the multiple rpc urls per chain are configured as failover, not consensus: the multiprovider code only enforces quorum when quorum > 1 and explicitly bypasses the wrapper when it's 1. the rpcs are mostly public endpoints (llamarpc, publicnode, ankr).

4. provider config lives in an s3 bucket that the cdk stack creates, uploads to, and passes via env vars (PROVIDER_CONFIG_TYPE, CONFIG_BUCKET_NAME). so the trust boundary is the app + the mutable config plane + the upstream rpc tier + whatever sits in front of api gateway.

5. operators are told to validate the deployment by curling the public url for /available-chains, /signer-info?chainName=ethereum, and /provider-health (which, again, leaks the rpcs). external reachability is a documented, encouraged requirement.

caveats: this is the public repo plus extracted non-public source. it doesn't prove what config they ran for the kelp bridge. but the public info and the defaults operators are pointed at look concerning.

read more here: gist.github.com/banteg/2fde29d…
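a rough cdk sketch of the topology in point 1 (not the repo's actual stack; construct ids, vpc wiring, and the container image are hypothetical, only the props named in the post are from the repo):

```typescript
// sketch of the pattern: public http api -> private alb -> fargate in
// private subnets. assumes aws-cdk-lib v2 plus the apigatewayv2 alpha
// modules; ids and values here are illustrative, not the repo's.
import * as cdk from 'aws-cdk-lib'
import * as ec2 from 'aws-cdk-lib/aws-ec2'
import * as ecs from 'aws-cdk-lib/aws-ecs'
import * as patterns from 'aws-cdk-lib/aws-ecs-patterns'
import { HttpApi } from '@aws-cdk/aws-apigatewayv2-alpha'
import { HttpAlbIntegration } from '@aws-cdk/aws-apigatewayv2-integrations-alpha'

const app = new cdk.App()
const stack = new cdk.Stack(app, 'GasolinaLikeStack')
const vpc = new ec2.Vpc(stack, 'Vpc', { maxAzs: 2 })

const svc = new patterns.ApplicationLoadBalancedFargateService(stack, 'Svc', {
  vpc,
  publicLoadBalancer: false, // the alb itself is internal
  taskSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_NAT },
  taskImageOptions: { image: ecs.ContainerImage.fromRegistry('app:latest') },
})

// but the http api in front of it is public by default: no authorizer,
// no iam auth, no route-level policy attached anywhere
const api = new HttpApi(stack, 'Api', {
  defaultIntegration: new HttpAlbIntegration('Alb', svc.listener),
})
new cdk.CfnOutput(stack, 'ApiGatewayUrl', { value: api.url ?? '' })
```

so "private subnets" only describes the compute tier; the reachable surface is whatever the http api exposes.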

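the bind behavior in point 2 is plain node semantics, not anything gasolina-specific: listen(port) with no host argument binds the unspecified address. a minimal standalone repro (hypothetical helper, not app code):

```typescript
import * as http from 'http'
import { AddressInfo } from 'net'
import { once } from 'events'

// node's server.listen(port) with no host binds the unspecified address
// ('::', or '0.0.0.0' on ipv4-only hosts), i.e. every interface,
// not just loopback
async function bindAddress(host?: string): Promise<string> {
  const server = http.createServer()
  server.listen(0, host) // port 0 = os-assigned ephemeral port
  await once(server, 'listening')
  const { address } = server.address() as AddressInfo
  server.close()
  return address
}
```

inside a container behind an alb that's arguably intended, but it means nothing in the process itself restricts who can reach the routes.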

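the provider config shape from point 3 looks roughly like this (paraphrased; the exact keys and urls here are illustrative, not copied verbatim from the repo):

```json
{
  "ethereum": {
    "quorum": 1,
    "providers": ["https://eth.llamarpc.com", "https://ethereum.publicnode.com"]
  },
  "bsc": {
    "quorum": 1,
    "providers": ["https://bsc.publicnode.com", "https://rpc.ankr.com/bsc"]
  }
}
```

multiple urls with quorum: 1 reads like redundancy, but nothing ever cross-checks them.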

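the quorum bypass described in point 3 amounts to logic like this (a sketch of the described behavior over a simplified provider interface; none of these names are from the actual multiprovider source):

```typescript
type Provider = { call(method: string): Promise<string> }

// sketch: with quorum > 1 the wrapper requires matching responses from at
// least `quorum` providers; with quorum 1 it is bypassed entirely and the
// first provider's answer is trusted as-is (failover, not consensus)
function makeProvider(providers: Provider[], quorum: number): Provider {
  if (quorum <= 1) return providers[0] // bypass: no cross-checking at all
  return {
    async call(method: string): Promise<string> {
      const results = await Promise.all(providers.map((p) => p.call(method)))
      const counts: Record<string, number> = {}
      for (const r of results) counts[r] = (counts[r] ?? 0) + 1
      for (const value of Object.keys(counts)) {
        if (counts[value] >= quorum) return value
      }
      throw new Error(`quorum ${quorum} not reached`)
    },
  }
}
```

with quorum: 1, a single compromised or malicious rpc endpoint fully determines what the verifier sees.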