

Boxer

@0xBoxer
Works with data for @Dune. Talking about onchain data on @indexed_pod



🚨NEW EPISODE🚨 The Hidden Risks of Crypto Bridges

Today we’re joined by @donnoh_eth, Head of Research @l2beat, to dive deep into interoperability, bridging risk, and the hidden trust assumptions behind cross-chain assets.

In this episode we’re discussing:
- Luca’s background and path into crypto
- The L2 roadmap debate and @ethereum's direction
- The new L2BEAT interoperability dataset
- Research goals behind the interop dashboard
- Lock & mint vs burn & mint bridges
- Intent-based bridging and counterparty risk
- Liquidity providers and bridge execution risk
- Canonical vs non-canonical tokens
- Wrapped asset systemic risk
- Multi-chain token configurations (LayerZero-style)
- Bridge exploits and historical failures
- The future of rollups, shared stacks & competition

And much more. Enjoy!

Timestamps:
(00:00) Introduction
(01:05) Luca’s crypto background
(04:36) Latest L2BEAT project
(13:20) Rollup value proposition
(16:08) L2 roadmap hot takes
(20:22) Interop dataset overview
(23:19) Research goals explained
(29:45) Non-mint bridging model
(35:24) Lock & mint mechanics
(38:47) Non-issuer token bridging
(44:31) Bridge aggregator UX
(52:12) Risky token examples
(57:03) Multi-chain failure risks
(1:01:12) Closing thoughts

LLMs process text from left to right: each token can only look back at what came before it, never forward. This means that when you write a long prompt with context at the beginning and a question at the end, the model answers the question having "seen" the context, but the context tokens were processed without any awareness of what question was coming. This asymmetry is a basic structural property of how these models work.

The paper asks what happens if you just send the prompt twice in a row, so that every part of the input gets a second pass where it can attend to every other part. The answer is that accuracy goes up across seven different benchmarks and seven different models (from the Gemini, ChatGPT, Claude, and DeepSeek series of LLMs), with no increase in the length of the model's output and no meaningful increase in response time, because processing the input is done in parallel by the hardware anyway. There are no new losses to compute, no finetuning, no clever prompt engineering beyond the repetition itself.

The gap between this technique and doing nothing is sometimes small, sometimes large (one model went from 21% to 97% on a task involving finding a name in a list). If you are thinking about how to get better results from these models without paying for longer outputs or slower responses, that's a fairly concrete and low-effort finding.

Read with AI tutor: chapterpal.com/s/1b15378b/pro…
Get the PDF: arxiv.org/pdf/2512.14982
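The trick amounts to concatenating the full prompt with itself before the call to the model. Here is a minimal Python sketch of that idea; the helper name repeat_prompt, the separator, and the toy example are my assumptions, since the paper only specifies sending the input twice in a row:

def repeat_prompt(context: str, question: str, sep: str = "\n\n") -> str:
    """Build a prompt in which the entire input appears twice in a row."""
    once = f"{context}{sep}{question}"
    # In the second copy, every token can attend to the full first copy,
    # including the question that only came after the context the first time.
    return f"{once}{sep}{once}"

if __name__ == "__main__":
    doc = "Alice met Bob in Paris. Carol joined the project later."
    question = "Who joined the project later?"
    print(repeat_prompt(doc, question))

The repeated prompt is what you pass as the user message to whatever chat client you use; the model, decoding parameters, and expected output stay exactly the same.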

Meet the Dune crew at @solana Breakpoint from Dec 9th-13th! 🇦🇪 We come bearing gifts: the best Solana onchain data 🎁 @0xBoxer @web3sly @AlsieLC
