

DecentMuse
@DecentMuse
Crypto OG since 2013. Perpetual student. Enthusiast of the decentralized, open-source economic automation fabric of the internet.





NEW: We are short Ether $ETH, and ETH-linked securities, incl. $BMNR. We think ETH tokenomics are impaired following the December 2025 Fusaka upgrade. Vitalik knows it and is selling, while $ETH's most ardent bull, Tom Lee, is throwing good money after bad. $ETH is going lower.



BGD will be leaving Aave.
governance.aave.com/t/bgd-leaving-…








My L2 thesis is still the same: the largest future consumers of blobs (ETH) will be custom/corporate/collective chains that want to customize their environment while plugging into the global digital substrate. I think major financial hubs and differentiated L2s will thrive, but the future of blockchain isn't limited to finance/gambling; it goes far beyond that. Blockchain isn't just a layer on top of the internet, it's the internet itself evolving its own infrastructure/protocol stack in real time. Blockchain is the pathway the internet (aka the global social coordination layer) is using to evolve into a suitable digital economic substrate for a Type 1 civilization.

With the state of ZK proving today, the L1 is able to scale like crazy now, without sacrificing its core property: decentralization. It's insane how fast ZK tech is iterating. This is actually super omega bullish for ETH. Regardless of L1 scaling factors (which scale L2s even more), L2s/modularity will always make sense for the use cases mentioned above (customization).

I think a larger framing will eventually shift to value gained through interoperability: part of the reason you want to plug into the internet is to have some level of globally reaching connectivity. How much "is it Ethereum?" a chain needs to be will depend on how important it is to have atomic composability, from a native rollup, where execution is basically identical to everything happening on the L1, through the varying degrees on a gradient moving away from that. In some cases, ETH-based DA/blobs won't be necessary or may be a hindrance; in other cases they'll be a good value anchor by measure of interoperability degree/ease/security/connectivity (however it unfolds).
If you, like me, believe the world is becoming more global/digital, and the internet as we know it is continuing to evolve, then it's clear there will be exponentially more use cases plugging into this future global digital economic substrate. That means a ton of activity growth on the L1, plus a ton of growth in entities/collectives that will materialize as L2s. The more L2s there are, the more L2s will consume blobs (ETH), and the sooner we will see blob market saturation and true blob pricing emerge (we haven't even seen this yet). These L2s are structural ETH demand vectors: they have to perpetually consume ETH to continue operating, for as long as they want to exist. That's omega bullish ETH too. So in conclusion: L1 scaling is omega bullish ETH, L2 scaling is omega bullish ETH... the common denominator is omega bullish ETH.
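To ground the "true blob pricing" point: EIP-4844 prices blob gas with an exponential-in-excess mechanism, so the fee only leaves its floor once blocks consistently carry more blobs than the target. The sketch below follows the `fake_exponential` integer approximation from the EIP-4844 spec; the constants are the spec values, but treat this as an illustration, not a consensus implementation.

```python
# Sketch of the EIP-4844 blob base fee mechanism (constants per the EIP;
# illustrative, not a consensus implementation).
MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477
TARGET_BLOB_GAS_PER_BLOCK = 393216  # 3 blobs * 131072 blob gas each

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    """Blob base fee grows exponentially in the accumulated excess blob gas."""
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)
```

As long as demand stays at or below the per-block target, excess blob gas hovers near zero and the fee sits at the 1-wei floor; sustained above-target demand compounds the fee exponentially until demand backs off. That is the saturation threshold the post argues we have not crossed yet.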


There have recently been some discussions on the ongoing role of L2s in the Ethereum ecosystem, especially in the face of two facts:

* L2s' progress to stage 2 (and, secondarily, on interop) has been far slower and more difficult than originally expected
* L1 itself is scaling, fees are very low, and gas limits are projected to increase greatly in 2026

Both of these facts, for their own separate reasons, mean that the original vision of L2s and their role in Ethereum no longer makes sense, and we need a new path.

First, let us recap the original vision. Ethereum needs to scale. The definition of "Ethereum scaling" is the existence of large quantities of block space that is backed by the full faith and credit of Ethereum - that is, block space where, if you do things (including with ETH) inside that block space, your activities are guaranteed to be valid, uncensored, unreverted, and untouched, as long as Ethereum itself functions. If you create a 10000 TPS EVM whose connection to L1 is mediated by a multisig bridge, then you are not scaling Ethereum.

This vision no longer makes sense. L1 does not need L2s to be "branded shards", because L1 is itself scaling. And L2s are not able or willing to satisfy the properties that a true "branded shard" would require. I've even seen at least one explicitly saying that they may never want to go beyond stage 1, not just for technical reasons around ZK-EVM safety, but also because their customers' regulatory needs require them to have ultimate control. This may be doing the right thing for their customers. But it should be obvious that if you are doing this, then you are not "scaling Ethereum" in the sense meant by the rollup-centric roadmap. And that's fine! It's fine because Ethereum itself is now scaling directly on L1, with large planned increases to its gas limit this year and in the years ahead.
We should stop thinking about L2s as literally being "branded shards" of Ethereum, with the social status and responsibilities that this entails. Instead, we can think of L2s as a full spectrum, which includes both chains backed by the full faith and credit of Ethereum with various unique properties (eg. not just EVM), and a whole array of options at different levels of connection to Ethereum, which each person (or bot) is free to care about or not depending on their needs.

What would I do today if I were an L2?

* Identify a value add other than "scaling". Examples: (i) non-EVM specialized features/VMs around privacy, (ii) efficiency specialized around a particular application, (iii) truly extreme levels of scaling that even a greatly expanded L1 will not do, (iv) a totally different design for non-financial applications, eg. social, identity, AI, (v) ultra-low latency and other sequencing properties, (vi) maybe built-in oracles or decentralized dispute resolution or other "non-computationally-verifiable" features
* Be stage 1 at the minimum if you're doing things with ETH or other Ethereum-issued assets (otherwise you really are just a separate L1 with a bridge, and you should just call yourself that)
* Support maximum interoperability with Ethereum, though this will differ for each one (eg. what if you're not EVM, or not even financial?)

From Ethereum's side, over the past few months I've become more convinced of the value of the native rollup precompile, particularly once we have the enshrined ZK-EVM proofs that we need anyway to scale L1. This is a precompile that verifies a ZK-EVM proof, and it's "part of Ethereum", so (i) it auto-upgrades along with Ethereum, and (ii) if the precompile has a bug, Ethereum will hard-fork to fix the bug. The native rollup precompile would make full, security-council-free EVM verification accessible.
We should spend much more time working out how to design it in such a way that if your L2 is "EVM plus other stuff", the native rollup precompile verifies the EVM, and you only have to bring your own prover for the "other stuff" (eg. Stylus). This might involve a canonical way of exposing a lookup table between contract call inputs and outputs, and letting you provide your own values to the lookup table (which you would prove separately). This would make it easy to have safe, strong, trustless interoperability with Ethereum. It also enables synchronous composability (see: ethresear.ch/t/combining-pr… and ethresear.ch/t/synchronous-… ).

And from there, it's each L2's choice exactly what they want to build. Don't just "extend L1"; figure out something new to add. This of course means that some will add things that are trust-dependent, or backdoored, or otherwise insecure; this is unavoidable in a permissionless ecosystem where developers have freedom. Our job should be to make it clear to users what guarantees they have, and to build the strongest Ethereum that we can.
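The lookup-table idea can be sketched as a toy model (all names and structures here are hypothetical illustrations, not a proposed spec): the precompile natively verifies the EVM portion of execution, while any call into non-EVM "other stuff" must match an (input → output) entry that was proven separately.

```python
# Toy model of "EVM plus other stuff" verification via a lookup table.
# All names/structures are hypothetical sketches, not a real spec.

def replay_trace(trace, foreign_table):
    """Replay a call trace. 'evm' calls stand in for what the native
    rollup precompile would verify directly; 'foreign' calls must match
    an entry in the separately-proven lookup table."""
    for call in trace:
        if call["kind"] == "foreign":
            key = (call["target"], call["input"])
            if foreign_table.get(key) != call["output"]:
                raise ValueError(f"foreign call {key} not covered by external proof")
    # If nothing mismatched, the combined execution is accepted.
    return True

# Example: one EVM call plus one call into a non-EVM component (eg. a
# Stylus-style contract) whose output is vouched for by its own proof.
table = {("0xStylus", "0xdeadbeef"): "0x01"}
trace = [
    {"kind": "evm", "target": "0xEVM", "input": "0x00", "output": "0x42"},
    {"kind": "foreign", "target": "0xStylus", "input": "0xdeadbeef", "output": "0x01"},
]
```

The design choice this illustrates: the L2 only has to prove the foreign (input, output) pairs with its own prover; everything else inherits the enshrined ZK-EVM verification, which is what makes the composition trustless from Ethereum's point of view.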


Today marks an inflection point in the Ethereum Foundation's long-term quantum strategy. We've formed a new Post-Quantum (PQ) team, led by the brilliant Thomas Coratger (@tcoratger). Joining him is Emile, one of the world-class talents behind leanVM. leanVM is the cryptographic cornerstone of our entire post-quantum strategy.

After years of quiet R&D, EF management has officially declared PQ security a top strategic priority. Our journey began in 2019, with the "Eth3.0 Quantum Security" presentation at StarkWare Sessions. Since 2024, PQ has been central to the @leanEthereum vision. The pace of PQ engineering breakthroughs since then has been nothing short of phenomenal. It's now 2026, and timelines are accelerating. Time to go full PQ:

→ PQ ACD: Antonio Sanso (@asanso) kicks off a bi-weekly All Core Devs PQ transactions breakout call next month. These sessions focus on user-facing security, covering dedicated precompiles, account abstraction, and longer-term transaction signature aggregation with leanVM.

→ PQ foundations: Today we are announcing a $1M Poseidon Prize to harden the Poseidon hash function. We are betting big on hash-based cryptography to enjoy the strongest and leanest cryptographic foundations. Check out our other $1M PQ initiative, the Proximity Prize.

→ PQ devnets: Multi-client PQ consensus devnets are live! Shoutout to pioneers @zeamETH, @ReamLabs, @PierTwo_com, @geanclient, @ethlambda_lean, as well as established consensus teams Lighthouse, Grandine, and soon Prysm. This incredible teamwork is coordinated by @corcoranwill via weekly PQ interop calls.

→ PQ workshops: Building on last year's PQ workshop in Cambridge (see photo), the EF is hosting another 3-day PQ event in October. Top experts from around the world will convene. In addition, a PQ day is set for March 29 in Cannes, just ahead of EthCC.

→ PQ FV and AI: Last week Alex Hicks (@alexanderlhicks) ran a specialised maths AI for 8 hours, at a $200 cost.
It one-shotted a formal proof of one of the hardest lemmas in the foundations of hash-based SNARKs. Mind-blowing. Applied cryptography will never be the same.

→ PQ roadmap: A comprehensive breakdown of the EF's proposed PQ strategy will be shared soon™ on pq[.]ethereum[.]org. The roadmap targets a full transition in the coming years with zero loss of funds and zero downtime. Stay tuned :)

→ PQ education: The ZKPodcast (@zeroknowledgefm) is producing a 6-part video series on Ethereum's PQ strategy. EF Enterprise Acceleration is also preparing material for enterprises and nation-states. Finally, Ethereum is now represented on the PQ advisory board that Coinbase announced yesterday.

Believe in something. Believe in PQ security.
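To ground the bet on hash-based cryptography: the classic Lamport one-time signature below relies on nothing but the preimage resistance of a hash function, which is why hash-based designs are considered the conservative post-quantum choice. This is a minimal textbook sketch using SHA-256, not leanVM's actual scheme.

```python
# Minimal Lamport one-time signature sketch (textbook construction).
# Security rests solely on the hash function's preimage resistance.
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    """256 pairs of random secrets; the public key is their hashes."""
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(x) for x in pair] for pair in sk]
    return sk, pk

def msg_bits(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    """Reveal one secret per message bit (hence one-time use only)."""
    return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits(msg)))
```

Each key pair must only sign once (signing reveals half the secrets), which is why practical hash-based schemes layer Merkle trees or SNARK-based aggregation on top of one-time signatures.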


An important, and perennially underrated, aspect of "trustlessness", "passing the walkaway test", and "self-sovereignty" is protocol simplicity. Even if a protocol is super decentralized with hundreds of thousands of nodes, has 49% Byzantine fault tolerance, and has nodes fully verifying everything with quantum-safe PeerDAS and STARKs, if the protocol is an unwieldy mess of hundreds of thousands of lines of code and five forms of PhD-level cryptography, it ultimately fails all three tests:

* It's not trustless, because you have to trust a small class of high priests who tell you what properties the protocol has
* It doesn't pass the walkaway test, because if existing client teams go away, it's extremely hard for new teams to get up to the same level of quality
* It's not self-sovereign, because if even the most technical people can't inspect and understand the thing, it's not fully yours

It's also less secure, because each part of the protocol, especially if it can interact with other parts in complicated ways, carries a risk of breaking the protocol.

One of my fears with Ethereum protocol development is that we can be too eager to add new features to meet highly specific needs, even if those features bloat the protocol or add entire new types of interacting components or complicated cryptography as critical dependencies. This can be nice for short-term functionality gains, but it is highly destructive to preserving long-term self-sovereignty, and to creating a hundred-year decentralized hyperstructure that transcends the rise and fall of empires and ideologies. The core problem is that if protocol changes are judged from the perspective of "how big are they as changes to the existing protocol", then the desire to preserve backwards compatibility means that additions happen much more often than subtractions, and the protocol inevitably bloats over time.
To counteract this, the Ethereum development process needs an explicit "simplification" / "garbage collection" function. "Simplification" has three metrics:

* Minimizing total lines of code in the protocol. An ideal protocol fits onto a single page, or at least a few pages.
* Avoiding unnecessary dependencies on fundamentally complex technical components. For example, a protocol whose security depends solely on hashes (even better: on exactly one hash function) is better than one that depends on hashes and lattices. Throwing in isogenies is worst of all, because (sorry to the truly brilliant, hardworking nerds who figured that stuff out) nobody understands isogenies.
* Adding more _invariants_: core properties that the protocol can rely on. For example, EIP-6780 (selfdestruct removal) added the property that at most N storage slots can be changed per slot, significantly simplifying client development, and EIP-7825 (per-tx gas cap) added a maximum on the cost of processing one transaction, which greatly helps ZK-EVMs and parallel execution.

Garbage collection can be piecemeal, or it can be large-scale. The piecemeal approach takes existing features and streamlines them so that they are simpler and make more sense. One example is the gas cost reforms in Glamsterdam, which make many gas costs that were previously arbitrary instead depend on a small number of parameters that are clearly tied to resource consumption. One large-scale garbage collection was replacing PoW with PoS. Another is likely to happen as part of Lean consensus, opening the room to fix a large number of mistakes at the same time ( youtube.com/watch?v=10Ym34… ). Another approach is "Rosetta-style backwards compatibility", where features that are complex but little-used remain usable but are "demoted" from being part of the mandatory protocol to smart contract code, so new client developers do not need to bother with them.
Examples:

* After we upgrade to full native account abstraction, all old tx types can be retired, and EOAs can be converted into smart contract wallets whose code can process all of those transaction types
* We can replace existing precompiles (except those that are _really_ needed) with EVM or, later, RISC-V code
* We can eventually change the VM from EVM to RISC-V (or another simpler VM); EVM could be turned into a smart contract in the new VM

Finally, we want to move away from client developers feeling the need to handle all older versions of the Ethereum protocol. That can be left to older client versions running in Docker containers.

In the long term, I hope that the rate of change to Ethereum can be slower; for various reasons, I think that ultimately _must_ happen. These first fifteen years should in part be viewed as an adolescence, where we explored a lot of ideas and saw what is useful and what is not. We should strive to keep the parts that are not useful from becoming a permanent drag on the Ethereum protocol. Basically, we want to improve Ethereum in a way that looks like this:







