Fahad Saleh

832 posts


@cryptoeconprof

Emerson/Merrill Lynch Professor @UF. Advisor @CUSEAS CDFT. FinTech Fellow @CornellMBA. Co-Organizer for CBER Forum. PhD @NYUStern MSc @Columbia BSc @CornellEng.

Joined August 2021
151 Following · 963 Followers
Pinned Tweet
Fahad Saleh@cryptoeconprof·
As tokenization of traditional financial assets gains regulatory attention, a fundamental question demands rigorous analysis: Can blockchain technology provide secure settlement? In a new paper with @camharvey (@DukeU) and Kose John (@NYUStern), we develop an economic model to address this question. Our analysis yields two main insights.

First: blockchain security is tied to blockchain productivity. A blockchain generating more economic value for users can share that value with validators through higher staking rewards. In turn, higher rewards attract more staked capital, raising the cost for any adversary seeking to disrupt settlement.

Second: under reasonable conditions, a PoS blockchain can be secured against arbitrarily large attack incentives. This may seem surprising, but it follows from a familiar financial phenomenon that prior blockchain literature has overlooked: price impact.

To disrupt settlement, an attacker must acquire a large fraction of the blockchain's native asset. However, staked assets are subject to withdrawal delays and cannot be traded immediately, so a higher staking ratio translates directly into a smaller circulating float of the blockchain's native asset. In particular, the attacker must acquire what they need from unstaked assets, and buying a large share of a small supply drives up the price.

This is where the economics become powerful. Higher staking rewards attract more capital into staking, which shrinks circulating supply, amplifying the price impact an attacker would face. Because staking responds to incentives, the staking ratio can be engineered to achieve a desired level of security. Our model shows that, by setting staking rewards appropriately, price impact can be made arbitrarily large, rendering attacks unprofitable regardless of potential gains.

A concrete illustration: Ethereum currently has roughly 30% of ether staked. A majority attack would require purchasing another 30% from the 70% in circulation. Our model shows this generates a price impact of approximately 75%, a cost prior literature has failed to incorporate. Importantly, 75% reflects current staking levels. By increasing staking incentives to raise the staking ratio, price impact can be made larger. This is precisely how a blockchain can be secured against arbitrarily large attack incentives.

📄 Paper: papers.ssrn.com/sol3/papers.cf…
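For readers who want to see where a number like 75% can come from, here is a toy calculation. It assumes the circulating float trades against a constant-product (x·y = k) price curve, under which the average execution price for buying d units out of a float of x is x/(x−d) times the pre-attack price. This is an illustrative sketch with a made-up function name, not necessarily the exact model in the paper.

```python
def attack_premium(staked: float, needed: float) -> float:
    """Average price premium an attacker pays to buy `needed` of total
    supply out of a circulating float of (1 - staked), assuming a
    constant-product (x*y = k) price curve for the float.

    Average execution price relative to the pre-attack price is
    float / (float - needed), so the premium is that ratio minus 1.
    """
    circulating = 1.0 - staked
    if needed >= circulating:
        raise ValueError("attacker cannot buy more than the circulating float")
    return circulating / (circulating - needed) - 1.0

# Numbers from the thread: ~30% of ether staked, attacker buys another
# 30% of total supply from the 70% in circulation.
print(round(attack_premium(staked=0.30, needed=0.30), 4))  # 0.75 -> 75% price impact

# Raising the staking ratio shrinks the float and amplifies the impact:
print(round(attack_premium(staked=0.50, needed=0.30), 4))  # 1.5 -> 150%
```

Note how the premium diverges as the float shrinks toward the amount the attacker needs, which mirrors the thread's claim that price impact can be made arbitrarily large by raising the staking ratio.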
Fahad Saleh retweeted
Eleanor Terrett@EleanorTerrett·
🚨NEW: The @SECGov’s latest guidance on tokens’ legal status when it comes to investment contracts leaves many interpretive questions unanswered, says @NYcryptolawyer. “It’s best viewed as a bridge toward Clarity, rather than a long term end state.”
Ro Khanna@RoKhanna·
@JEdgar95100348 Lucas oversaw my undergrad economic thesis paper and Becker worked with me to plan an economic panel discussion with him and Sen at Chicago.
Fahad Saleh retweeted
Julian@_julianma·
Find the model and the estimation in the full paper here: arxiv.org/abs/2602.20771 Let us know your theory of why this yield gap persists between Ethereum's two largest DeFi protocols in the comments or in DM!
Fahad Saleh retweeted
Avalanche Policy Coalition 🔺
Here's another way to think about proof-of-stake security. 🧐

Imagine trying to secretly buy 40% of a public company. You can't just purchase it at yesterday's stock price. The more shares you buy:
• The price moves
• Other investors notice
• Supply dries up
• Sellers demand more

Now translate that to a blockchain. 👉🏼 To disrupt settlement, an attacker would need a massive share of staked tokens. But those tokens aren't all sitting on an exchange waiting to be scooped up. Many are locked, many are held by long-term participants, and large purchases push the price up fast.

So the cost of taking control doesn't rise linearly. It accelerates. 📈 That's the economic layer of blockchain security, and it's often overlooked. We explored this dynamic and what it entails on our latest podcast.

🎧 Listen Here:
🟢 open.spotify.com/episode/3ppgAV…
🟣 podcasts.apple.com/us/podcast/ep-…
🌐 avalanchepolicy.com/en/podcast/ep-…
Fahad Saleh@cryptoeconprof·
Dear academic economists: Please stop claiming that the goal of the blockchain community is to build a decentralized system for the sake of having a decentralized system. Decentralization has never been the goal itself, although it may help to achieve the goal.

EF lays out the goal clearly here: "Our ultimate goal is for Ethereum to pass the walkaway test: its protocol and core application layers become robust and trustless enough that they would continue to reliably function and evolve even if the Foundation and today's core developers disappeared tomorrow."

Notably, centralization in aspects of the ecosystem (e.g., the builder market) does not imply that the system fails the walkaway test, and therefore does not imply that blockchains are failing to achieve their intended goals. Observing centralization and declaring failure is a strawman, not a critique.
Ethereum Foundation@ethereumfndn

1/ The Mandate clearly states what must be protected: EF will, above all else, remain focused on an Ethereum that is censorship resistant, open source, private, and secure (CROPS), in the service of user self-sovereignty, resistant to extraction and with seamless UX. These are conditions that make Ethereum worth building, using, and defending. Read the full blog here: blog.ethereum.org/2026/03/13/ef-…

Fahad Saleh retweeted
Aavescan@aavescan·
New paper from @ethereumfndn studying ETH yield markets. Using Aavescan data, the authors explore inefficiency between staking and lending markets.
Julian@_julianma

Why do so many people choose to lend out ETH on Aave for 2% instead of stake via Lido for 3% yield? In a new paper with Joel Hasbrouck, @cryptoeconprof, and @casparschwa we build a structural econometric model and estimate it with data from Aave and Lido. The result? There is a huge market inefficiency that cannot be explained by smart contract risk, depeg risk, or any other risk.

Fahad Saleh@cryptoeconprof·
The research university business model has survived decades without serious disruption. That's likely to change though. At research universities, teaching currently subsidizes research. This has been acceptable to top universities because they could place their undergraduate and masters students into high paying jobs despite the lack of focus on teaching. The degree was worth it because of what it signaled to employers, not necessarily because of what it taught. AI, however, threatens to change this. Specifically, when AI replaces the entry level PowerPoint jockeys graduating out of college, the placement premium disappears. And when the placement premium disappears, the current research university business model breaks.
Prof. Brian Keating@DrBrianKeating

The Broken Priorities of Academia | (@AswathDamodaran)

Fahad Saleh
Fahad Saleh@cryptoeconprof·
"The truth is that most people inside the system don’t spend much time reflecting on how broken it is because their careers and livelihoods depend on navigating it." This statement could be applied to many other contexts both inside and outside of academia. You are highlighting a generic problem of microeconomic incentives whereby those benefiting from a system do not want to disrupt that system. Disruption needs to come from external forces.
Jason Locasale@LocasaleLab·
Most scientists feel this frustration but still play the game. I did too for many years. The truth is that most people inside the system don’t spend much time reflecting on how broken it is because their careers and livelihoods depend on navigating it. Everyone knows the publishing system is a racket. Journals capture enormous profits for coordinating unpaid peer review while scientists pay thousands of dollars for the privilege of publishing their own work. But institutions have never developed credible ways to evaluate scientific merit independent of journal prestige. So the system persists. This is exactly why meaningful reform will likely have to come from outside the existing structure. Journals are only one piece of a much larger biomedical research industrial complex that needs reform.
Ruslan Rust@rust_ruslan

I currently have three papers in review at "high impact" journals. One of them has been sitting there for two years. In that time my daughter was born and learned how to walk, but apparently publishing a PDF was still not possible for me. For another one, after four months in review the editor told me they cannot find a second reviewer and asked me to suggest more reviewers. A third one sent me a message in 2026 saying the PDF I uploaded was larger than 10 MB and that I should please reupload everything to make the file smaller.

All of this just to eventually pay between 7,000 and 12,000 USD per paper so someone can officially approve that the science we do is "legitimate". Reminder: not a single reviewer will be compensated here. I still don't understand how we as scientists can collectively be so smart when doing science and still tolerate a system like this when it comes to sharing our findings. We should move to preprints plus open review, whether human or AI, asap. So frustrated about it.

I'd suggest sharing your work on bioRxiv or medRxiv, reading and reviewing preprints when you can, and highlighting good research, especially if it is still a preprint. Try platforms like ResearchHub (that pay for peer review) and experiment with AI based reviewers for faster feedback. Instead I read this as a proposed "revolutionary" measure:

Fahad Saleh@cryptoeconprof·
Blockchain and AI are complementary technologies. Specifically, AI agents need data to be useful, and a blockchain is an ideal backend to store that data. Why? Because a blockchain can serve as a common point of access for data while also giving users discretion over how that access is granted. This is important because right now your data is fragmented across many platforms, each with its own rules and each able to revoke access at any time. There's no unified way to let an agent work across your digital life and you have limited say in what agents can access. If the data is onchain, an agent could pull from one place and on your terms with no platform serving as a bottleneck. Of course, to make this work, we also need to resolve privacy issues, but that's very much part of the crypto roadmap @_julianma. @AgostinoCapponi, @skominers and I discussed this topic in some detail recently. Check out our conversation: youtube.com/watch?v=EJFxN_…
Fahad Saleh retweeted
Julian@_julianma·
Why do so many people choose to lend out ETH on Aave for 2% instead of stake via Lido for 3% yield? In a new paper with Joel Hasbrouck, @cryptoeconprof, and @casparschwa we build a structural econometric model and estimate it with data from Aave and Lido. The result? There is a huge market inefficiency that cannot be explained by smart contract risk, depeg risk, or any other risk.
Fahad Saleh retweeted
Julian@_julianma·
What happens though is that as Aave yields increase, Lido yields surprisingly decrease! Empirical yields are negatively correlated.
Fahad Saleh retweeted
vitalik.eth@VitalikButerin·
One important technical item that I forgot to mention is the proposed switch from Casper FFG to Minimmit as the finality gadget. To summarize, Casper FFG provides two-round finality: it requires each attester to sign once to "justify" the block, and then again to "finalize" it. Minimmit only requires one round. In exchange, Minimmit's fault tolerance (in our parametrization) drops to 17%, compared to Casper FFG's 33%.

Within Ethereum consensus discussions, I have always been the security assumptions hawk: I've insisted on getting to the theoretical bound of 49% fault tolerance under synchrony, kept pushing for 51% attack recovery gadgets, came up with DAS to make data availability checks dishonest-majority-resistant, etc. But I am fine with Minimmit's properties, in fact even enthusiastic in some respects. In this post, I will explain why.

Let's lay out the exact security properties of both 3SF (not the current beacon chain, which is needlessly weak in many ways, but the ideal 3SF) and Minimmit. "Synchronous network" means "network latency less than 1/4 slot or so", "asynchronous network" means "potentially very high latency, even some nodes go offline for hours at a time". The percentages ("attacker has <33%") refer to percentages of active staked ETH.

## Properties of 3SF

Synchronous network case:
* Attacker has p < 33%: nothing bad happens
* 33% < p < 50%: attacker can stop finality (at the cost of losing massive funds via inactivity leak), but the chain keeps progressing normally
* 50% < p < 67%: attacker can censor or revert the chain, but cannot revert finality. If an attacker censors, good guys can self-organize, they can stop contributing to a censoring chain, and do a "minority soft fork"
* p > 67%: attacker can finalize things at will, much harder for good guys to do minority soft fork

Asynchronous network case:
* Attacker has p < 33%: cannot revert finality
* p > 33%: can revert finality, at the cost of losing massive funds via slashing

## Properties of Minimmit

Synchronous network case:
* Attacker has p < 17%: nothing bad happens
* 17% < p < 50%: attacker can stop finality (at the cost of losing massive funds via inactivity leak), but the chain keeps progressing normally
* 50% < p < 83%: attacker can censor or revert the chain, but cannot revert finality. If an attacker censors, good guys can self-organize, they can stop contributing to a censoring chain, and do a "minority soft fork"
* p > 83%: attacker can finalize things at will, much harder for good guys to do minority soft fork

Asynchronous network case:
* Attacker has p < 17%: cannot revert finality
* p > 17%: can revert finality, at the cost of losing massive funds via slashing

I actually think that the latter is a better tradeoff. Here's my reasoning why:
* The worst kind of attack is actually not finality reversion, it's censorship. The reason is that finality reversion creates massive publicly available evidence that can be used to immediately cost the attacker millions of ETH (ie. billions of dollars), whereas censorship requires social coordination to get around
* In both of the above, a censorship attack requires 50%
* A censorship attack becomes *much harder* to coordinate around when the censoring attacker can unilaterally finalize (ie. >67% in 3SF, >83% in Minimmit). If they can't, then if the good guys counter-coordinate, you get two non-finalizing chains dueling for a few days, and users can pick one. If they can, then there's no natural Schelling point to coordinate soft-forking
* In the case of a client bug, the worst thing that can happen is finalizing something bugged. In 3SF, you only need 67% of clients to share a bug for it to finalize; in Minimmit, you need 83%.

Basically, Minimmit maximizes the set of situations that "default to two chains dueling each other", and that is actually a much healthier and much more recoverable outcome than "the wrong thing finalizing". We want finality to mean final. So in situations of uncertainty (whether attacks or software bugs), we should be more okay with having periods of hours or days where the chain does not finalize, and instead progresses based on the fork choice rule. This gives us time to think and make sure which chain is correct.

Also, I think the "33% slashed to revert finality" of 3SF is overkill. If there is even eg. 15 million ETH staking, then that's 5M ($10B) slashed to revert the chain once. If you had $10B, and you are willing to commit mayhem of a type that violates many countries' computer hacking laws, there are FAR BETTER ways to spend it than to attack a chain. Even if your goal is breaking Ethereum, there are far better attack vectors. And so if we have the baseline guarantee of >= 17% slashed to revert finality (which Minimmit provides), we should judge the two systems from there based on their other properties - where, for the reasons I described above, I think Minimmit performs better.
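The two synchronous-case threshold tables in the thread condense to a few lines of code. A minimal sketch (the function name is ours, and the thresholds are taken directly from the post):

```python
def attack_capability(p, scheme="minimmit"):
    """What an attacker controlling fraction p of active staked ETH can do
    under a synchronous network, per the thresholds stated in the thread."""
    stop_finality, unilateral_finalize = {
        "3sf": (1 / 3, 2 / 3),
        "minimmit": (0.17, 0.83),
    }[scheme]
    if p < stop_finality:
        return "nothing bad happens"
    if p < 0.5:
        return "can stop finality (punished via inactivity leak)"
    if p < unilateral_finalize:
        return "can censor or revert the chain, but cannot revert finality"
    return "can finalize things at will"

# The tradeoff in four calls: Minimmit gives up headroom below 50%,
# but raises the bar for unilateral finalization from 67% to 83%.
print(attack_capability(0.30, "3sf"))       # nothing bad happens
print(attack_capability(0.30, "minimmit"))  # can stop finality (punished via inactivity leak)
print(attack_capability(0.70, "3sf"))       # can finalize things at will
print(attack_capability(0.70, "minimmit"))  # can censor or revert the chain, but cannot revert finality
```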
vitalik.eth@VitalikButerin

Finally, the block building pipeline. In Glamsterdam, Ethereum is getting ePBS, which lets proposers outsource to a free permissionless market of block builders. This ensures that block builder centralization does not creep into staking centralization, but it leaves the question: what do we do about block builder centralization? And what are the _other_ problems in the block building pipeline that need to be addressed, and how? This has both in-protocol and extra-protocol components.

## FOCIL

FOCIL is the first step into in-protocol multi-participant block building. FOCIL lets 16 randomly-selected attesters each choose a few transactions, which *must* be included somewhere in the block (the block gets rejected otherwise). This means that even if 100% of block building is taken over by one hostile actor, they cannot prevent transactions from being included, because the FOCILers will push them in.

## "Big FOCIL"

This is more speculative, but has been discussed as a possible next step. The idea is to make the FOCILs bigger, so they can include all of the transactions in the block. We avoid duplication by having the i'th FOCIL'er by default only include (i) txs whose sender address's first hex char is i, and (ii) txs that were around but not included in the previous slot. So at the cost of one slot delay, only censored txs risk duplication. Taking this to its logical conclusion, the builder's role could become reduced to ONLY including "MEV-relevant" transactions (eg. DEX arbitrage), and computing the state transition.

## Encrypted mempools

Encrypted mempools are one solution being explored to solve "toxic MEV": attacks such as sandwiching and frontrunning, which are exploitative against users. If a transaction is encrypted until it's included, no one gets the opportunity to "wrap" it in a hostile way. The technical challenge is: how to guarantee validity in a mempool-friendly and inclusion-friendly way that is efficient, and what technique to use to guarantee that the transaction will actually get decrypted once the block is made (and not before).

## The transaction ingress layer

One thing often ignored in discussions of MEV, privacy, and other issues is the network layer: what happens in between a user sending out a transaction, and that transaction making it into a block? There are many risks if a hostile actor sees a tx "in the clear" inflight:
* If it's a defi trade or otherwise MEV-relevant, they can sandwich it
* In many applications, they can prepend some other action which invalidates it, not stealing money, but "griefing" you, causing you to waste time and gas fees
* If you are sending a sensitive tx through a privacy protocol, even if it's all private onchain: if you send it through an RPC, the RPC can see what you did; if you send it through the public mempool, any analytics agency that runs many nodes will see what you did

There has recently been increasing work on network-layer anonymization for transactions: exploring using Tor for routing transactions, ideas around building a custom ethereum-focused mixnet, non-mixnet designs that are more latency-minimized (but bandwidth-heavier, which is ok for transactions as they are tiny) like Flashnet, etc. This is an open design space; I expect the kohaku initiative @ncsgy will be interested in integrating pluggable support for such protocols, like it is for onchain privacy protocols.

There is also room for doing (benign, pro-user) things to transactions before including them onchain; this is very relevant for defi. Basically, we want ideal order-matching, as a passive feature of the network layer without dependence on servers. Of course enabling good uses of this without enabling sandwiching involves cryptography or other security, some important challenges there.

## Long-term distributed block building

There is a dream, that we can make Ethereum truly like BitTorrent: able to process far more transactions than any single server needs to ever coalesce locally. The challenge with this vision is that Ethereum has (and indeed a core value proposition is) synchronous shared state, so any tx could in principle depend on any other tx. This centralizes block building. "Big FOCIL" handles this partially, and it could be done extra-protocol too, but you still need one central actor to put everything in order and execute it. We could come up with designs that address this. One idea is to do the same thing that we want to do for state: acknowledge that >95% of Ethereum's activity doesn't really _need_ full globalness, though the 5% that does is often high-value, and create new categories of txs that are less global, and so friendly to fully distributed building, and make them much cheaper, while leaving the current tx types in place but (relatively) more expensive. This is also an open and exciting long-term future design space. firefly.social/post/lens/8144…
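The "Big FOCIL" deduplication rule is simple enough to sketch directly. A toy version, assuming 16 FOCILers indexed 0-15 and standard 0x-prefixed hex addresses (the function name is our own illustration, not protocol code):

```python
def big_focil_committee(sender_address):
    """Default assignment sketched in the thread: the i'th FOCILer
    includes txs whose sender address's first hex character is i.
    (Censored/leftover txs from the previous slot are a separate case.)"""
    first_hex_char = sender_address.lower().removeprefix("0x")[0]
    return int(first_hex_char, 16)

print(big_focil_committee("0xA1b2..."))  # 10
print(big_focil_committee("0x03de..."))  # 0
print(big_focil_committee("0xfe12..."))  # 15
```

Because each of the 16 hex digits maps to exactly one committee, no transaction is picked up by two FOCILers in the same slot, which is the duplication-avoidance property the thread describes.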

Fahad Saleh retweeted
Avalanche Policy Coalition 🔺@AvalanchePolicy·
What actually makes a blockchain secure? 🤔 Well, what we know is: it's not marketing or slogans. And it's certainly not "just trust the protocol."

We decided to ask two economists, Fahad Saleh (@cryptoeconprof) and Kose John, in our recent Policy Layer podcast, and they explained a simple but powerful idea: the more useful a blockchain is, the harder it is to attack. 🪓

Think about it like this. If no one uses a network, no one values it. And if no one values it, it's cheap and easy to mess with. But if a blockchain has real apps, real users, and real money flowing through it, things change. In proof-of-stake systems, you can't just "hack" the network. To take control, you have to buy a huge amount of the token. And the more people that use the network 👉🏼 the more valuable the token becomes.

It's like trying to buy half the houses in a city. If most owners aren't selling, prices skyrocket fast. So security isn't just about code or rules: it's also about incentives, supply and demand, and real economic activity.

If you are curious about the future of finance and the economics behind security, then this podcast might be for you!
🟢 open.spotify.com/episode/3ppgAV…
🟣 podcasts.apple.com/us/podcast/ep-…
🌐 avalanchepolicy.com/en/podcast/ep-…
Fahad Saleh retweeted
vitalik.eth@VitalikButerin·
Now, account abstraction. We have been talking about account abstraction ever since early 2016, see the original EIP-86: github.com/ethereum/EIPs/…

Now, we finally have EIP-8141 ( eips.ethereum.org/EIPS/eip-8141 ), an omnibus that wraps up and solves every remaining problem that AA was intended to address (plus more). Let's talk again about what it does.

The concept, "Frame Transactions", is about as simple as you can get while still being highly general purpose. A transaction is N calls, which can read each other's calldata, and which have the ability to authorize a sender and authorize a gas payer. At the protocol layer, *that's it*. Now, let's see how to use it.

First, a "normal transaction from a normal account" (eg. a multisig, or an account with changeable keys, or with a quantum-resistant signature scheme). This would have two frames:
* Validation (check the signature, and return using the ACCEPT opcode with flags set to signal approval of sender and of gas payment)
* Execution

You could have multiple execution frames; atomic operations (eg. approve then spend) become trivial now. If the account does not exist yet, then you prepend another frame, "Deployment", which calls a proxy to create the contract (EIP-7997 ethereum-magicians.org/t/eip-7997-det… is good for this, as it would also let the contract address reliably be consistent across chains).

Now, suppose you want to pay gas in RAI. You use a paymaster contract, which is a special-purpose onchain DEX that provides the ETH in real time. The tx frames are:
* Deployment [if needed]
* Validation (ACCEPT approves sender only, not gas payment)
* Paymaster validation (paymaster checks that the immediate next op sends enough RAI to the paymaster and that the final op exists)
* Send RAI to the paymaster
* Execution [can be multiple]
* Paymaster refunds unused RAI, and converts to ETH

Basically the same thing that is done in existing sponsored transactions mechanisms, but with no intermediaries required (!!!!). Intermediary minimization is a core principle of non-ugly cypherpunk ethereum: maximize what you can do even if all the world's infrastructure except the ethereum chain itself goes down.

Now, privacy protocols. Two strategies here. First, we can have a paymaster contract, which checks for a valid ZK-SNARK and pays for gas if it sees one. Second, we could add 2D nonces (see docs.erc4337.io/core-standards… ), which allow an individual account to function as a privacy protocol, and receive txs in parallel from many users.

Basically, the mechanism is extremely flexible, and solves for all the use cases. But is it safe? At the onchain level, yes, obviously so: a tx is only valid to include if it contains a validation frame that returns ACCEPT with the flag to pay gas. The more challenging question is at the mempool level. If a tx contains a first frame which calls into 10000 accounts and rejects if any of them have different values, this cannot be broadcasted safely. But all of the examples above can.

There is a similar notion here to "standard transactions" in bitcoin, where the chain itself only enforces a very limited set of rules, but there are more rules at the mempool layer. There are specific rulesets (eg. "validation frame must come before execution frames, and cannot call out to outside contracts") that are known to be safe, but are limited. For paymasters, there has been deep thought about a staking mechanism to limit DoS attacks in a very general-purpose way. Realistically, when 8141 is rolled out, the mempool rules will be very conservative, and there will be a second optional more aggressive mempool. The former will expand over time.

For privacy protocol users, this means that we can completely remove "public broadcasters" that are the source of massive UX pain in railgun/PP/TC, and replace them with a general-purpose public mempool. For quantum-resistant signatures, we also have to solve one more problem: efficiency. Here are posts about the ideas we have for that: firefly.social/post/lens/1gfe… firefly.social/post/x/2027405…

AA is also highly complementary with FOCIL: FOCIL ensures rapid inclusion guarantees for transactions, and AA ensures that all of the more complex operations people want to make actually can be made directly as first-class transactions. Another interesting topic is EOA compatibility in 8141. This is being discussed; in principle it is possible, so all accounts incl existing ones can be put into the same framework and gain the ability to do batch operations, transaction sponsorship, etc, all as first-class transactions that fully benefit from FOCIL.

Finally, after over a decade of research and refinement of these techniques, this all looks possible to make happen within a year (Hegota fork). firefly.social/post/bsky/qmaj…
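To make the "Frame Transactions" concept concrete, here is a toy data-structure sketch of the sponsored-RAI flow described in the thread. All names (`Frame`, `Accept`, the address strings) are our own illustration, not EIP-8141's actual encoding; the only rule implemented is the one stated above: a tx is valid to include only if a validation frame returned ACCEPT with the flag to pay gas.

```python
from dataclasses import dataclass
from enum import Flag, auto

class Accept(Flag):
    NONE = 0
    SENDER = auto()     # frame authorized the tx sender
    GAS_PAYER = auto()  # frame authorized gas payment

@dataclass
class Frame:
    target: str                    # contract this call goes to
    accepts: Accept = Accept.NONE  # flags returned via ACCEPT, if any

@dataclass
class FrameTransaction:
    frames: list  # N calls that can read each other's calldata

    def valid_to_include(self) -> bool:
        # Rule from the thread: only valid to include if some frame
        # returned ACCEPT with the gas-payment flag set.
        return any(Accept.GAS_PAYER in f.accepts for f in self.frames)

# Sponsored-tx shape from the thread (pay gas in RAI via a paymaster):
tx = FrameTransaction(frames=[
    Frame("0xWallet", accepts=Accept.SENDER),        # validation: sender only
    Frame("0xPaymaster", accepts=Accept.GAS_PAYER),  # paymaster validation
    Frame("0xRAI"),                                  # send RAI to paymaster
    Frame("0xDapp"),                                 # execution
    Frame("0xPaymaster"),                            # refund unused RAI
])
print(tx.valid_to_include())  # True
```

The point of the sketch is that sponsorship falls out of the frame list itself: no bundler or relay object exists anywhere in the structure, matching the "no intermediaries required" claim.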
Fahad Saleh retweeted
jack@jack·
we're making @blocks smaller today. here's my note to the company.

####

today we're making one of the hardest decisions in the history of our company: we're reducing our organization by nearly half, from over 10,000 people to just under 6,000. that means over 4,000 of you are being asked to leave or entering into consultation. i'll be straight about what's happening, why, and what it means for everyone.

first off, if you're one of the people affected, you'll receive your salary for 20 weeks + 1 week per year of tenure, equity vested through the end of may, 6 months of health care, your corporate devices, and $5,000 to put toward whatever you need to help you in this transition (if you're outside the U.S. you'll receive similar support but exact details are going to vary based on local requirements). i want you to know that before anything else. everyone will be notified today, whether you're being asked to leave, entering consultation, or asked to stay.

we're not making this decision because we're in trouble. our business is strong. gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. but something has changed. we're already seeing that the intelligence tools we're creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly.

i had two options: cut gradually over months or years as this shift plays out, or be honest about where we are and act on it now. i chose the latter. repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead. i'd rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome. a smaller company also gives us the space to grow our business the right way, on our own terms, instead of constantly reacting to market pressures.

a decision at this scale carries risk. but so does standing still. we've done a full review to determine the roles and people we require to reliably grow the business from here, and we've pressure-tested those decisions from multiple angles. i accept that we may have gotten some of them wrong, and we've built in flexibility to account for that, and do the right thing for our customers.

we're not going to just disappear people from slack and email and pretend they were never here. communication channels will stay open through thursday evening (pacific) so everyone can say goodbye properly, and share whatever you wish. i'll also be hosting a live video session to thank everyone at 3:35pm pacific. i know doing it this way might feel awkward. i'd rather it feel awkward and human than efficient and cold.

to those of you leaving…i'm grateful for you, and i'm sorry to put you through this. you built what this company is today. that's a fact that i'll honor forever. this decision is not a reflection of what you contributed. you will be a great contributor to any organization going forward.

to those staying…i made this decision, and i'll own it. what i'm asking of you is to build with me. we're going to build this company with intelligence at the core of everything we do. how we work, how we create, how we serve our customers. our customers will feel this shift too, and we're going to help them navigate it: towards a future where they can build their own features directly, composed of our capabilities and served through our interfaces. that's what i'm focused on now.

expect a note from me tomorrow.

jack
Fahad Saleh retweeted
Justin Drake@drakefjustin·
Introducing strawmap, a strawman roadmap by EF Protocol. Believe in something. Believe in an Ethereum strawmap. Who is this for? The document, available at strawmap[.]org, is intended for advanced readers. It is a dense and technical resource primarily for researchers, developers, and participants in Ethereum governance. Visit ethereum[.]org/roadmap for more introductory material. Accessible explainers unpacking the strawmap will follow soon™. What is the strawmap? The strawmap is an invitation to view L1 protocol upgrades through a holistic lens. By placing proposals on a single visual it provides a unified perspective on Ethereum L1 ambitions. The time horizon spans years, extending beyond the immediate focus of All Core Devs (ACD) and forkcast[.]org which typically cover only the next couple of forks. What are some of the highlights? The strawmap features five simple north stars, presented as black boxes on the right: → fast L1: fast UX, via short slots and finality in seconds → gigagas L1: 1 gigagas/sec (10K TPS), via zkEVMs and real-time proving → teragas L2: 1 gigabyte/sec (10M TPS), via data availability sampling → post quantum L1: durable cryptography, via hash-based schemes → private L1: first-class privacy, via shielded ETH transfers What is the origin story? The strawman roadmap originated as a discussion starter at an EF workshop in Jan 2026, partly motivated by a desire to integrate lean Ethereum with shorter-term initiatives. Upgrade dependencies and fork constraints became particularly effective at surfacing valuable discussion topics. The strawman is now shared publicly in a spirit of proactive transparency and accelerationism. Why the "strawmap" name? "Strawmap" is a portmanteau of "strawman" and "roadmap". The strawman qualifier is deliberate for two reasons: 1. It acknowledges the limits of drafting a roadmap in a highly decentralized ecosystem. An "official" roadmap reflecting all Ethereum stakeholders is effectively impossible. 
Rough consensus is fundamentally an emergent, continuous, and inherently uncertain process.
2. It underscores the document's status as a work in progress. Although it originated within the EF Protocol cluster, competing views are held among its 100 members, not to mention a rich diversity of non-EFer views.

The strawmap is not a prediction. It is an accelerationist coordination tool, sketching one reasonably coherent path among millions of possible outcomes.

What is the strawmap time frame?
The strawmap focuses on forks extending through the end of the decade. It outlines seven forks by 2029 based on a rough cadence of one fork every six months. While grounded in current expectations, these timelines should be treated with healthy skepticism. The current draft assumes human-first development; AI-driven development and formal verification could significantly compress schedules.

What do the letters on top represent?
The strawmap is organized as a timeline, with forks progressing from left to right. Consensus-layer forks follow a star-based naming scheme with incrementing first letters: Altair, Bellatrix, Capella, Deneb, Electra, Fulu, etc. Upcoming forks such as Glamsterdam and Hegotá have finalized names. Other forks, like I* and J*, have placeholder names (with I* pronounced "I star").

What do the colors and arrows represent?
Upgrades are grouped into three color-coded horizontal layers: consensus (CL), data (DL), and execution (EL). Dark boxes denote headliners (see below), grey boxes indicate offchain upgrades, and black boxes represent north stars. An explanatory legend appears at the bottom. Within each layer, upgrades are further organized by theme and sub-theme. Arrows signal hard technical dependencies or natural upgrade progressions. Underlined text in boxes links to relevant EIPs and write-ups.

What are headliners?
Headliners are particularly prominent and ambitious upgrades.
To maintain a fast fork cadence, the modern ACD process limits itself to one consensus headliner and one execution headliner per fork. For example, in Glamsterdam, these headliners are ePBS and BALs, respectively. (L* is an exceptional fork, displaying two headliners tied to the bigger lean consensus fork. Lean consensus landing in L* would be a fateful coincidence.)

Will the strawmap evolve?
Yes, the strawmap is a living and malleable document. It will evolve alongside community feedback, R&D advancements, and governance. Expect at least quarterly updates, with the latest revision date noted on the document.

Can I share feedback?
Yes, feedback is actively encouraged. The EF Protocol strawmap is maintained by the EF Architecture team: @adietrichs, @barnabemonnot, @fradamt, @drakefjustin. Each has open DMs and can be reached at first.name@ethereum[.]org. General inquiries can be sent to strawmap@ethereum[.]org.
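As a side note, the throughput north stars quoted above can be sanity-checked with back-of-the-envelope arithmetic. This is only a sketch: the assumed average gas cost and byte size per transaction (`AVG_GAS_PER_TX`, `AVG_BYTES_PER_TX`) are illustrative assumptions, not figures taken from the strawmap itself.

```python
# Back-of-the-envelope arithmetic behind the "gigagas L1" and
# "teragas L2" north stars. The per-transaction costs below are
# illustrative assumptions, not figures from the strawmap itself.

GAS_PER_SEC = 1_000_000_000       # "gigagas L1": 1 gigagas/sec
AVG_GAS_PER_TX = 100_000          # assumed average gas per transaction

l1_tps = GAS_PER_SEC // AVG_GAS_PER_TX
print(l1_tps)                     # 10000, i.e. the quoted "10K TPS"

BYTES_PER_SEC = 1_000_000_000     # "teragas L2": 1 gigabyte/sec of data
AVG_BYTES_PER_TX = 100            # assumed average bytes per rollup tx

l2_tps = BYTES_PER_SEC // AVG_BYTES_PER_TX
print(l2_tps)                     # 10000000, i.e. the quoted "10M TPS"
```

With cheaper transactions (e.g. 21K gas for a simple transfer), the same gas budget would yield proportionally more TPS, so the quoted figures implicitly assume a mix of transaction types.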