Derek

602 posts

@dhsorens

research | formal verification protocol snarkification at @ethereumfndn

Joined June 2016
1.9K Following · 366 Followers
Derek retweeted
nixo.eth 🦇🔊🥐
nixo.eth 🦇🔊🥐@nixorokish·
is ethereum development forgetting to consider its solo stakers? big resounding NOPE as @drakefjustin specifically lays out how the strawmap affects stakers, and which items they need to pay attention to, at the @ethstaker @ethcc stage
2
11
97
3K
Derek
Derek@dhsorens·
@nixorokish @EthCC Great coffee and venue too! Some of the best views of the bay are from the coffee floor of the venue
0
0
2
172
nixo.eth 🦇🔊🥐
nixo.eth 🦇🔊🥐@nixorokish·
appreciation for how well-run @ethcc is:
⤷ self-serve ticket kiosks (no lines)
⤷ childcare (wow)
⤷ coat and luggage checks
⤷ zero wifi issues
⤷ quick security in venue and also very safe around town!
⤷ side event venues all <6 minutes walking distance
⤷ talks ran on-time, didn't see any AV issues
⤷ stellar weather (i assume they ordered this in advance)
the industry vibes could literally be rock bottom but ethcc in cannes would still be immaculate because of location and how well this thing is run
kudos @jdetychey & @bettina_boon
17
20
250
16.4K
Derek
Derek@dhsorens·
@ProfFeynman And lastly - one reason why the scientific world has lost its mind in many areas is because it has underappreciated its metaphysical foundations given to it by Christianity
0
0
0
6
Derek
Derek@dhsorens·
@ProfFeynman These are philosophical questions that relate most fundamentally to how we exist in relation to the world, and are arguably necessary for science to exist at all
1
0
0
12
Prof. Feynman
Prof. Feynman@ProfFeynman·
Religion is a culture of faith; science is a culture of doubt.
84
220
1.3K
70.5K
Derek
Derek@dhsorens·
"Lean implemented in Lean" confused me for a long time, until I started to pick up metaprogramming in Lean and discovered that everything was .. well .. still in Lean. @Leonard41111588’s essay summarizes my a-ha moment really well
Leonardo de Moura@Leonard41111588

Whenever I give a talk, people ask me: "What makes Lean different?", "Why did it succeed?" I finally wrote it down. Four things I believe, one honest weakness, and why "I fucking love this shit" keeps happening. leodemoura.github.io/blog/2026-4-2-…

0
1
10
1.1K
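The "Lean in Lean" a-ha moment above can be made concrete with a toy example (hypothetical, not from the essay): Lean's parser and elaborator are themselves Lean programs, so extending the language's syntax is ordinary Lean code.

```lean
-- A minimal sketch of "Lean implemented in Lean": this macro is plain
-- Lean code, yet it adds a new surface-syntax construct to the language.
macro "twice" t:term : term => `($t + $t)

#eval twice 21  -- 42
```

The same mechanism, scaled up, is how Lean's own tactics and notation are defined, which is why metaprogramming in Lean never leaves the language.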
Derek
Derek@dhsorens·
Derek@dhsorens

I'm giving a talk at @EthCC[9] this year on how we are integrating ZK proof technology into the Ethereum protocol, and how we're using formal methods to do it as safely as possible. Despite its branding, formal verification is not a silver bullet; it takes a team of experts across many domains to ship this with genuinely high assurance. My talk is called "Safely Snarkifying the Ethereum Protocol" - if you'll be in Cannes next week, come find out more!

0
1
4
665
Derek retweeted
Leonardo de Moura
Leonardo de Moura@Leonard41111588·
Whenever I give a talk, people ask me: "What makes Lean different?", "Why did it succeed?" I finally wrote it down. Four things I believe, one honest weakness, and why "I fucking love this shit" keeps happening. leodemoura.github.io/blog/2026-4-2-…
5
53
280
24K
Derek retweeted
Yoichi Hirai
Yoichi Hirai@pirapira·
Ethereum specs are written in Python. I want to use them in Lean 4 (programming language and proof assistant). So here is leanPython, a proof-of-concept Python interpreter written in Lean 4. It can run some tests in leanSpec (spec of @leanEthereum in Python). The performance looks too good. Python and leanPython are probably still very different. LeanPython has been vibe-coded without any inspections. I guess it’s possible to run tests and narrow the gap.
1
10
42
2.7K
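The leanPython idea — running a Python spec inside Lean 4 — can be sketched in miniature (a hypothetical toy, not leanPython's actual code; the type and function names are invented): model a fragment of Python expressions as a Lean inductive type and interpret it with a Lean function.

```lean
-- Hypothetical toy version of the leanPython approach: a tiny Python
-- expression language embedded as a Lean inductive type, plus an
-- interpreter. The real leanPython covers vastly more of Python.
inductive PyExpr where
  | int : Int → PyExpr
  | add : PyExpr → PyExpr → PyExpr
  | mul : PyExpr → PyExpr → PyExpr

def PyExpr.eval : PyExpr → Int
  | .int n   => n
  | .add a b => a.eval + b.eval
  | .mul a b => a.eval * b.eval

-- The Python expression 2 + 3 * 4:
#eval (PyExpr.add (.int 2) (.mul (.int 3) (.int 4))).eval  -- 14
```

Because the interpreter is itself a Lean definition, properties of the interpreted spec become provable statements in Lean, which is the point of running leanSpec this way.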
Derek retweeted
Type Theory Forall
Type Theory Forall@ttforall·
It’s my great honor to announce that Type Theory Forall has received a Web3 grant. We’re building a next-generation formalization stack for elliptic curve protocols, combining Rocq, Lean 4, and a custom DSL for proof-carrying cryptography. Hiring soon.
6
4
51
3.6K
Derek retweeted
Alex Kontorovich
Alex Kontorovich@AlexKontorovich·
A preview of my talk tomorrow at the Newton Institute @NewtonInstitute (comments welcome)

My primary interest is research math: solving problems, proving theorems. Before 2019, I was accustomed to using Mathematica to check tedious, error-prone algebra in my papers. Do it once, and never waste time checking it again. But algebra was only part of the issue. If I had a lemma, and in a 60-page paper I might have 20 of them, with a dozen parameters all moving around in different ranges and needing to line up perfectly at the end, then even a single stray minus sign could kill the entire paper. The whole enterprise was extremely complex and fragile. (What I'm describing is very common in loads of fields in modern research math.)

In 2019, I watched a lecture of Kevin Buzzard's, and realized the answer: I should use an interactive theorem prover like Lean to check my lemmas the same way Mathematica checks my algebra. (Of course, as I've since learned, there are many benefits to working formally beyond correctness, and these have been extensively enumerated elsewhere, so I won't repeat them here.) But my original motivation for getting involved in formalization was simple: I hoped it would speed up my workflow. It did not. In fact, formalization is brutally tedious, requiring painstakingly spelling out facts that to a human expert are blatantly obvious.

Fast forward to 2025, and AI was getting genuinely good at helping with formalization. I was already using Claude rather extensively when we crossed the finish line on the "Medium" PNT in July 2025. By September 2025, Math Inc's Gauss system autoformalized the Strong PNT, writing over 20K lines of compiling Lean autonomously. Earlier this month, they outdid themselves again, writing 200K lines autonomously and formalizing Viazovska's theorems on optimal sphere packing in dimensions 8 and 24.

So isn't that the dream? AI can now, in some instances, autoformalize very significant theorems. Can we mathematicians just get back to thinking, sketching, and letting AI do the formalization for us?

Not so fast. Autoformalization only works because it is built on top of a big, comprehensive, efficient, coherent monorepo of high-quality formalized mathematics, namely Mathlib. And even in the PNT+ and Viazovska examples, the autoformalizations still depended on substantial earlier human work: setting up the right definitions, the right API, the right abstractions, and so on.

So maybe we now get a nice positive feedback loop: Research -> formal math (thanks to AI) -> grows Mathlib -> enables more research.

Still no. AI formalization, and frankly the first-pass human formalization too, is usually local, ad hoc, single-purpose work. It is not necessarily general, abstract, efficient, or reusable. So it does not in and of itself help grow Mathlib. The second arrow is broken.

Actually, this is not some temporary annoyance, it is inevitable! The goals of doing research and building libraries are misaligned, like scrambling up a cliff versus building an elevator to the top. Both are trying to go up, but for completely different reasons and in completely different ways.

In fact, it is even worse than that: the second arrow may make the feedback loop negative. Let us give that second arrow a name: "canonization". By canonization, I mean the process of taking a local, one-off formalization and turning it into library mathematics: general, reusable, coherent, efficient, and compatible with the rest of the monorepo. This is an extremely difficult and time-consuming task. It requires a large amount of prior knowledge and skill, often in several quite different areas at once.

And here's why the feedback loop may be negative: while a rough formalization can certainly be a technical head start, socially it often strands the problem in the worst possible state: too solved to feel pressing, too idiosyncratic to be reusable. If a formalization already exists in some ad hoc form, then people are much less incentivized to do this work! They get less credit for succeeding, there is less urgency, and less motivation.

Does this sound familiar? It's the same structural problem we had back in 2019, going from proved results to formalized results! So the answer should be obvious. In June 2025, I claimed that (quasi)autoformalization, meaning not entirely autonomous but allowing human intervention and steering, was the greatest short-term challenge in realizing the dream of speeding up research [K2025]. The corresponding claim today is: (quasi)auto-canonization is the greatest short-term challenge for AI systems. I personally know of only one AI company so far that seems to be taking this challenge seriously, namely Harmonic with its Aristotle agent.

Imagine if we get this right. Definitions will still be difficult to automate, but there are orders of magnitude fewer definitions than theorems. Once those foundations are laid (which will still be a ton of human time and effort!), everything else can scale on top.

Right now, the vast majority of research mathematicians working in formalization are, very commendably, working toward growing Mathlib. But they comprise maybe 1% of all professional mathematicians. This is not necessarily because people do not want to work formally. It is because the current system does not match how most mathematicians want to work. People are diverse. They have different strengths and weaknesses, different interests, different workflows.

If we embrace an ecosystem where people are encouraged to formalize freely, with heavy AI assistance, and where the right pieces later get (quasi)auto-canonized into the central monorepo, then I think we could potentially be in position, given the right incentives, training, and culture-shifts, to move from a handful to the majority of mathematicians doing math formally.
20
60
349
84.1K
Derek retweeted
Justin Drake
Justin Drake@drakefjustin·
Introducing strawmap, a strawman roadmap by EF Protocol. Believe in something. Believe in an Ethereum strawmap.

Who is this for?
The document, available at strawmap[.]org, is intended for advanced readers. It is a dense and technical resource primarily for researchers, developers, and participants in Ethereum governance. Visit ethereum[.]org/roadmap for more introductory material. Accessible explainers unpacking the strawmap will follow soon™.

What is the strawmap?
The strawmap is an invitation to view L1 protocol upgrades through a holistic lens. By placing proposals on a single visual it provides a unified perspective on Ethereum L1 ambitions. The time horizon spans years, extending beyond the immediate focus of All Core Devs (ACD) and forkcast[.]org, which typically cover only the next couple of forks.

What are some of the highlights?
The strawmap features five simple north stars, presented as black boxes on the right:
→ fast L1: fast UX, via short slots and finality in seconds
→ gigagas L1: 1 gigagas/sec (10K TPS), via zkEVMs and real-time proving
→ teragas L2: 1 gigabyte/sec (10M TPS), via data availability sampling
→ post quantum L1: durable cryptography, via hash-based schemes
→ private L1: first-class privacy, via shielded ETH transfers

What is the origin story?
The strawman roadmap originated as a discussion starter at an EF workshop in Jan 2026, partly motivated by a desire to integrate lean Ethereum with shorter-term initiatives. Upgrade dependencies and fork constraints became particularly effective at surfacing valuable discussion topics. The strawman is now shared publicly in a spirit of proactive transparency and accelerationism.

Why the "strawmap" name?
"Strawmap" is a portmanteau of "strawman" and "roadmap". The strawman qualifier is deliberate for two reasons:
1. It acknowledges the limits of drafting a roadmap in a highly decentralized ecosystem. An "official" roadmap reflecting all Ethereum stakeholders is effectively impossible. Rough consensus is fundamentally an emergent, continuous, and inherently uncertain process.
2. It underscores the document's status as a work-in-progress. Although it originated within the EF Protocol cluster, there are competing views held among its 100 members, not to mention a rich diversity of non-EFer views.
The strawmap is not a prediction. It is an accelerationist coordination tool, sketching one reasonably coherent path among millions of possible outcomes.

What is the strawmap time frame?
The strawmap focuses on forks extending through the end of the decade. It outlines seven forks by 2029 based on a rough cadence of one fork every six months. While grounded in current expectations, these timelines should be treated with healthy skepticism. The current draft assumes human-first development. AI-driven development and formal verification could significantly compress schedules.

What do the letters on top represent?
The strawmap is organized as a timeline, with forks progressing from left to right. Consensus layer forks follow a star-based naming scheme with incrementing first letters: Altair, Bellatrix, Capella, Deneb, Electra, Fulu, etc. Upcoming forks such as Glamsterdam and Hegotá have finalized names. Other forks, like I* and J*, have placeholder names (with I* pronounced "I star").

What do the colors and arrows represent?
Upgrades are grouped into three color-coded horizontal layers: consensus (CL), data (DL), execution (EL). Dark boxes denote headliners (see below), grey boxes indicate offchain upgrades, and black boxes represent north stars. An explanatory legend appears at the bottom. Within each layer, upgrades are further organized by theme and sub-theme. Arrows signal hard technical dependencies or natural upgrade progressions. Underlined text in boxes links to relevant EIPs and write-ups.

What are headliners?
Headliners are particularly prominent and ambitious upgrades. To maintain a fast fork cadence, the modern ACD process limits itself to one consensus and one execution headliner per fork. For example, in Glamsterdam, these headliners are ePBS and BALs, respectively. (L* is an exceptional fork, displaying two headliners tied to the bigger lean consensus fork. Lean consensus landing in L* would be a fateful coincidence.)

Will the strawmap evolve?
Yes, the strawmap is a living and malleable document. It will evolve alongside community feedback, R&D advancements, and governance. Expect at least quarterly updates, with the latest revision date noted on the document.

Can I share feedback?
Yes, feedback is actively encouraged. The EF Protocol strawmap is maintained by the EF Architecture team: @adietrichs, @barnabemonnot, @fradamt, @drakefjustin. Each has open DMs and can be reached at first.name@ethereum[.]org. General inquiries can be sent to strawmap@ethereum[.]org.
205
415
1.6K
604.8K
Derek retweeted
Justin Drake
Justin Drake@drakefjustin·
Today is a momentous day for quantum computing and cryptography. Two breakthrough papers just landed (links in next tweet). Both papers improve Shor's algorithm, infamous for cracking RSA and elliptic curve cryptography. The two results compound, optimising separate layers of the quantum stack. The results are shocking. I expect a narrative shift and a further R&D boost toward post-quantum cryptography.

The first paper is by Google Quantum AI. They tackle the (logical) Shor algorithm, tailoring it to crack Bitcoin and Ethereum signatures. The algorithm runs on ~1K logical qubits for the 256-bit elliptic curve secp256k1. Due to the low circuit depth, a fast superconducting computer would recover private keys in minutes. I'm grateful to have joined as a late paper co-author, in large part for the chance to interact with experts and the alpha gleaned from internal discussions.

The second paper is by a stealthy startup called Oratomic, with ex-Google and prominent Caltech faculty. Their starting point is Google's improvements to the logical quantum circuit. They then apply improvements at the physical layer, with tricks specific to neutral atom quantum computers. The result estimates that 26,000 atomic qubits are sufficient to break 256-bit elliptic curve signatures. This would be roughly a 40x improvement in physical qubit count over the previous state of the art. On the flip side, a single Shor run would take ~10 days due to the relatively slow speed of neutral atoms.

Below are my key takeaways. As a disclaimer, I am not a quantum expert. Time is needed for the results to be properly vetted. Based on my interactions with the team, I have faith the Google Quantum AI results are conservative. The Oratomic paper is much harder for me to assess, especially because of the use of more exotic qLDPC codes. I will take it with a grain of salt until the dust settles.

→ q-day: My confidence in q-day by 2032 has shot up significantly. IMO there's at least a 10% chance that by 2032 a quantum computer recovers a secp256k1 ECDSA private key from an exposed public key. While a cryptographically-relevant quantum computer (CRQC) before 2030 still feels unlikely, now is undoubtedly the time to start preparing.

→ censorship: The Google paper uses a zero-knowledge (ZK) proof to demonstrate the algorithm's existence without leaking actual optimisations. From now on, assume state-of-the-art algorithms will be censored. There may be self-censorship for moral or commercial reasons, or because of government pressure. A blackout in academic publications would be a tell-tale sign.

→ cracking time: A superconducting quantum computer, the type Google is building, could crack keys in minutes. This is because the optimised quantum circuit is just 100M Toffoli gates, which is surprisingly shallow. (Toffoli gates are hard because they require production of so-called "magic states".) Toffoli gates would consume ~10 microseconds on a superconducting platform, totalling ~1,000 sec of Shor runtime.

→ latency optimisations: Two latency optimisations bring key cracking time to single-digit minutes. The first parallelises computation across quantum devices. The second involves feeding the pubkey to the quantum computer mid-flight, after a generic setup phase.

→ fast- and slow-clock: At first approximation there are two families of quantum computers. The fast-clock flavour, which includes superconducting and photonic architectures, runs at roughly 100 kHz. The slow-clock flavour, which includes trapped ion and neutral atom architectures, runs roughly 1,000x slower (~100 Hz, or ~1 week to crack a single key).

→ qubit count: The size-optimised variant of the algorithm runs on 1,200 logical qubits. On a superconducting computer with surface code error correction that's roughly 500K physical qubits, a 400:1 physical-to-logical ratio. The surface code is conservative, assuming only four-way nearest-neighbour grid connectivity. It was demonstrated last year by Google on a real quantum computer.

→ future gains: Low-hanging fruit is still being picked, with at least one of the Google optimisations resulting from a surprisingly simple observation. Interestingly, AI was not (yet!) tasked to find optimisations. This was also the first time authors such as Craig Gidney attacked elliptic curves (as opposed to RSA). Shor logical qubit count could plausibly go under 1K soonish.

→ error correction: The physical-to-logical ratio for superconducting computers could go under 100:1. For superconducting computers that would mean ~100K physical qubits for a CRQC, two orders of magnitude away from the state of the art. Neutral atom quantum computers are amenable to error correcting codes other than the surface code. While much slower to run, they can bring the physical-to-logical qubit ratio down closer to 10:1.

→ Bitcoin PoW: Commercially-viable Bitcoin PoW via Grover's algorithm is not happening any time soon. We're talking decades, possibly centuries away. This observation should help focus the discussion on ECDSA and Schnorr. (Side note: as an unofficial Bitcoin security researcher, I still believe Bitcoin PoW is cooked due to the dwindling security budget.)

→ team quality: The folks at Google Quantum AI are the real deal. Craig Gidney (@CraigGidney) is arguably the world's top quantum circuit optimisooor. Just last year he squeezed 10x out of Shor for RSA, bringing the physical qubit count down from 10M to 1M. Special thanks to the Google team for patiently answering all my newb questions with detailed, fact-based answers. I was expecting some hype, but found none.
318
1.2K
5.8K
1.5M
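The headline numbers in the thread above are consistent with each other, and the back-of-envelope arithmetic can be checked directly (inputs are the tweet's own estimates, not independently verified figures):

```lean
-- Back-of-envelope check of the quoted figures.
-- 100M Toffoli gates at ~10 µs per gate on superconducting hardware:
#eval (100_000_000 * 10) / 1_000_000  -- 1000 seconds of Shor runtime, i.e. minutes

-- 1,200 logical qubits at a 400:1 surface-code physical-to-logical ratio:
#eval 1200 * 400                      -- 480000 physical qubits, "roughly 500K"

-- Slow-clock machines run ~1,000x slower, so ~1000 s becomes ~1M s:
#eval (1000 * 1000) / 86400           -- 11 days, matching "~1 week to ~10 days"
```

Each `#eval` reproduces one claim from the thread: the ~1,000 s cracking time, the ~500K physical-qubit estimate, and the roughly week-to-ten-day runtime on slow-clock (trapped ion or neutral atom) architectures.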