Hadas Zeilberger

116 posts

@idocryptography

Cryptography PhD student @YaleACL, https://t.co/PTQFUs4The bluesky: @hadaszeilberger.bsky.social

New Haven, Connecticut · Joined March 2022
71 Following · 387 Followers
Pinned Tweet
Hadas Zeilberger@idocryptography·
Very excited to introduce Khatam (eprint 2024/1843): a new Proximity Gaps result for Multilinear Polynomial Commitment Schemes. Not only does it reduce the size of Basefold (including over Random Foldable Codes), but it also improves Blaze, WHIR, Ligero, and others. 🧵(1/x)
2 replies · 12 retweets · 68 likes · 11.7K views
Hadas Zeilberger@idocryptography·
7/7 Real-time, trustless AI execution requires breaking away from standard cryptographic pipelines. We are building the custom stack to make it a reality. Stay tuned for more from Ritual Research. ⚡
0 replies · 0 retweets · 4 likes · 73 views
Hadas Zeilberger@idocryptography·
6/7 Enter Cascade. For privacy-preserving inference where MPC/FHE latency is a blocker, we use token-level sharding. Instead of secret sharing, we distribute obfuscated prompt fragments across nodes for statistical privacy that runs 100x faster with 150x less bandwidth.
1 reply · 0 retweets · 3 likes · 86 views
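A purely hypothetical toy sketch of token-level sharding, to make the idea concrete: token positions are scattered across nodes so that no single node sees the full prompt. The function, fragmenting scheme, and node count here are illustrative assumptions, not Ritual's actual Cascade design.

```python
import random

# Hypothetical illustration of token-level sharding: each node receives an
# obfuscated fragment (a subset of token positions) rather than a secret
# share.  This is NOT Ritual's Cascade protocol, only a toy sketch.
def shard_tokens(tokens: list[str], num_nodes: int, seed: int) -> list[list[tuple[int, str]]]:
    rng = random.Random(seed)
    shards: list[list[tuple[int, str]]] = [[] for _ in range(num_nodes)]
    for pos, tok in enumerate(tokens):
        shards[rng.randrange(num_nodes)].append((pos, tok))
    return shards

prompt = "the quick brown fox jumps over the lazy dog".split()
for i, shard in enumerate(shard_tokens(prompt, num_nodes=3, seed=7)):
    print(f"node {i}: {shard}")  # each node sees only its own positions
```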
Hadas Zeilberger@idocryptography·
1/7 At Ritual, we're building "super smart contracts" capable of arbitrary, cryptographically secured on-chain compute. Our endgame: real-time proving for the largest, most complex circuits (like LLMs). How? By hyper-specializing and only considering the tradeoffs we need. 🧵
Ritual@ritualnet

Ritual is a lab for autonomous intelligence. The thesis is organized around what durable machine agency actually requires: emancipation from human control, strong privacy, mech design for compute markets, and consensus rules that can schedule and resurrect agents when they die.

1 reply · 0 retweets · 11 likes · 622 views
Hadas Zeilberger retweeted
Alireza Talakoubnejad@websterkaroon·
Professor Siavash Shahshahani, the head of the Math Department, talks about the damage to Sharif University as a result of an American/Israeli strike. Shahshahani's students included Maryam Mirzakhani. He was a significant figure in developing the internet in Iran in the 90s.
7 replies · 270 retweets · 1.2K likes · 56K views
Hadas Zeilberger retweeted
╰┈➤ 🇮🇪 𝐁𝗋ó𐓣ƶy 🇮🇪
🔴 Just a wee reminder, if you don't like Iran's Islamic authoritarianism, it exists because the USA overthrew a secular socialist Iran in 1953 because BP was losing oil profits.
2 replies · 9.9K retweets · 54.2K likes · 709K views
Hadas Zeilberger retweeted
Aakash Gupta@aakashgupta·
A human consumes about 2,000 calories per day. Over 20 years, that’s roughly 17,000 kWh of total food energy. Training GPT-4 consumed an estimated 50 GWh of electricity. That’s 3,000 humans’ worth of “training energy” for a single model run.

And GPT-4 is already dead. OpenAI retired GPT-4o from ChatGPT on February 13th. The model that took 50 GWh to train got less than two years of flagship status before replacement. The human you spent 17,000 kWh “training” for 20 years produces economic output for the next 40 to 60 years. The amortization window on GPT-4 was shorter than a car lease.

Now look at what replaced it. GPT-5.2, released December 2025, is OpenAI’s current default. The GPT-5 series consumes an estimated 18 Wh per average query according to the University of Rhode Island’s AI Lab, up to 40 Wh for extended reasoning. That’s 8.6 times more electricity per response than GPT-4. With 2.5 billion queries hitting ChatGPT daily and GPT-5.2 now the default model, the inference math gets staggering fast. Even at a blended average well below 18 Wh, you’re looking at daily electricity consumption that could power over a million American households.

This is what Altman is actually doing. OpenAI hit $13 billion in annual recurring revenue but still isn’t profitable. They need you to think of AI energy consumption as natural and inevitable, the same way you think about feeding a child, because the alternative framing is that they’re burning through enough electricity to rival small countries while racing to build 1-gigawatt Stargate data centers. The food analogy makes the energy costs feel biological and unavoidable instead of what they are: an engineering and business choice that scales with every model generation.

The comparison sounds clever at a fireside chat in India. It falls apart the second you do the arithmetic.
Chief Nerd@TheChiefNerd

🚨 SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”

416 replies · 3.2K retweets · 14.1K likes · 1.3M views
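A minimal sanity check of the thread's arithmetic, using its own figures plus one outside assumption (roughly 29 kWh/day of electricity for an average US household):

```python
# Sanity check of the thread's energy arithmetic.  All inputs except the
# household figure come from the tweet; ~29 kWh/day per average US
# household is an added assumption.
KCAL_TO_KWH = 4184 / 3.6e6                    # 1 kcal in kWh

food_kwh_20y = 2000 * KCAL_TO_KWH * 365 * 20  # ≈ 17,000 kWh of food energy
gpt4_train_kwh = 50e9 / 1e3                   # 50 GWh in kWh
print(f"humans of training energy: {gpt4_train_kwh / food_kwh_20y:,.0f}")

daily_kwh = 2.5e9 * 18 / 1000                 # 2.5B queries at 18 Wh each
print(f"daily inference: {daily_kwh / 1e6:,.0f} GWh")
print(f"households powered: {daily_kwh / 29 / 1e6:,.2f} million")
```

At a blended average below 18 Wh per query, as the thread itself concedes, the household figure shrinks proportionally.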
Hadas Zeilberger@idocryptography·
@nasqret The writeup isn't saying what you wrote in your tweet. It's still a very measured view of the future. He's just saying that some amount of automation may be possible soon
1 reply · 0 retweets · 2 likes · 335 views
Bartosz Naskręcki@nasqret·
Please spend some time to read this excellent entry by one of the most prominent algebraic geometers of the modern generation. I agree with all the predictions of Daniel. Actually, I am a bit sad, because I thought Daniel's predictions were much more reluctant and conservative than my own. In 5 years we will see that most human mathematicians will not participate in mathematical research as we know it today. Only a few, if any, will remain in the competitive game of designing and proving new theorems.

But I want to elaborate on this aspect. What is this game of mathematics really about? I think, to some extent, that we are like artists, designing for other humans an intellectual experience, concocting ideas that can become viral and change other people's lives by showing new paths for thinking.

I think I am already on the path between the fourth and fifth stage of grief. I still feel very sad that an epoch in which solely human genius was at the peak of intellectual reign on this planet is ending.

So at the end of this post, I want to ask just one question. What will human mathematicians do once AIs surpass humans in their own game? Will we play the game for its own beauty, for the pleasure of understanding, of experiencing the unknown? I think this is the path, and AIs will pursue their own quests, but as long as humanity is still around, we will still perform our own art of thinking, just for the sake of the game. This is our nature.
Daniel Litt@littmath

Some thoughts on AI and mathematics, inspired by "First Proof."

17 replies · 54 retweets · 415 likes · 60.2K views
Hadas Zeilberger retweeted
Edward Ongweso Jr@bigblackjacobin·
we can write a million essays about how the future Silicon Valley wants to build is underwritten by a deep disgust with / contempt for Being A Human, or we can just let them speak for themselves
Chief Nerd@TheChiefNerd

🚨 SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”

69 replies · 1.4K retweets · 7.3K likes · 227.6K views
Hadas Zeilberger retweeted
Jonathan Gorard@getjonwithit·
Like @davidbessis and others, I think that Hinton is wrong. To explain why, let me tell you a brief story. About a decade ago, in 2017, I developed an automated theorem-proving framework that was ultimately integrated into Mathematica (see: youtube.com/watch?v=mMaid2…) (1/15)
vitrupo@vitrupo

Geoffrey Hinton says mathematics is a closed system, so AIs can play it like a game. They can pose problems to themselves, test proofs, and learn from what works, without relying on human examples. “I think AI will get much better at mathematics than people, maybe in the next 10 years or so.”

127 replies · 472 retweets · 2.7K likes · 815.5K views
Hadas Zeilberger@idocryptography·
@levs57 @alrshirzad Look at the top of page 3: they don’t do a union bound over the rounds of sumcheck, they analyze it differently
1 reply · 0 retweets · 2 likes · 69 views
Hadas Zeilberger retweeted
ulrich.haboeck@UHaboeck·
Finally out: the proof of mutual correlated agreement for RS codes, up to the Johnson bound. I have let it circulate in the community for about a year, but never found the time to make it public. For now it's without the improved bounds from the recent proximity gaps paper, but that will be upgraded soon. eprint.iacr.org/2025/2110
2 replies · 17 retweets · 63 likes · 4.6K views
Hadas Zeilberger@idocryptography·
Super cool new work by Ron, Giacomo, Benedikt, and William - making the most efficient polynomial commitment schemes even smaller using tensor codes!
Ron Rothblum@ronrothblum

With the recent flurry of excitement around proximity gaps, it's easy to forget what they’re actually used for - building polynomial commitment schemes (PCS), which are a key backbone of SNARKs. In this new work with the wonderful @benediktbuenz, @GiacomoFenzi and @kleptographic, we construct a new PCS based on tensor codes and code-switching that is very close to optimal. ia.cr/2025/2065

0 replies · 0 retweets · 3 likes · 244 views
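For readers new to the primitive: a tensor code encodes a matrix-shaped message by encoding its rows with one code and its columns with another, and the two passes commute. A toy sketch with made-up generator matrices, illustrating only the definition, not the construction in ia.cr/2025/2065:

```python
import numpy as np

# Toy tensor-code encoding (illustrative only, not the paper's scheme):
# encode the rows of a k1 x k2 message with C2, then the columns with C1.
G1 = np.array([[1, 0], [0, 1], [1, 1]])   # generator of a [3,2] toy code C1
G2 = np.array([[1, 0], [0, 1], [1, 2]])   # generator of a [3,2] toy code C2

msg = np.array([[1, 2], [3, 4]])          # 2x2 message
codeword = G1 @ (msg @ G2.T)              # rows via C2, then columns via C1
assert np.array_equal(codeword, (G1 @ msg) @ G2.T)  # the passes commute
print(codeword)                           # 3x3 tensor codeword
```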
Hadas Zeilberger@idocryptography·
@corcoranwill I think you need to replace the first paper title with "On Reed–Solomon Proximity Gaps Conjectures" by Elizabeth Crites and Alistair Stewart (which is the last link)
0 replies · 0 retweets · 2 likes · 105 views
Will Corcoran@corcoranwill·
The week’s proximity-gap papers:
- Guan et al. – Polynomial generators preserve distance up to Johnson eprint.iacr.org/2025/2010.pdf
- Diamond & Gruen – Random words disprove “up-to-capacity” conjecture eccc.weizmann.ac.il/report/2025/16…
- Kopparty et al. – Stronger Reed–Solomon gaps: O(1/η⁵) exceptions eccc.weizmann.ac.il/report/2025/16…
- Goyal & Guruswami – Random RS codes achieve near-optimal gaps eccc.weizmann.ac.il/report/2025/17…
- Chatterjee–Harsha–Kumar – Deterministic list-decoding of RS codes eprint.iacr.org/2025/2046.pdf
2 replies · 3 retweets · 20 likes · 2.5K views
Will Corcoran@corcoranwill·
Ethproofs call 6 just wrapped! Over 200 joined a deep dive into proximity gaps, the mathematical core of modern hash-based SNARKs. 6 new papers in 6 days reshaped our understanding of these gaps - both breakthroughs and refutations that sharpen what’s provable and what remains open. Links & YT📺 below 👇
5 replies · 10 retweets · 48 likes · 3.7K views
Hadas Zeilberger retweeted
ulrich.haboeck@UHaboeck·
Sitting on the shoulders of giants, I am glad to announce the following paper with Eli Ben-Sasson, Dan Carmon, Swastik Kopparty, and Shubhangi Saraf: eccc.weizmann.ac.il/report/2025/16…

On the one hand, we improve the existing decoder analysis of Ben-Sasson, Carmon, Ishai, Kopparty and Saraf (BCIKS 2020), reducing it to an O(n) soundness error for correlated agreement up to the Johnson radius. In practice, this shows that degree-4 extensions of a 31-bit prime field (like M31, BabyBear or KoalaBear) are sufficient for FRI up to that radius in many applications, provided you are willing to grind.

On the other hand, we provide additional counterexamples that question the proximity gaps conjecture as written. Notably, over binary fields one cannot expect an O(n) error already *at* the Johnson radius, but rather a quadratic one. In general, proximity gaps stop at the distance beyond which a word can have more than field-size-many close codewords, meaning that we have to respect a small gap to capacity. (See also the recent work of Crites and Stewart, as well as Diamond and Gruen.)
11 replies · 24 retweets · 92 likes · 10.2K views
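A quick numeric illustration of the field-size remark, assuming the O(n)/q shape of the soundness error with the constant set to 1 (a placeholder, not the constant from the paper):

```python
import math

# Back-of-envelope: with an O(n)/q correlated-agreement error and
# q = |M31^4| ≈ 2^124, even long codewords leave ~100 bits of soundness
# before grinding.  The constant factor (here 1) is a placeholder.
q = (2**31 - 1) ** 4
for log_n in (21, 25):
    err = 2**log_n / q
    print(f"n = 2^{log_n}: soundness error ≈ 2^{math.log2(err):.0f}")
```

The omitted constant matters, though; see Ariel Gabizon's estimate further down for how a large constant eats into these bits.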
Elizabeth Crites@e1izabethcrites·
Proving mutual correlated agreement up to the Johnson bound is a fail-safe, and likely attainable. However, relying on the Johnson bound would result in significantly increased proof sizes and verification times. Thus, we should aim higher!
1 reply · 1 retweet · 18 likes · 3.7K views
Elizabeth Crites@e1izabethcrites·
In new work with Alistair Stewart, we disprove proximity gaps and list-decodability conjectures up to capacity. These conjectures underpin the security and efficiency of many deployed SNARKs and are the subject of Ethereum’s “Millennium Prize.” eprint.iacr.org/2025/2046 🧵
8 replies · 32 retweets · 126 likes · 16.5K views
Ariel Gabizon@rel_zeta_tech·
The recent paper on the PG conjecture made me think: even within the Johnson bound, don't we have only around 60-70 bits of security from the PG theorem?

That is, current projects use an extension field of approx 124 bits (a quartic extension of a 31-bit field). The error term has q in the denominator and something like n^2 · 20^7 in the numerator. Taking trace length k = 2^20 and, say, rho = 1/2, we get n = 2^21, so the error is something like 2^{21+33}/q ~ 2^{-70}. And this would be more like 2^{-60} for k = 2^25. That is, the probability of picking bad folding randomness is upper bounded by something close to 2^{-70} or 2^{-60}, not e.g. 2^{-100}.

It's not clear to me how exploitable this is. Off the top of my head, you would need to e.g. grind 2^60, then list decode to see if there are codewords close to your folded malicious non-codeword, and if there are, try to make small changes to reach a codeword in the last layer, and then grind the queries so they don't catch your wrong folds in the subsequent layer. So for each iteration of grinding you'd need to run the Guruswami-Sudan list-decoding algorithm? Which is heavy, maybe even non-practical? Idk if anyone has experience actually running that alg.
5 replies · 1 retweet · 33 likes · 2.9K views
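A minimal sketch reproducing the tweet's back-of-envelope (the exponents are the tweet's own rough estimates, not the paper's exact bound):

```python
# Replicates the tweet's estimate: error ≈ 2^(n_log + 33) / q with
# q ≈ 2^124 (quartic extension of a 31-bit field).  The extra 33 bits
# follow the tweet's own arithmetic (2^{21+33}/q), covering the
# constant-factor numerator.
LOG_Q = 124
for k_log in (20, 25):
    n_log = k_log + 1   # rho = 1/2 doubles the length: n = 2k
    bits = LOG_Q - (n_log + 33)
    print(f"k = 2^{k_log}: bad folding randomness w.p. ≈ 2^-{bits}")
```

This reproduces the ~2^-70 figure for k = 2^20; for k = 2^25 the same shape gives about 2^-65, in the ballpark of the tweet's ~2^-60.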