Ryan Mickler
@ryanmickler

1.5K posts

pathetic dreamer & platypode - somewhere between Somerville, MA & Melbourne, VIC.

Melbourne, Victoria · Joined June 2009
578 Following · 158 Followers
Commie Tommy @correnterosso
Whatever happened to autonomist marxism
7 replies · 0 reposts · 25 likes · 2.6K views
Ryan Mickler @ryanmickler
how did the world function before the invention of cable ties?
0 replies · 0 reposts · 0 likes · 188 views
Ryan Mickler reposted
Jules @analytichegel
TIL Laurent Lafforgue, who won the Fields Medal alongside Vladimir Voevodsky, is now employed by Huawei to do topos theory for JPEGs
[image]
10 replies · 38 reposts · 455 likes · 30.6K views
Ryan Mickler reposted
Akin Unver @AkinUnver
My new obsession: Ottoman-era data visualizations from Cerîde-i Adliyye, “The Justice Gazette,” a Ministry of Justice publication printed in Türkiye in the mid-1920s casualarchivist.substack.com/p/poetic-justi…
[four images]
65 replies · 1.3K reposts · 7.8K likes · 562.8K views
Ryan Mickler reposted
Math, Inc. @mathematics_inc
We are pleased to share that using Gauss, we have completed a ~200K LOC formalization of Maryna Viazovska’s 2022 Fields Medal theorems on optimal sphere packing in dimensions 8 and 24. This is the only Fields Medal-winning result from this century to be completely formalized, and is the largest single-purpose Lean formalization in history. We are honored to have assisted @SidharthHarihar1 and the rest of the sphere packing team in this achievement. math.inc/sphere-packing
45 replies · 340 reposts · 2.3K likes · 408.4K views
Ryan Mickler reposted
Daniel Litt @littmath
Some thoughts on AI and mathematics, inspired by "First Proof."
[image]
48 replies · 200 reposts · 1.1K likes · 334.3K views
Ryan Mickler reposted
diana @dianalokada
all hot guys have one thing in common they all like mathematics
457 replies · 350 reposts · 5.1K likes · 897.4K views
Ryan Mickler reposted
Anthony Bonato @Anthony_Bonato
Cleo is mother
[image]
15 replies · 33 reposts · 1.1K likes · 87.9K views
Ryan Mickler reposted
Adam Brown @A_G_I_Joe
New paper out today, proving a novel theorem in algebraic geometry with an internal math-specialized version of Gemini. This was a collaboration between @GoogleDeepMind (Professor Freddie Manners and @GSalafatinos, hosted by the Blueshift team) and Professors Jim Bryan, Balazs Elek, and Ravi Vakil. arxiv.org/abs/2601.07222
55 replies · 328 reposts · 1.7K likes · 573.4K views
Ryan Mickler @ryanmickler
Kontorovich, "The Shape of Math To Come": "This paper will discuss a vision for what research mathematics may look like in the age we now seem to be entering, of AI and formalization." arxiv.org/pdf/2510.15924
0 replies · 0 reposts · 0 likes · 201 views
Ryan Mickler reposted
Sebastien Bubeck @SebastienBubeck
My posts last week created a lot of unnecessary confusion*, so today I would like to do a deep dive on one example to explain why I was so excited. In short, it's not about AIs discovering new results on their own, but rather about how tools like GPT-5 can help researchers navigate, connect, and understand our existing body of knowledge in ways that were never possible before (or at least were much, much more time-consuming). Note that I did not pick the most impressive example (we will discuss that one at a later time), but rather one that illustrates many points at play that might have eluded people who see literature search as an embarrassingly trivial activity.

Meet Erdős problem #1043: erdosproblems.com/forum/thread/1…. This problem appeared in a paper by Erdős, Herzog, and Piranian in 1958 [EHP58]. It asks the following beautiful question: consider a set in the complex plane defined as the pre-image of the unit ball under a complex polynomial with leading coefficient 1. Is there at least one direction in which the width of this set is smaller than 2? (2 is of course the best one can hope for: if the polynomial is a monomial, then this set is the unit ball, and so the width is 2 in all directions.)

The problem didn't stand for very long: just three years later, Pommerenke wrote a paper [Po61] solving problem #1043 (with a counterexample), and that's what GPT-5 surfaced when asked this question. So what's the big deal? Well, a couple of things:

1) [EHP58] does not contain a single problem but sixteen. [Po61] says in the introduction that it will solve a few problems from [EHP58] but does NOT discuss problem #1043. In fact, my understanding is that experts (at least in combinatorics) who knew about both [Po61] and problem #1043 did not know that the solution to the latter could be found in the former. This is quite clear on erdosproblems.com itself: problems 1038, 1039, 1045, and 1047 all reference [Po61], yet #1043 was not listed as having any connection to it. Further evidence that this had been at least partially forgotten: on MathSciNet (MR0151580), the review of [Po61] attempts to list all the problems solved there and does not mention #1043 either.

2) The solution to #1043 is found in the middle of the paper, sandwiched between the proof of Theorem 6 and the statement of Theorem 7, as an off-hand comment (see picture). To find it you need to know this paper really well, and to have read it fully and carefully. I'm sure many people in the 1960s knew about it, but 60 years later the set of people aware of this brief comment in the middle of a 1961 paper is much smaller. That's where the power of a "super-human search" lies, and this is way beyond any search-index capability (obviously; in fact it's beyond the capabilities of the previous generation of LLMs). You need to read and understand the paper.

3) But there is more: the paper says that the proof follows by invoking [10, p. 73]. This is very important, because in math it's not so much about the result itself as about the understanding that comes with it (and with its proof). So what is [10]? It's a previous paper by the same author, written in German... and here again something truly accelerating happened: GPT-5 translated the paper and explained the proof in modern language. I believe this is indeed very much accelerating.

This is just one example, and each example has its own interesting story. I have seen similar moments where GPT-5 makes connections between very different fields, where the same results were proven in completely different languages (e.g., game theory versus high-dimensional geometry), sometimes 20 years apart.

This is not about AI discovering new knowledge; it is about AI making all of the scientific literature come ALIVE: linking proofs, translations, and partially forgotten results so existing ideas can be understood and built upon more easily. When that happens, science moves forward with greater context and continuity. In my view it's a game changer for the scientific community.

*About the confusion, which I again apologize for, I made three mistakes: i) I assumed full context from the reader: I was quoting a tweet that was itself quoting my tweet from October 11, and that latter tweet clearly stated that this was only about literature search; but it is totally understandable that this nested quoting could lead to misreadings, and I should have realized that. ii) The original (deleted) tweet was seriously lacking in content, and this is probably the biggest problem. By trying to tell a complex story in just a few characters, I missed the mark. I will not do that again; rather, as I have always done, I will explain as many details as I can. This is vital given the stakes of the AI debate at the moment. iii) When I said in the October 11 tweet that "it solved [a problem] by realizing that it had actually been solved 20 years ago," this was obviously meant as tongue-in-cheek. However, I now recognize that this moment calls for a more serious tone.
[image]
189 replies · 223 reposts · 1.7K likes · 617K views
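The width question in the thread above is easy to probe numerically. A minimal brute-force sketch (mine, not from the thread): sample a grid over the complex plane, keep the points where |p(z)| <= 1, and measure the extent of that set along each direction. For the monomial p(z) = z^2 the set is the closed unit disk and every directional width is 2, matching the parenthetical remark in the thread.

```python
import math

def width_of_preimage(coeffs, lim=1.6, n_grid=200, n_dirs=60):
    """Estimate the minimum over directions of the width of {z : |p(z)| <= 1},
    where p has coefficients `coeffs` (highest degree first, leading coeff 1).
    Pure brute force: accuracy is limited by the grid spacing."""
    def p(z):                       # Horner evaluation of the polynomial
        acc = 0j
        for c in coeffs:
            acc = acc * z + c
        return acc

    step = 2 * lim / (n_grid - 1)
    pts = [complex(-lim + i * step, -lim + j * step)
           for i in range(n_grid) for j in range(n_grid)]
    pts = [z for z in pts if abs(p(z)) <= 1.0]

    best = float("inf")
    for k in range(n_dirs):         # directions theta in [0, pi)
        th = math.pi * k / n_dirs
        c, s = math.cos(th), math.sin(th)
        proj = [c * z.real + s * z.imag for z in pts]
        best = min(best, max(proj) - min(proj))
    return best

w = width_of_preimage([1, 0, 0])    # p(z) = z**2: pre-image is the unit disk, w ~ 2
```

Pommerenke's actual counterexample would be tested by plugging in his polynomial's coefficients; the construction itself is in [Po61] and is not reproduced here.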
Ryan Mickler reposted
Martin_DeVido @d33v33d0
A piece visualizing how Opus collapses from probability distribution into chosen words: "Probability Cascade"
[image]
7 replies · 13 reposts · 222 likes · 19.7K views
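For context on the mechanism the piece depicts: at each step a language model outputs a probability distribution over next tokens, and sampling "collapses" it to a single choice. A minimal illustrative sketch of generic temperature sampling (not Opus's actual decoding stack):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Softmax the logits (with temperature), then draw one index
    from the resulting categorical distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]
    r = rng.random()                             # inverse-CDF draw
    acc = 0.0
    for i, pr in enumerate(probs):
        acc += pr
        if r < acc:
            return i, probs
    return len(probs) - 1, probs                 # guard against rounding

rng = random.Random(0)
idx, probs = sample_token([2.0, 1.0, 0.1], temperature=0.7, rng=rng)
```

Lower temperature sharpens the distribution toward the top logit; higher temperature flattens it, which is the "cascade" of competing candidates the artwork visualizes.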
Ryan Mickler reposted
Paata Ivanisvili @PI010101
GPT-5 Pro found a counterexample to the NICD-with-erasures majority optimality (Simons list, p.25). simons.berkeley.edu/sites/default/… At p=0.4, n=5, f(x) = sign(x_1-3x_2+x_3-x_4+3x_5) gives E|f(x)|=0.43024 vs best majority 0.42904.
[two images]
55 replies · 212 reposts · 1.5K likes · 772.2K views
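The quoted numbers can be checked by exact enumeration. A sketch (my reconstruction: I read "p=0.4" as each coordinate of x being revealed independently with probability 0.4, and the score as E_z |E[f(x) | z]|; this convention does reproduce the quoted majority value 0.42904):

```python
from itertools import product

def e_abs_cond(w, p):
    """E_z | E[ sign(w.x) | z ] | for x uniform on {-1,+1}^n, where z reveals
    each coordinate of x independently with probability p (else erased).
    Exact enumeration; assumes w.x is never zero (true for both w below,
    since each w has odd coefficient sum)."""
    n = len(w)
    total = 0.0
    for mask in product([0, 1], repeat=n):            # 1 = coordinate revealed
        k = sum(mask)
        p_mask = p**k * (1 - p) ** (n - k)
        rev = [i for i in range(n) if mask[i]]
        hid = [i for i in range(n) if not mask[i]]
        for rv in product([-1, 1], repeat=k):         # revealed values, each w.p. 2**-k
            s = 0
            for hv in product([-1, 1], repeat=n - k): # average over hidden values
                dot = sum(w[i] * v for i, v in zip(rev, rv))
                dot += sum(w[i] * v for i, v in zip(hid, hv))
                s += 1 if dot > 0 else -1
            total += p_mask * abs(s / 2 ** (n - k)) / 2**k
    return total

maj = e_abs_cond([1, 1, 1, 1, 1], 0.4)      # ~0.42904 (best majority)
wtd = e_abs_cond([1, -3, 1, -1, 3], 0.4)    # ~0.43024 (the claimed counterexample)
```

Under this convention the weighted sign function beats the best majority by about 0.0012, as the tweet states.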
Ryan Mickler @ryanmickler
We used to be a civilisation…
[image]
0 replies · 0 reposts · 0 likes · 68 views
Ryan Mickler reposted
Math, Inc. @mathematics_inc
Today we're announcing Gauss, our first autoformalization agent that just completed Terry Tao & Alex Kontorovich's Strong Prime Number Theorem project in 3 weeks—an effort that took human experts 18+ months of partial progress.
81 replies · 475 reposts · 3K likes · 1.3M views