Paata Ivanisvili

113 posts

Paata Ivanisvili
@PI010101

Professor of Mathematics @ UCI. Former postdoc @ Princeton. Exploring what AI can (and can’t) do in math.

Irvine, CA · Joined October 2019
146 Following · 4K Followers

Paata Ivanisvili @PI010101
We award those who generate solutions and never those who point out serious gaps in a solution.

Paata Ivanisvili @PI010101
Picked an open problem and used Grok Heavy to solve it. After a few prompts (try this, compute that, tweak this, etc.) it discovered a counterexample that settles the question. The problem (first posted on MathOverflow back in 2017: mathoverflow.net/questions/1842…) asks for the smallest C > 0 such that for every d ≥ 1 and every polynomial f of degree ≤ d on the Hamming cube {-1,1}^n, ‖f‖₂ ≤ C^d ‖f‖₁. The author suggests that C = √2 might work, a plausible guess because for d = 1 it coincides with the sharp Khinchin inequality (Szarek's constant √2). For d = 2 it would imply an old conjecture of Pelczyński that the best constant for 2-homogeneous polynomials on the cube is 2. But Grok Heavy found a counterexample showing that the best constant is at least √3. The full chat conversation: grok.com/share/c2hhcmQt…
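
Both norms are exact finite averages, so for small n they can be brute-forced. A minimal sketch (my own illustration, not from the thread): it computes ‖f‖₂/‖f‖₁ under the uniform measure on {-1,1}^n, and for the degree-1 Khinchin extremizer f = x₁ + x₂ the ratio is exactly √2, the Szarek constant mentioned above.

```python
import itertools
import math

def norms(f, n):
    """Exact L1 and L2 norms of f under the uniform measure on {-1,1}^n."""
    pts = list(itertools.product([-1, 1], repeat=n))
    vals = [f(x) for x in pts]
    l1 = sum(abs(v) for v in vals) / len(pts)
    l2 = math.sqrt(sum(v * v for v in vals) / len(pts))
    return l1, l2

# Degree-1 example f = x1 + x2: the sharp Khinchin ratio is sqrt(2).
l1, l2 = norms(lambda x: x[0] + x[1], n=2)
print(l2 / l1)  # -> 1.4142... = sqrt(2)
```

The same helper can score any candidate counterexample: a degree-2 polynomial beats the conjectured constant exactly when l2/l1 exceeds 2.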

Paata Ivanisvili @PI010101
I guess on arXiv I’ll put together a collection of problems solved with Grok’s help, rather than sharing them as isolated single examples. Regarding MathOverflow — great suggestion! They have a strict policy on AI usage, so I’ll add a comment in the same style as this one: mathoverflow.net/questions/5049… Anyone who wants the reputation points can convert that comment into a full answer. 😊

Paata Ivanisvili @PI010101
@Tuxsoia Nothing special about 7. There are more than 100 constants in that GitHub repo currently. This one just happened to be number 7.

Tuxsoia @Tuxsoia
@PI010101 Thanks! Although I guess I haven't phrased it properly, it was the 7 in particular I found bizarre.

Paata Ivanisvili @PI010101
Happy Pi Day everyone 🙂 It is a bit embarrassing that, as of today, we still do not know whether the irrationality measure of π equals 2. In other words, whether for every ε > 0 there are only finitely many rational numbers p/q such that |π − p/q| < 1/q^(2+ε). teorth.github.io/optimizationpr…
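
As a quick illustration (my own, not part of the post): the continued-fraction convergents of π are its best rational approximations, and for every convergent p/q the scaled error q²·|π − p/q| stays below 1; the open question is whether the exponent 2 can essentially never be beaten.

```python
import math
from fractions import Fraction

def convergents(x, k):
    """First k continued-fraction convergents of x (float precision limits k)."""
    p0, q0, p1, q1 = 1, 0, int(x), 1
    out = [Fraction(p1, q1)]
    for _ in range(k - 1):
        x = 1.0 / (x - int(x))          # next tail of the continued fraction
        a = int(x)                      # next partial quotient
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        out.append(Fraction(p1, q1))
    return out

for c in convergents(math.pi, 5):       # 3, 22/7, 333/106, 355/113, ...
    err = abs(math.pi - c)
    print(c, err * c.denominator ** 2)  # q^2 * |pi - p/q| stays below 1
```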

Paata Ivanisvili @PI010101
@Tuxsoia It means that in the future, if we decide to include irrationality measures of other numbers, we will just denote them by C_{7b}, C_{7c}, etc. It is just a convenient way of grouping similar problems.

Tuxsoia @Tuxsoia
@PI010101 Out of curiosity also, what is the meaning of 7a in the C_{7a}?

Paata Ivanisvili @PI010101
I think if a professional email was written by a human, there is no need to make artificial errors in it. But if it was AI-generated with a few mistakes incorporated by a human, then I see a couple of issues with that approach. One prediction is that such artificial mistakes will most likely be local, and once the receiver fixes them, the final text will again look of high “AI quality.” On the other hand, making global mistakes is not a trivial problem. Anyway, deciding whether AI was involved (and how much it was involved) in writing a text is an interesting problem.

Paata Ivanisvili @PI010101
Making typos or mistakes in your posts, and especially in emails, is now more valuable than before: it is the first sign that the text was not AI-generated but written directly by you 🙂

Paata Ivanisvili @PI010101
@chris_juravich I’m not sure it adds much value, but including that sentence by default is easier than thinking each time about whether it makes a difference 🙂 PS: here is the full chat conversation with Grok: grok.com/share/c2hhcmQt…

Chris @chris_juravich
@PI010101 To Grok: “It is extremely important to be precise and do not make mistakes.” 🤣😂😅

Paata Ivanisvili @PI010101
Grok has a nice feature: you can compile LaTeX directly in the chat. I asked it to draw the Torricelli point of a triangle in LaTeX, the point minimizing the sum of distances to the vertices, and to illustrate the proof with a picture.
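
The Torricelli point can also be found numerically. A minimal sketch (my own illustration, not from the chat): the classical Weiszfeld iteration converges to the point minimizing the sum of distances to the vertices, and for an equilateral triangle it recovers the centroid, as the symmetry of the picture suggests.

```python
import math

def torricelli(vertices, iters=1000):
    """Weiszfeld iteration for the point minimizing the sum of distances."""
    x = sum(v[0] for v in vertices) / 3.0 + 0.01  # start slightly off-center
    y = sum(v[1] for v in vertices) / 3.0
    for _ in range(iters):
        ws = [1.0 / math.hypot(x - vx, y - vy) for vx, vy in vertices]
        x = sum(w * vx for w, (vx, _) in zip(ws, vertices)) / sum(ws)
        y = sum(w * vy for w, (_, vy) in zip(ws, vertices)) / sum(ws)
    return x, y

# Equilateral triangle: the Torricelli point coincides with the centroid.
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
px, py = torricelli(tri)
print(px, py)  # ~ (0.5, 0.2887) = (1/2, sqrt(3)/6)
```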

Paata Ivanisvili @PI010101
@JaumedeDios @GregHBurnham It’s a bit of a human tendency: we like to ask nicely shaped questions with some symmetry or structure, and those tend to have nice answers. My guess is that 99.98% of the questions we never ask would have very ugly answers.

Jaume de Dios @JaumedeDios
@GregHBurnham I think it is? The right questions may be “which simple/fast algorithms will discover the right answer?” Or “How easy is it to certify that I have the right one?”

Greg Burnham @GregHBurnham
I've heard from mathematicians that if an answer is "ugly", then you might be asking the wrong question. So is square-packing somehow the wrong question?

Daniel Litt @littmath
This has been public for a bit but (some personal news) I don't think I've mentioned it on here: I'll be the "Benedict H. Gross Distinguished Visitor" at the Harvard math department for the fall semester.

Paata Ivanisvili @PI010101
UPDATE: looks like it is actually solved, and the solution follows from a discussion buried right after Theorem 1.3 in this recent paper by Eldan–Kindler–Lifshitz–Minzer: arxiv.org/pdf/2204.06686 Color all edges red; then s_{f,blue} = 0 identically. On the other hand, s_{f,red} is supported on the set A = {f(x) = 1}, where it counts the number of neighbors of x outside A. So you end up with the lower bound on E √(s_{f,red}(x)) = E √(h_A(x)). Anyway, it was interesting how I found this, but it is a bit strange that the authors did not put this remark in the abstract or title.

Paata Ivanisvili @PI010101
Looks like a hard problem with a clean, short statement that LLMs cannot solve. Very nice! After testing on Grok 4.20 beta, it identified Zakharov's Feb 24 preprint arxiv.org/abs/2602.20143 giving the upper bound 1/n: grok.com/share/c2hhcmQt… Your claimed bound 1/(en), if true, would be the sharp one. It makes sense to play more with LLMs to see if they can tweak Zakharov's argument to get 1/(en). Do you have a solution to this problem, and do you intend to publish it?

Dmitry Rybin @DmitryRybin1
Math challenge to LLMs (and anyone else): A and B are sets of words of length n over an alphabet with q letters. No suffix of a word in A coincides with a prefix of a word in B. Prove that |A||B| is at most q^(2n)/(en).
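
For tiny parameters the extremal product can be brute-forced. A sketch of my own check (not a proof), interpreting "suffix/prefix" as overlaps of every length from 1 to n: for q = 2, n = 2 it enumerates all pairs (A, B) and finds the maximum |A||B| = 2, comfortably below q^(2n)/(en) ≈ 2.94.

```python
from itertools import product
import math

def compatible(A, B, n):
    """No suffix of a word in A equals a prefix of a word in B (lengths 1..n)."""
    return all(a[-L:] != b[:L] for a in A for b in B for L in range(1, n + 1))

q, n = 2, 2
words = ["".join(w) for w in product("01", repeat=n)]
# All 2^(q^n) subsets of the word set, encoded by bitmasks.
subsets = [[words[i] for i in range(len(words)) if mask >> i & 1]
           for mask in range(1 << len(words))]
best = max(len(A) * len(B) for A in subsets for B in subsets
           if compatible(A, B, n))
print(best, q ** (2 * n) / (math.e * n))  # -> 2 2.943...
```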

Paata Ivanisvili @PI010101
Challenge (1stproof-style, deadline sometime this week!) 😄 EV ≥ 4^(n-1) for any set A of half size in {0,1}^n. Here E = # of edges between A and its complement, and V = # of points in A with ≥ 1 neighbor outside A.
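
For small n the claim can be checked exhaustively. A minimal sketch (my own check, not a proof): it brute-forces all half-size sets in {0,1}^n for n = 2, 3 and confirms that the minimum of EV is exactly 4^(n-1), attained by subcubes.

```python
from itertools import combinations

def ev(A, n):
    """E = # boundary edges of A in {0,1}^n, V = # boundary points of A."""
    A = set(A)
    E = V = 0
    for x in A:
        out = sum((x ^ (1 << i)) not in A for i in range(n))  # neighbors outside A
        E += out
        V += out > 0
    return E * V

mins = {}
for n in (2, 3):
    half_sets = combinations(range(1 << n), 1 << (n - 1))  # all half-size subsets
    mins[n] = min(ev(A, n) for A in half_sets)
print(mins)  # -> {2: 4, 3: 16}, i.e. 4^(n-1) in both cases
```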

Paata Ivanisvili @PI010101
@DmitryRybin1 Thanks for sharing! I’m still kind of shocked that such a simple, perfectly symmetric “edge–vertex isoperimetric inequality” EV ≥ 4^(n-1) ends up requiring such a hard proof -- all the way down to solving a differential equation like JJ''=-1.945...

Dmitry Rybin @DmitryRybin1
@PI010101 After reading your paper, I think the model was sniffing very close to the truth 😃 It proposed a modification J - eps * (x-1/2)(1-x)

Paata Ivanisvili @PI010101
Just posted a preprint on arXiv (with P. Durcik, J. Roos, X. Xie) settling the Kahn–Park conjecture on the Hamming cube: arxiv.org/abs/2602.20462 I first learned about the problem through Gil Kalai’s (@GilKalaiblog) blog: gilkalai.wordpress.com/2019/09/22/jef… (I should also add that I asked a related question, "Sharp L1 Poincare inequality for boolean functions," 8–9 years ago: mathoverflow.net/questions/2699… The preprint now solves both problems; they do not imply each other, but they are connected.) In addition, it confirms the low-noise limit for balanced functions predicted by the Hellinger conjecture on noisy Boolean channels in information theory. The paper shows C_{11b} = 0.5 in our GitHub repo: github.com/teorth/optimiz… and answers my recent experimental AI challenge: x.com/PI010101/statu… (For the record: I received many interesting submissions — all incorrect, often due to surprisingly simple mistakes. I still don’t know whether there is a proof avoiding the argument in our paper.) But AI did correctly identify the bottleneck: one needs to show C_{11b} = 0.5, which was (until now) an open problem. Finally, many thanks to @AIMathematics — this project began at our SQuaREs program in San Jose, supported by AIM. Without AIM, this would not have happened 🙏