Daniel Litt

31K posts

@littmath

Assistant professor (of mathematics) at the University of Toronto. "Tireless math ronin." Algebraic geometry, number theory, etc. He/him.

Toronto, Ontario · Joined August 2010
913 Following · 56.7K Followers

Pinned Tweet
Daniel Litt@littmath·
New paper with Josh Lam, about which I'm really excited! I want to try to briefly explain what the point is in this thread.
Daniel Litt@littmath·
Oh, I think I misunderstood your comment about 20-80%; I thought you meant that 20-80% of our enjoyment comes from problem-solving, whereas I think you meant "20-80% of mathematicians are primarily interested in problem-solving." This sounds right to me; maybe even more than 50%. As for the future, I think there are some meaningful differences from undergrad, though the precise equilibrium shape of math research probably depends on the shape of AI capabilities in a way that seems very hard to predict. At least one is this: I think it's likely that many fundamental questions we now consider important will remain open even as AIs become very capable (and in fact, that AIs will help us discover many more fundamental questions). So we will still be occupied in large part by these open questions, rather than primarily trying to understand predigested mathematics.
jacob tsimerman@Jacob_Tsimerman·
Re: problem-solving, I agree that we have a different experience [I enjoy it and you don't]. I'm curious whether you actually disagree with my 20-80% estimate? I would be curious what your picture of continuing to do mathematics in an AI-is-better-than-us world looks like, and whether it's meaningfully different from being an undergraduate mathematician and digesting tons of math [I don't say this derogatorily, I loved undergrad!]. Asking someone to spell out a vision of the future is a big ask, so even-less-pressure-than-usual to actually do this.
Daniel Litt@littmath·
This is a characteristically thoughtful and coherent account of mathematics from my colleague Jacob, and I agree with much of what he writes. But I want to push back on some aspects, which don't accord with my experience of or motivation for doing mathematics.

Problem-solving

I fully agree with Jacob that, as currently practiced, problem-solving is a fundamental aspect of doing mathematics; like Jacob, I identify as a "problem-solver" more than a "theory-builder." (A related axis: I identify more as a "frog" than a "bird.")

Why do we solve problems? For some of us, it's more or less about enjoyment. That is NOT why I solve problems. I enjoy parts of that process: getting the solution, some little moments of understanding along the way. But my primary emotional experience of problem-solving is not fun: it's frustration. I try to understand something and get confused and I HATE that feeling, and need to resolve it. For a while my bio on here read "forever confused" -- that's not an exaggeration.

I think the main reason I (and many other mathematicians) solve problems is that it's the only way we know how to ground ourselves in mathematical truth. Without solving problems and working out examples, our work inevitably devolves into bullshit.

The activity of mathematics

So is 80%+ of mathematics about problem-solving? I think this is a coherent account of mathematics, but it's not my experience. Like Jacob and many other mathematicians, my work is indeed guided by some big problems: for me, the Grothendieck-Katz p-curvature conjecture, some questions about mapping class groups, some questions about fundamental groups of algebraic varieties. Many of these problems have occupied me for a decade+ now. My experience of thinking about these problems is, perhaps paradoxically, not about "problem-solving." Rather, these problems benchmark our failure to understand certain fundamental phenomena: differential equations, surfaces, polynomials.
It's useful to have rigorously stated problems like this to guide the field, but I think they have relatively little influence on my day-to-day work. That looks more like: trying to identify the most basic situation in which our understanding fails, and developing it in that basic situation. In this model, problem-solving is secondary: my typical experience is that I think I understand something new, often non-rigorously, and then try to operationalize it to solve some problems, both to test the correctness of this understanding and to measure its effectiveness. It's not uncommon in this model for a problem and its solution to appear at the exact same time. In fact, for me, it's somewhat unusual to write down a rigorous statement of a lemma that I do not already know how to prove, though this does of course happen.

Oracles

Jacob proposes a thought experiment, where one has access to an AI oracle that can solve rigorously-stated problems better than humans but has less capability in other areas of the mathematical process. Like him, I do not expect this to be the long-term situation--eventually I expect AI mathematics to exceed humans in every mathematical capability--but let's run with it for a second. What would mathematical activity look like with such an oracle? Jacob writes:

"Well, you make a definition, and want to know if it’s the right one. You immediately ask your oracle a thousand questions. From “are these basic properties true” to “ooh, so is this deep conjecture true?” and start getting back answers, and amending your definitions. You could invent and resolve entire research directions in days. But the confusion you would have had to push through to flesh out your theory would largely (probably not entirely) be instantly resolved and the whole process sped up tremendously by your oracle. A big part of the process would be gone."

I think this is where I most strongly disagree with what he writes.
I think you start getting back answers, and then to continue, you have to UNDERSTAND them. And the dirty little secret of mathematics is that it's impossible to understand what anyone else is saying. Conveying one's mathematical intuition is incredibly hard: at least for me, the experience of acquiring understanding from someone else's work is nearly identical to that of discovering it on my own.

Of course, what the mathematics of the future will look like depends (like all AI prognostication) on the precise shape of future AI capabilities; I do not think the picture of an uncreative oracle is realistic. I expect future AI mathematicians to be creative, and also, not to be oracles. I think a lot of the questions we view as fundamental will remain open for some time. Basic mathematical questions can be arbitrarily hard! And we will still want to understand them.

Doing math

Most of what I love about the practice of mathematics is: talking to colleagues about math, learning and understanding new things, developing intuition and resolving confusion, etc. My sense is that these parts of math survive with arbitrarily capable AI tools. I also like a lot of other aspects of the job: I get paid and can afford to eat, I have a lot of intellectual freedom, I have great colleagues (like Jacob), I don't have a boss and can work sprawled out on a couch.

Absent a real attempt by the profession to adapt to the coming changes, it's possible that the shape of the profession changes in a way that makes it much less enjoyable, even as most of what I like about doing math survives. There are questions as to why society should support human mathematicians if and when machines have an absolute advantage over us in all aspects of mathematics. I think we'll have an advantage in some aspects of mathematics for some time, but it's worth thinking about this endpoint for the profession, as it is for all other professions.
That said, I think there's a future here where we continue to ask basic questions about fundamental mathematical phenomena. Sometimes we get an answer from a machine, and sometimes the machine gets stuck, and so do we. And when we get stuck, we get frustrated--we get an itch--and we don't give up.
jacob tsimerman@Jacob_Tsimerman

I want to clarify my thoughts on problem-solving in mathematics, and the potential consequences of AI for the field. For context, I’m quoting here my post in reply to Daniel Litt (who, echoing others, I find very clear, grounded, and insightful in his thinking).

The claim

The short version is that I think problem-solving is an immense and pervasive part of modern mathematical research. Consequently, if human problem-solving disappears by virtue of the AIs becoming strictly and substantially better at it, then most of the time currently spent by modern mathematical researchers will have to be spent on an activity that is altogether pretty different. Whether such an activity is viable as a professional endeavour is something I am unsure of, but I strongly encourage others to think about and try to envision it, so that if/when the time comes, we can steer such a future into being.

Allow me to make this somewhat concrete: by problem-solving I mean questions of the form “is T true? If so, find a proof. If not, find a disproof.” where T is a precise mathematical statement. I’ll also include “find an example of S, if there is one” where S is some structure (variety/category/property/isomorphism/….).

The argument

Ok. Now as I said (and some have echoed), I spend ~all of my time with problem-solving as my primary goal. This has sub-goals, but my entire main research field disappears if someone solves the Zilber-Pink Conjecture in its more general form. This is a single conjecture (precisely stated!) and lots of mathematicians, postdocs, and graduate students are engaged in picking apart special cases of it, trying strategies, finding analogies to develop intuition, etc. Of course, lots of motivation and intuition and analogizing and understanding have gone into deciding to make the ZP conjecture a focus! But the fact remains that this is now what is being worked on ~all of the time by this community. This is true of many mathematicians.
They have a problem (or ten) and spend most of their time working on it. If someone solves it, they have to find a different problem. This can be a big, disorienting process involving a lot of energy, and is neither trivial nor always fun (though often rewarding in the end).

People have written a lot about theory-building vs. problem-solving, and I want to first of all clarify that I have nothing against theory-building or theory-builders! It is a valuable part of mathematics, and while there are differences in perspective between the “camps,” there is way more mutual respect and agreement. However, I gather there is a perception that theory-builders spend most of their time not problem-solving, and I think this is largely untrue. Now, I’m not primarily a theory-builder (though I’ve partaken a LITTLE BIT by necessity), so I am outside of my comfort zone. As such, I apologize for mistakes and welcome corrections! But theory-building constantly runs through problem-solving.

Let’s say you want to define the right notion of a cohomology theory. Of course you must make candidate definitions. But then what does it mean for it to be the right one? Well, you start asking if it has natural properties. These are T statements. Does it satisfy a Künneth formula? Is it functorial in the right way? When you have the wrong one you have to find the properties it’s missing, and when you have the right one you have to prove that it indeed has those properties. Again, I am not saying, nor do I believe, that this makes problem-solving “real math” and theory-building lesser. I am just trying to draw attention to the way I think research mathematicians operate, and mathematics is practiced.

To put all this a different way, imagine you had access to an AI oracle that could resolve statements T, but somehow lacked any creativity to build technology or make definitions (I think this is unlikely, but for the purpose of this thought experiment let's imagine it).
How would your mathematics change, if you were a theory-builder? Well, you make a definition, and want to know if it’s the right one. You immediately ask your oracle a thousand questions. From “are these basic properties true” to “ooh, so is this deep conjecture true?” and start getting back answers, and amending your definitions. You could invent and resolve entire research directions in days. But the confusion you would have had to push through to flesh out your theory would largely (probably not entirely) be instantly resolved, and the whole process sped up tremendously by your oracle. A big part of the process would be gone. This is very, very different from modern mathematics.

One more thought

This post is too long already, but I’ve seen some people say that they only do mathematics to find truth, and others valorize that as the only virtuous way to be. I do not do mathematics only to find truth. I do it largely because I enjoy it and I am good at it. I also find it beautiful and am grateful I get to spend my days understanding beautiful things. But I enjoy the challenge, the process, resolving confusions, finding strategies, grappling with problems. I would like to push for this being de-stigmatized. Mathematicians are people who need money, housing, food, love, exercise, and a great deal of other stuff, including various forms of meaning. There are many people whose primary enjoyment of math comes through problem-solving in one of its incarnations. If that disappears, that is not a trivial issue, and many of them might not want to do it anymore (even if there were some way to proceed).

Daniel Litt@littmath·
Re: theory-building, I think the main disagreement is that I don't see the theory-building vs. problem-solving distinction as too relevant to the underlying question about capabilities; the initial comment I think you were reacting to was just a throwaway thought on current capabilities. I think we probably agree that AIs doing high-quality mathematics does not necessarily remove the step where humans (presumably) still want to understand what's going on. Re: enjoyment, I think we do really disagree! What I was trying to communicate is that I do not really enjoy problem-solving. Rather, I find it a very uncomfortable experience, and much of my motivation to solve problems comes from trying to remove that discomfort. Because of this, I find it plausible that I will enjoy math *more* if AIs do the bulk of the problem-solving! I/we will still have to work to develop some intuition for and understanding of what's going on, but hopefully with much less frustration.
jacob tsimerman@Jacob_Tsimerman·
Thanks for the reply! I like your account; we differ in philosophy though I'm not sure whether it amounts to much difference in our process. I'm trying to get a better sense of where we strongly disagree. Is it that I didn't put enough emphasis in my thought experiment on understanding what the oracle communicates back? I agree with you that this step exists, is necessary, and is very enjoyable - perhaps the most enjoyable part! I was mainly arguing that the oracle removes the primary bottlenecks (as they currently exist) for theory-building in the same way that it does for problem-solving. I'm curious if you agree with this. I was also trying to communicate that I think problem-solving is most of the enjoyment of math for many mathematicians [but not all! 20-80% probably]. I think you agree with this also, but am not sure?
Daniel Litt@littmath·
@lacker @blueblimpms @samth I think this sounds pretty cool! Maybe analogous to astronomy, where committees meet to decide where to focus radio telescopes etc.
Kevin Lacker@lacker·
@blueblimpms @samth @littmath (I think in practice there will be another category, where if we spend X million dollars on computing, the AI will be able to solve additional math problems. To me this is very exciting but I can understand traditional mathematicians not being so into it!)
Daniel Litt retweeted
💙eren 𐔌˙. 💙@erenspace_·
“the dirty little secret of mathematics is that it's impossible to understand what anyone else is saying. Conveying one's mathematical intuition is incredibly hard […] acquiring understanding from someone else's work is nearly identical to […] discovering it on my own.”
Daniel Litt retweeted
Wojtek Kopczuk 🇵🇱🇺🇦 and 🇺🇲
I feel seen: "But my primary emotional experience of problem-solving is not fun: it's frustration. I try to understand something and get confused and I HATE that feeling, and need to resolve it.“
Daniel Litt retweeted
Sam Tobin-Hochstadt
@littmath My sense is that the current trajectory is toward "can solve complicated problems basically within current theories" which is really different from an oracle for any well specified problem.
Julian Bruns@BrunsJulian1541·
@littmath since we are in a community, no single member needs a mental model of, or to internalise, the goal of the entire profession. In fact, it can be beneficial if different mathematicians have different motivations, since it makes them focus on different aspects and discover different ideas.
Daniel Litt@littmath·
@samth FWIW I basically agree this is the current trajectory, but I also don’t see obvious obstructions to going beyond this in the next few years. I tried to gently push back on the oracle framing for this reason, though.
Sam Tobin-Hochstadt
@littmath Maybe this is just another instance of how all discussion about AI just reduces to questions about the trajectory of capabilities.
Daniel Litt@littmath·
@itaisher Right, just trying to use my best judgment! But my experience so far has been that (IMO) I've been *overestimating* whatever secret sauce humans have, not underestimating it.
Itai Sher@itaisher·
@littmath It may seem qualitatively similar to parts of human thought processes, but we don’t have a good inventory of how we approach problems so I would think it is hard to know how far it is from complete for us.
Daniel Litt@littmath·
@itaisher That's true, and that's why I think humans will have some intangible advantages for some time. But we can also look at chain of thought from AI doing math and see that it looks pretty much like people doing math, so at present there doesn't seem to be this kind of divergence.
Itai Sher@itaisher·
@littmath The space of possible theorems is vast. Maybe humans and AI will search through it in different ways. I don’t think mere mechanical possibility answers the question.
Daniel Litt@littmath·
Incidentally I somewhat regret the focus on "problem-solving" vs. "theory-building," which I think is more or less incidental to the underlying discussion about AI capabilities. I fully expect AI to do theory-building too, it just hasn't done so meaningfully yet.
Daniel Litt@littmath·
I think this will happen *eventually* because the fact that people can do X is evidence that X is mechanically possible, and it seems pretty clear that people are not fully optimized for high quality mathematics. That I think it will happen in our lifetimes is just a function of watching various lines on graphs continue to go up without a principled way of predicting when they'll level off. But I wouldn't say this latter view is particularly strongly held.
1
0
15
743
Itai Sher
Itai Sher@itaisher·
@littmath One underlying assumption that isn’t clear to me is the assumption that AI will eventually be superior to humans in every mathematical capability. What is the argument for that? Also there is the human-AI combination, which may be superior to each individually.
5
0
6
1.1K
Daniel Litt
Daniel Litt@littmath·
@Jacob_Tsimerman (Especially surprising since I think of myself as a “problem-solver” and not a “theory-builder” as well!)
0
0
23
1.1K
Daniel Litt
Daniel Litt@littmath·
@Jacob_Tsimerman OK, this is actually quite different from my experience of doing math--much more different than I would have expected! About to go to sleep but will try to write something about the differences tomorrow.
1
0
62
3.2K
Daniel Litt retweeted
jacob tsimerman
jacob tsimerman@Jacob_Tsimerman·
I want to clarify my thoughts on problem-solving in mathematics, and the potential consequences of AI for the field. For context, I’m quoting here my post in reply to Daniel Litt (who, echoing others, I find very clear, grounded, and insightful in his thinking).

The claim

The short version is that I think problem-solving is an immense and pervasive part of modern mathematical research. Consequently, if human problem-solving disappears by virtue of the AIs becoming strictly and substantially better at it, then most of the time currently spent by modern mathematical researchers will have to be spent on an activity that is altogether pretty different. Whether such an activity is viable as a professional endeavour is something I am unsure of, but I strongly encourage others to think about it and try to envision it, so that if/when the time comes, we can steer such a future into being.

Allow me to make this somewhat concrete: by problem-solving I mean questions of the form “is T true? If so, find a proof. If not, find a disproof,” where T is a precise mathematical statement. I’ll also include “find an example of S, if there is one,” where S is some structure (variety/category/property/isomorphism/…).

The argument

OK. Now, as I said (and some have echoed), I spend ~all of my time with problem-solving as my primary goal. This has sub-goals, but my entire main research field disappears if someone solves the Zilber-Pink Conjecture in its more general form. This is a single conjecture (precisely stated!), and lots of mathematicians, postdocs, and graduate students are engaged in picking apart special cases of it, trying strategies, finding analogies to develop intuition, etc. Of course, lots of motivation and intuition and analogizing and understanding have gone into deciding to make the ZP conjecture a focus! But the fact remains that this is now what is being worked on ~all of the time by this community. This is true of many mathematicians.

They have a problem (or ten) and spend most of their time working on it. If someone solves it, they have to find a different problem. This can be a big, disorienting process involving a lot of energy, and is neither trivial nor always fun (though often rewarding in the end).

People have written a lot about theory-building vs. problem-solving, and I want first of all to clarify that I have nothing against theory-building or theory-builders! It is a valuable part of mathematics, and while there are differences in perspective between the “camps,” there is far more mutual respect and agreement. However, I gather there is a perception that theory-builders spend most of their time not problem-solving, and I think this is largely untrue. Now, I’m not primarily a theory-builder (though I’ve partaken a LITTLE BIT by necessity), so I am outside of my comfort zone. As such, I apologize for mistakes and welcome corrections! But theory-building constantly runs through problem-solving. Let’s say you want to define the right notion of a cohomology theory. Of course you must make candidate definitions. But then what does it mean for it to be the right one? Well, you start asking whether it has natural properties. These are statements T. Does it satisfy a Künneth formula? Is it functorial in the right way? When you have the wrong one you have to find the properties it’s missing, and when you have the right one you have to prove that it indeed has those properties. Again, I am not saying, nor do I believe, that this makes problem-solving “real math” and theory-building lesser. I am just trying to draw attention to the way I think research mathematicians operate, and mathematics is practiced.

To put all this a different way, imagine you had access to an AI oracle that could resolve statements T, but somehow lacked any creativity to build technology or make definitions (I think this is unlikely, but for the purposes of this thought experiment let’s imagine it).

How would your mathematics change, if you were a theory-builder? Well, you make a definition, and want to know if it’s the right one. You immediately ask your oracle a thousand questions, from “are these basic properties true?” to “ooh, so is this deep conjecture true?”, and start getting back answers and amending your definitions. You could invent and resolve entire research directions in days. But the confusion you would have had to push through to flesh out your theory would largely (probably not entirely) be instantly resolved, and the whole process sped up tremendously by your oracle. A big part of the process would be gone. This is very, very different from modern mathematics.

One more thought

This post is too long already, but I’ve seen some people say that they only do mathematics to find truth, and others valorize that as the only virtuous way to be. I do not do mathematics only to find truth. I do it largely because I enjoy it and I am good at it. I also find it beautiful, and am grateful I get to spend my days understanding beautiful things. But I enjoy the challenge, the process, resolving confusions, finding strategies, grappling with problems. I would like to push for this being de-stigmatized. Mathematicians are people who need money, housing, food, love, exercise, and a great deal of other stuff, including various forms of meaning. There are many people whose primary enjoyment of math comes through problem-solving in one of its incarnations. If that disappears, that is not a trivial issue, and many of them might not want to do it anymore (even if there were some way to proceed).
jacob tsimerman@Jacob_Tsimerman

Hey @littmath, I've seen you post this sentiment a lot, and want to push back a bit (in my formal twitter-posting debut!). So my math career has almost entirely been "solve this problem". Now, of course there is an enormous amount of other activities, such as formulating toy problems, identifying which ones are worthwhile, theory-building, and the selection and formulation of the original problems. But typically, for 80%+ of my time, I know what problem I'm trying to solve and am just trying to solve it. I've seen the view expressed a lot that this is sort of not-the-main-point, and is just a part of figuring out the appropriate mathematical structure and phenomena. It's not that this is untenable, but I feel this is a somewhat overstated perspective. For one thing, talks (almost) always start/end with "here is the theorem I have proven". It's not that the view you have of math isn't coherent, but I could equally formulate the point of math as:

1. Find a fun phenomenon.
2. Make a problem capturing it as nicely as possible.
3. Solve it, possibly by building theories and formulating sub-problems.

so that problem-solving becomes the central point of math, and theory-building a side-effect. Indeed, mathematicians often measure the value of a theory by the problems it can solve. This is at least an important part of a theory. I actually feel you and I are not far apart in our mathematical taste, so I'm curious how much we actually disagree here (I suspect the answer is "some"). Sorry for the overly long post! I am very much a twitter-newbie and will have to adjust.

21
44
330
67.5K
Daniel Litt retweeted
Thomas Bloom
Thomas Bloom@thomasfbloom·
I hope that, in all of the publicity around recent AI solutions of Erdos problems, at least a few people have actually read the maths and learned some of the theory behind e.g. primitive sets. The role of these problems as AI headlines is secondary to some beautiful mathematics!
3
8
100
5.9K
Daniel Litt retweeted
Alex Kontorovich
Alex Kontorovich@AlexKontorovich·
Congratulations to Vesselin Dimitrov and Yunqing Tang on winning the @brkthroughprize New Horizons in Mathematics Prize for their joint work with Frank Calegari on their solutions to the Unbounded Denominators Conjecture (for Fourier coefficients of modular forms on noncongruence groups) and the irrationality of L(2,χ_{−3})! Richly deserved!!
Caltech@Caltech

Two Caltech professors were awarded 2026 New Horizons Prizes as part of the Breakthrough Foundation's annual prize ceremony, known colloquially as the "Oscars of Science." caltech.edu/about/news/cal…

2
6
62
8.6K
Daniel Litt
Daniel Litt@littmath·
FWIW I fully expect what’s happening with Erdős problems to happen to other areas too, likely within the next year or so. When I say this hasn’t happened yet, that’s all that I mean!
14
10
301
18.5K