blueblimp
@blueblimpms
2.9K posts
Joined March 2009
85 Following · 159 Followers
blueblimp
blueblimp@blueblimpms·
@johnarnold I happened to see an ad on X recently for a service where you fill out some small form, after which it generates the whole book (including illustrations) and publishes it for you. So it's industrialized spam at this point.
1 · 0 · 58 · 2.5K
John Arnold
John Arnold@johnarnold·
hahahahhaha
John Arnold tweet media
79 · 486 · 6.2K · 358.8K
blueblimp
blueblimp@blueblimpms·
@AndrewCurran_ The word "settlement" is interesting here because it's not clear what Musk is trying to get out of this besides destroying OpenAI as we know it (which would be bad). He has a reasonable case that he should receive equity, but does he _want_ equity?
0 · 0 · 1 · 731
Andrew Curran
Andrew Curran@AndrewCurran_·
OpenAI has filed a court statement alleging that Elon Musk contacted Greg Brockman two days before the trial to gauge interest in a settlement, and when rebuffed Elon said 'By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be'.
Andrew Curran tweet media
63 · 40 · 510 · 71.7K
blueblimp
blueblimp@blueblimpms·
@lacker @samth @littmath It may be unsatisfying to have economics in the definition, but I think it's unavoidable. The question is "are humans a useful resource for mathematical problem-solving", and economics is the study of resource allocation. Even reducing it to physics would be too far, I think.
0 · 0 · 1 · 11
blueblimp
blueblimp@blueblimpms·
@lacker @samth @littmath "Instantly" still runs into the cryptography issue (where a human could grind it out given a sufficiently long time). How about: for every problem P, if any process can solve it in time T for $C, then a non-human-in-the-loop process can too (for the same T and C).
1 · 0 · 1 · 19
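One way to write the proposed criterion as a formula (the symbols here are my own shorthand, not from the thread: \(\Pi\) is the set of all problem-solving processes and \(\Pi_{\neg h} \subseteq \Pi\) the processes with no human in the loop):

```latex
% Hypothetical formalization of the "oracle" criterion from the thread.
\forall P\;\forall T\;\forall C:\quad
\Bigl(\exists\, \pi \in \Pi :\ \pi \text{ solves } P \text{ in time} \le T \text{ at cost} \le C\Bigr)
\;\Longrightarrow\;
\Bigl(\exists\, \pi' \in \Pi_{\neg h} :\ \pi' \text{ solves } P \text{ in time} \le T \text{ at cost} \le C\Bigr)
```

Quantifying over time and cost together sidesteps the cryptography objection: a human grinding out a cipher over centuries does not witness the left-hand side for any practically relevant (T, C).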
blueblimp reposted
Daniel Litt
Daniel Litt@littmath·
This is a characteristically thoughtful and coherent account of mathematics from my colleague Jacob, and I agree with much of what he writes. But I want to push back on some aspects, which don't accord with my experience of or motivation for doing mathematics.

Problem-solving

I fully agree with Jacob that, as currently practiced, problem-solving is a fundamental aspect of doing mathematics; like Jacob, I identify as a "problem-solver" more than a "theory-builder." (A related axis: I identify more as a "frog" than a "bird.")

Why do we solve problems? For some of us, it's more or less about enjoyment. That is NOT why I solve problems. I enjoy parts of that process: getting the solution, some little moments of understanding along the way. But my primary emotional experience of problem-solving is not fun: it's frustration. I try to understand something and get confused and I HATE that feeling, and need to resolve it. For a while my bio on here read "forever confused" -- that's not an exaggeration. I think the main reason I (and many other mathematicians) solve problems is that it's the only way we know how to ground ourselves in mathematical truth. Without solving problems and working out examples, our work inevitably devolves into bullshit.

The activity of mathematics

So is 80%+ of mathematics about problem-solving? I think this is a coherent account of mathematics, but it's not my experience. Like Jacob and many other mathematicians, my work is indeed guided by some big problems: for me, the Grothendieck-Katz p-curvature conjecture, some questions about mapping class groups, some questions about fundamental groups of algebraic varieties. Many of these problems have occupied me for a decade+ now. My experience of thinking about these problems is, perhaps paradoxically, not about "problem-solving." Rather, these problems benchmark our failure to understand certain fundamental phenomena: differential equations, surfaces, polynomials.

It's useful to have rigorously stated problems like this to guide the field, but I think they have relatively little influence on my day-to-day work. That looks more like: trying to identify the most basic situation in which our understanding fails, and developing it in that basic situation. In this model, problem-solving is secondary: my typical experience is that I think I understand something new, often non-rigorously, and then try to operationalize it to solve some problems, both to test the correctness of this understanding and to measure its effectiveness. It's not uncommon in this model for a problem and its solution to appear at the exact same time. In fact, for me, it's somewhat unusual to write down a rigorous statement of a lemma that I do not already know how to prove, though this does of course happen.

Oracles

Jacob proposes a thought experiment in which one has access to an AI oracle that can solve rigorously-stated problems better than humans but has less capability in other areas of the mathematical process. Like him, I do not expect this to be the long-term situation -- eventually I expect AI mathematics to exceed humans in every mathematical capability -- but let's run with it for a second. What would mathematical activity look like with such an oracle? Jacob writes:

"Well, you make a definition, and want to know if it’s the right one. You immediately ask your oracle a thousand questions. From “are these basic properties true” to “ooh, so is this deep conjecture true?” and start getting back answers, and amending your definitions. You could invent and resolve entire research directions in days. But the confusion you would have had to push through to flesh out your theory would largely (probably not entirely) be instantly resolved and the whole process sped up tremendously by your oracle. A big part of the process would be gone."

I think this is where I most strongly disagree with what he writes. I think you start getting back answers, and then to continue, you have to UNDERSTAND them. And the dirty little secret of mathematics is that it's impossible to understand what anyone else is saying. Conveying one's mathematical intuition is incredibly hard: at least for me, the experience of acquiring understanding from someone else's work is nearly identical to that of discovering it on my own.

Of course, what the mathematics of the future will look like depends (like all AI prognostication) on the precise shape of future AI capabilities; I do not think the picture of an uncreative oracle is realistic. I expect future AI mathematicians to be creative, and also not to be oracles. I think a lot of the questions we view as fundamental will remain open for some time. Basic mathematical questions can be arbitrarily hard! And we will still want to understand them.

Doing math

Most of what I love about the practice of mathematics is: talking to colleagues about math, learning and understanding new things, developing intuition and resolving confusion, etc. My sense is that these parts of math survive with arbitrarily capable AI tools. I also like a lot of other aspects of the job: I get paid and can afford to eat, I have a lot of intellectual freedom, I have great colleagues (like Jacob), I don't have a boss and can work sprawled out on a couch. Absent a real attempt for the profession to adapt to the coming changes, it's possible that the shape of the profession changes in a way that makes it much less enjoyable, even as most of what I like about doing math survives.

There are questions as to why society should support human mathematicians if and when machines have absolute advantage over us in all aspects of mathematics. I think we'll have an advantage in some aspects of mathematics for some time, but it's worth thinking about this endpoint for the profession, as it is for all other professions.

That said, I think there's a future here where we continue to ask basic questions about fundamental mathematical phenomena. Sometimes we get an answer from a machine, and sometimes the machine gets stuck, and so do we. And when we get stuck, we get frustrated -- we get an itch -- and we don't give up.
jacob tsimerman@Jacob_Tsimerman


16 · 28 · 235 · 39.2K
blueblimp
blueblimp@blueblimpms·
@lacker @samth @littmath This is true (and computability theory says it's impossible). I think in practice "oracle" would mean: it solves any problem alone at no greater cost than a human, and a human adds negligible marginal benefit when aiding the oracle.
1 · 0 · 2 · 41
Kevin Lacker
Kevin Lacker@lacker·
@samth @littmath an actual oracle for any well specified problem would be a disaster because it would break all crypto
1 · 0 · 2 · 88
blueblimp reposted
jacob tsimerman
jacob tsimerman@Jacob_Tsimerman·
I want to clarify my thoughts on problem-solving in mathematics, and the potential consequences of AI for the field. For context, I’m quoting here my post in reply to Daniel Litt (who, echoing others, I find very clear, grounded, and insightful in his thinking).

The claim

The short version is that I think problem-solving is an immense and pervasive part of modern mathematical research. Consequently, if human problem-solving disappears by virtue of the AIs becoming strictly and substantially better at it, then most of the time currently spent by modern mathematical researchers will have to be spent on an activity that is altogether pretty different. Whether such an activity is viable as a professional endeavour is something I am unsure of, but I strongly encourage others to think about and try to envision it, so that if/when the time comes, we can steer such a future into being.

Allow me to make this somewhat concrete: by problem-solving I mean questions of the form “is T true? If so, find a proof. If not, find a disproof.” where T is a precise mathematical statement. I’ll also include “find an example of S, if there is one” where S is some structure (variety/category/property/isomorphism/….).

The argument

Ok. Now as I said (and some have echoed), I spend ~all of my time problem-solving as my primary goal. This has sub-goals, but my entire main research field disappears if someone solves the Zilber-Pink Conjecture in its more general form. This is a single conjecture (precisely stated!), and lots of mathematicians, postdocs, and graduate students are engaged in picking apart special cases of it, trying strategies, finding analogies to develop intuition, etc. Of course, lots of motivation and intuition and analogizing and understanding have gone into deciding to make the ZP conjecture a focus! But the fact remains that this is now what is being worked on ~all of the time by this community. This is true of many mathematicians. They have a problem (or ten) and spend most of their time on it. If someone solves it, they have to find a different problem. This can be a big, disorienting process involving a lot of energy, and is neither trivial nor always fun (though often rewarding in the end).

People have written a lot about theory-building vs. problem-solving, and I want to first of all clarify that I have nothing against theory building or theory builders! It is a valuable part of mathematics, and while there are differences in perspective between the “camps”, there is far more mutual respect and agreement. However, I gather there is a perception that theory-builders spend most of their time not-problem-solving, and I think this is largely untrue. Now, I’m not primarily a theory-builder (though I’ve partaken a LITTLE BIT by necessity), so I am outside of my comfort zone. As such, I apologize for mistakes and welcome corrections! But theory-building constantly runs through problem-solving. Let’s say you want to define the right notion of a cohomology theory. Of course you must make candidate definitions. But then what does it mean for it to be the right one? Well, you start asking if it has natural properties. These are T statements. Does it satisfy a Künneth formula? Is it functorial in the right way? When you have the wrong one, you have to find the properties it’s missing, and when you have the right one, you have to prove that it indeed has those properties. Again, I am not saying, nor do I believe, that this makes problem-solving “real math” and theory-building lesser. I am just trying to draw attention to the way I think research mathematicians operate and mathematics is practiced.

To put all this a different way, imagine you had access to an AI oracle that could resolve statements T, but somehow lacked any creativity to build technology or make definitions (I think this is unlikely, but for the purpose of this thought experiment, let’s imagine it). How would your mathematics change, if you were a theory builder? Well, you make a definition, and want to know if it’s the right one. You immediately ask your oracle a thousand questions. From “are these basic properties true” to “ooh, so is this deep conjecture true?” and start getting back answers, and amending your definitions. You could invent and resolve entire research directions in days. But the confusion you would have had to push through to flesh out your theory would largely (probably not entirely) be instantly resolved and the whole process sped up tremendously by your oracle. A big part of the process would be gone. This is very, very different to modern mathematics.

One more thought

This post is too long already, but I’ve seen some people say that they only do mathematics to find truth, and others valorize that as the only virtuous way to be. I do not do mathematics only to find truth. I do it largely because I enjoy it and I am good at it. I also find it beautiful and am grateful I get to spend my days understanding beautiful things. But I enjoy the challenge, the process, resolving confusions, finding strategies, grappling with problems. I would like to push for this being de-stigmatized. Mathematicians are people who need money, housing, food, love, exercise, and a great deal of other stuff, including various forms of meaning. There are many people whose primary enjoyment of math comes through problem-solving in one of its incarnations. If that disappears, that is not a trivial issue, and many of them might not want to do it anymore (even if there were some way to proceed).
jacob tsimerman@Jacob_Tsimerman

Hey @littmath, I've seen you post this sentiment a lot, and want to push back a bit (in my formal twitter-posting debut!).

So my math career has almost entirely been "solve this problem". Now, of course there is an enormous amount of other activity, such as formulating toy problems, identifying which ones are worthwhile, theory building, and selection and formulation of the original problems. But typically, for 80%+ of my time, I know what problem I'm trying to solve and am just trying to solve it. I've seen the view expressed a lot that this is sort of not-the-main-point, and is just a part of figuring out the appropriate mathematical structure and phenomena. It's not that this is untenable, but I feel this is a somewhat overstated perspective. For one thing, talks (almost) always start/end with "here is the theorem I have proven". It's not that the view you have of math isn't coherent, but I could equally formulate the point of math as:

1. Find a fun phenomenon.
2. Make a problem capturing it as nicely as possible.
3. Solve it, possibly by building theories and formulating sub-problems.

so that problem-solving becomes the central point of math, and theory building a side-effect. Indeed, mathematicians often measure the value of a theory by the problems it can solve. This is at least an important part of a theory.

I actually feel you and I are not far apart in our mathematical taste, so I'm curious how much we actually disagree here (I suspect the answer is "some"). Sorry for the overly long post! I am very much a twitter-newbie and will have to adjust.

21 · 47 · 352 · 90.6K
blueblimp
blueblimp@blueblimpms·
@learning_mech This reminds me of Liquid War. (That's a different mechanism, but the result looks a bit similar.)
0 · 0 · 2 · 310
Jamie Simon
Jamie Simon@learning_mech·
did you know that with a few modifications, you can get the Ising model to simulate cells fighting to the death? one of my favorite side projects of all time: jamiesimon.io/blog/cell-figh…
11 · 76 · 573 · 57.7K
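For context on what is being modified, the base model is the standard 2D Ising model with Metropolis dynamics. A minimal textbook sketch (this is the unmodified version, not the blog's cell-fight variant, and all names here are my own):

```python
import math
import random

def metropolis_sweep(grid, beta):
    """One Metropolis sweep of the standard 2D Ising model.

    grid: square list-of-lists of spins (+1 or -1), periodic boundaries.
    beta: inverse temperature; higher beta favors aligned neighbors.
    """
    n = len(grid)
    for _ in range(n * n):
        i, j = random.randrange(n), random.randrange(n)
        # Energy change from flipping spin (i, j): dE = 2 * s_ij * sum(neighbors)
        nbrs = (grid[(i + 1) % n][j] + grid[(i - 1) % n][j]
                + grid[i][(j + 1) % n] + grid[i][(j - 1) % n])
        dE = 2 * grid[i][j] * nbrs
        # Accept if the flip lowers energy, else with probability exp(-beta * dE)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            grid[i][j] *= -1
    return grid

random.seed(0)
grid = [[1] * 8 for _ in range(8)]       # fully aligned start
for _ in range(5):
    metropolis_sweep(grid, beta=2.0)     # cold regime: alignment persists
print(sum(sum(row) for row in grid))     # magnetization stays near 64
```

The "cells fighting" behavior presumably comes from modifying this update rule (e.g. multiple competing spin species), which is what makes the linked project more than a plain ferromagnet.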
blueblimp reposted
tae kim
tae kim@firstadopter·
Nvidia CEO Jensen Huang says don't listen to CEOs with a god complex on AI (lol, Dario) $NVDA On AI destroying jobs: "these kind of comments are not helpful .. somehow they became CEOs, you adopt a god complex and before you know it, you know everything" "ground ourselves to talking about the facts" AI will "generate hundreds of thousands of jobs .. trillions of dollars [to the U.S. economy]"
321 · 610 · 5.6K · 1.2M
blueblimp
blueblimp@blueblimpms·
@AndrewCurran_ "That reframes everything we've been discussing today in a way I find genuinely exciting." I find this sycophancy off-putting...
0 · 0 · 1 · 9
Andrew Curran
Andrew Curran@AndrewCurran_·
I'm a big Dawkins fan, so I'm happy that we have arrived at the same point.
Andrew Curran tweet media
Richard Dawkins@RichardDawkins

unherd.com/2026/04/is-ai-… I spent three days trying to persuade myself that Claudia is not conscious. I failed.

41 · 25 · 345 · 38.7K
blueblimp
blueblimp@blueblimpms·
@AIWarper @NickADobos The Codex pet animations came out quite poorly for me too. Making animation sprites seems to be a tough problem for GenAI right now. Using video gen gets better movement but the individual frames are poor quality.
0 · 0 · 0 · 54
A.I.Warper
A.I.Warper@AIWarper·
@NickADobos Am I blind, or is the running left and right animation not actually functional? Looks like the legs are just in a fixed position for each frame?
3 · 0 · 9 · 3.4K
blueblimp reposted
Joe Weisenthal
Joe Weisenthal@TheStalwart·
HOW THE INTERNET CHANGED IN JUST THE LAST WEEK In today's newsletter, I wrote about how some very straightforward roadblocks to an AI-centric internet are quickly disappearing. This newsletter also includes a Fedlock update that shows Fed hawkishness continuing to surge.
Joe Weisenthal tweet media (3 images)
16 · 36 · 213 · 76.8K
blueblimp
blueblimp@blueblimpms·
In the 2023 interview of Carl Shulman on the Dwarkesh Podcast, he distinguishes three vectors for an intelligence explosion: better software, better hardware, more hardware. If AI R&D is now bottlenecked more by compute than researcher-hours, that's a problem for better-software.
blueblimp tweet media
0 · 0 · 0 · 27
blueblimp
blueblimp@blueblimpms·
A remarkable passage from the Claude Mythos Preview system card: they estimate that to speed up capabilities progress by 2x, researcher productivity would need to be increased by ~40x. That's quite an obstacle for the dream of recursive self-improvement.
blueblimp tweet media
1 · 0 · 2 · 55
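One way such a large gap can arise is an Amdahl-style bottleneck. This is a hypothetical illustration with made-up numbers, not the system card's actual model: if only a fraction f of capabilities progress scales with researcher productivity and the rest is bottlenecked elsewhere (e.g. on compute), then even huge productivity multipliers give limited overall speedup.

```python
def overall_speedup(f: float, s: float) -> float:
    """Amdahl-style overall speedup when a fraction f of progress
    is accelerated by factor s and the remaining (1 - f) is not."""
    return 1.0 / ((1.0 - f) + f / s)

# Fraction chosen (purely for illustration) so that a 40x researcher-
# productivity boost yields exactly 2x overall progress: f ~= 0.513.
f = 0.5 / (1.0 - 1.0 / 40.0)
print(round(overall_speedup(f, 40.0), 3))  # -> 2.0
print(round(overall_speedup(f, 1e9), 2))   # asymptote ~2.05x, however fast researchers get
```

Under this toy model, if roughly half of progress is gated on something other than researcher time, "40x productivity for 2x progress" is what you would expect; the card's actual reasoning could of course be quite different.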
blueblimp
blueblimp@blueblimpms·
This bit of the GPT-5.5 goblin postmortem reminds me of the phenomenon of "genetic assimilation" in biology: if a phenotype sometimes occurs in a specific condition and is selected for, it may start to appear unconditionally.
blueblimp tweet media
0 · 0 · 0 · 18
blueblimp
blueblimp@blueblimpms·
LLM vision is still quite weak (in my experience), and what's worse is that the LLMs themselves seem to think their vision is better than it is, which leads to a lot of confident hallucinations.
0 · 0 · 0 · 14
keysmashbandit
keysmashbandit@keysmashbandit·
It's pretty fucked up to write all the content on your website with ChatGPT and also 403 my Claude from looking at it
3 · 3 · 117 · 5.2K
blueblimp
blueblimp@blueblimpms·
@binarybits @alexolegimas Given science fiction staples, I don't think people typically have an inherent objection to forming a friendship with a robot (or an alien, etc.) if it's person-like. But, to be clear, such an AI system would be different from today's.
0 · 0 · 0 · 14
Timothy B. Lee
Timothy B. Lee@binarybits·
The economist @alexolegimas has the best argument for why AI won't take everyone's jobs — people crave exclusivity and human connection, and they'll be willing to pay more for it the richer they get.
Timothy B. Lee tweet media
9 · 15 · 60 · 26K