
Austin Meyer @austingmeyer:
It seems like we might be getting carried away. Also, Biochemistry as math intensive? I think there were no required math courses during my PhD in Biochemistry.
David Shapiro @DaveShapi:

AI has *solved* math. OpenAI did it with o4. Not "is close to solving math." Not "is competitive at math." *SOLVED.* This is far bigger than anyone realizes. Let me explain why.

First, some historical context. In AI/ML, you typically know you're getting *close* to fully generalizing a problem space when you reach the 70–80% solution range. But as we often see, "reality is in the edge cases": the last-mile jump from 80% to 99% is usually far harder. OpenAI made that jump not in years, but months. Remember, o1 and o3 were announced in September of last year. It's been just over 8 calendar months, and they closed the gap. From a research and development perspective alone, that is remarkable velocity.

But I'm not talking about benchmarks. I'm talking about real-world implications. This puts a world-class mathematician in every pocket, on every team. Do you know what math underpins? Pretty much everything.

The first-order consequence of a semi-agentic AI system that has conquered math is obvious: anything that requires math, it can likely solve on its own, or with very little redirection. For example, a good friend of mine works in CFD (computational fluid dynamics), which is used extensively in oceanography and meteorology. He's been using reasoning models since they came out, and they were helpful, but they still needed his expert guiding hand. These new models might nuke that.

Second-order consequences (downstream impacts) are difficult to predict, but hard to overstate. In practical terms: this will accelerate AI research itself. AI research is, among other things, math. It's also code. Guess what these models crushed? Math and code. Beyond that, they are semi-autonomous, i.e. "partially agentic": they require less human direction, correction, and oversight. In practice, that means they can use more tools without help, work on larger, longer problems without supervision, and are less likely to make mistakes around user intent.

Guess what else is math intensive?
- Biochemistry
- Robotics
- Spaceflight
- Cryptography
- Nuclear physics
- Blockchain

To make this even more impressive, these models did it with ONE tool: Python. Not a series of tools. Not MATLAB. Not supercomputers.

Now let me underscore what this means in the long run: before long, your smartphone will be a math genius. And a coding genius. And a linguistics genius. And... and...

Third-, fourth-, and fifth-order consequences of this one technology are impossible to overstate, and it will only get better. You know that scene where Tony Stark figures out time travel in his kitchen? That's the level of AI math we're talking about in just one or two more generations. If warp drive is possible, these machines will help us figure it out.
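The "one tool: Python" claim refers to a code-execution tool. As a hedged illustration (not OpenAI's actual harness, just a toy of the kind of check such a tool enables), here is a model-style numerical verification of a closed-form result, the Basel sum Σ 1/n² = π²/6, using only the standard library:

```python
import math

def partial_sum(n_terms: int) -> float:
    """Compute the partial sum of 1/n^2 for n = 1..n_terms."""
    return sum(1.0 / n**2 for n in range(1, n_terms + 1))

# The tail beyond N terms is roughly 1/N, so with N = 100,000
# the partial sum should sit within ~1e-5 of pi^2 / 6.
approx = partial_sum(100_000)
target = math.pi**2 / 6
print(abs(target - approx) < 1e-4)  # → True
```

A reasoning model with only this kind of sandboxed Python call can sanity-check its own symbolic answers numerically, which is one reason a single interpreter goes a long way.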
