Maggie




🚨 Everyone thinks GPT can do math. They're wrong. A new paper called SenseMath just showed LLMs don't have number sense at all. This changes everything about how we should use them:



huge huge day today. times are changing.



My take on this: upcoming models are strong (but not at this level, you can trust me on this 😉). The problem is that because hopes are so high, the upcoming release will feel meh (it's extraordinary, but not superintelligence). Coding is solved btw 👀; science and math are next. Google, OpenAI, and Anthropic are on the verge of creating scientific models that are genuinely good at reasoning and will be very helpful in research.






big week coming up








GPT-6 rumored for April 14, from @iruletheworldmo. Unverified leak, but the details are worth watching through a hardware lens:

- 2M token context window
- Natively multimodal
- $2.50/$12 per million tokens
- "Superapp" merging ChatGPT, Codex, and the Atlas browser into one agent

Now connect this to what just happened at @AnthropicAI. @bcherny confirmed Anthropic is deprioritizing third-party tool access because agentic workloads don't fit their capacity model. OpenClaw got cut from subscriptions. Token rationing is here.

Two of the biggest AI labs are telling you the same thing at the same time: agentic compute demand is outrunning infrastructure. Anthropic can't efficiently serve it on TPUs. OpenAI is allegedly throwing every GPU at their next model and killing Sora to free up capacity. Both are rationing access.

This is the Co-Design Thesis. The models are ready. The software is ready. The hardware isn't keeping up. And the hardware that handles agentic workloads best (dynamic, unpredictable, CPU+GPU coherent, memory-bandwidth dense) is NVIDIA.

2M token context? That's a Memory Wars problem. KV cache at that scale requires HBM4-class bandwidth. There is no workaround.

Every frontier lab is converging on the same bottleneck. And Jensen has been building for it for three years.

The race isn't to build the best model anymore, anon. The race is to serve it.
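The bandwidth claim is just KV-cache arithmetic. A back-of-envelope sketch, assuming a hypothetical frontier-scale dense model (120 layers, 8 KV heads under grouped-query attention, head dim 128, fp16 cache) — all made-up illustrative numbers, not confirmed specs of GPT-6 or any other model:

```python
# Back-of-envelope KV cache size for a long-context transformer.
# Model dimensions below are ASSUMPTIONS for illustration only.

def kv_cache_bytes(seq_len, n_layers, n_kv_heads, head_dim, bytes_per_param=2):
    # Factor of 2 covers keys AND values, stored per layer per KV head.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_param

# Hypothetical model: 120 layers, 8 KV heads (GQA), head_dim 128, fp16 cache.
cache = kv_cache_bytes(seq_len=2_000_000, n_layers=120,
                       n_kv_heads=8, head_dim=128)
print(f"{cache / 1e9:.0f} GB per 2M-token sequence")  # ~983 GB

# During decode, attention reads the whole cache once per generated token,
# so at roughly a terabyte of cache per sequence, tokens/sec is bounded by
# memory bandwidth — which is the "HBM4-class bandwidth" point in the post.
```

Even with aggressive GQA the cache for a single 2M-token sequence approaches a terabyte under these assumptions, which is why serving it is framed here as a bandwidth problem rather than a FLOPs problem.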


















