Toviah Moldwin
3.3K posts

@TMoldwin
Computational neuroscientist @ELSCbrain @Segev_Lab. Singer and guitarist for the rock band @SynfireChain. Dualist. Founder, https://t.co/YMBvt487lT.

Ok this figure is pretty intimidating...
Being on your laptop outside is a miserable experience and I'm tired of people pretending it's not


Morgan Stanley predicts a massive AI breakthrough driven by a huge spike in computing power across major U.S. laboratories. Increasing the amount of hardware used for training by 10x can effectively double the intelligence of these models. The recently released GPT-5.4 Thinking model already matches human experts on professional tasks, scoring 83% on the GDPVal benchmark.

The biggest hurdle for this growth is an energy crisis, with the U.S. power grid facing a shortfall of 18 gigawatts by December 2028. To keep running, developers are bypassing the grid by taking over Bitcoin mining sites and powering their AI factories with natural gas turbines. This shift is creating a solid investment cycle in which 15-year data-center leases generate high financial yields for every watt consumed.

Large companies are already reducing their staff numbers because these new AI tools can perform professional work at a tiny fraction of the cost. Researchers expect AI to begin recursive self-improvement by June 2027, meaning the software will autonomously upgrade its own code without human help. The future economy will likely treat raw intelligence as a commodity manufactured by these massive computing and energy clusters.


What monotheism means is surprisingly hard to pin down, but there’s a reason it swept the world. A spate of new books explore the topic. newyorkermag.visitlink.me/mEU-nv



In the last few months, I've spoken to many CS professors who asked me whether we even need CS PhD students anymore. Now that we have coding agents, can't professors work directly with agents? My view is that equipping PhD students with coding agents will let them do work that is orders of magnitude more impressive than they otherwise could. And they can be *accountable* for their outcomes in a way agents can't (yet). For example, who checks that an agent's outputs are correct? Who is responsible for mistakes or errors?
