
Ted Sanders
@sandersted
801 posts
Research at OpenAI. Be kind to others, and yourself.

OpenAI is shutting down its AI video slop-making platform Sora.

We need a grand unification between physics and computer science to understand the relationship between energy and information. Always nice to see work that brings them together.


Anthropic employees may be on the brink of getting very, very rich. Many of them, including its co-founders, have pledged to give a lot of that money away. If that money materializes, it could flood EA-aligned nonprofits with cash, including those aiming to regulate, audit, and review Anthropic itself. Whether that's good or bad depends on who you ask. I covered this potential wave of Anthropic wealth for @ReadTransformer:


To be more precise, 54% of all votes cast were for AI-written passages. The gap is wider on certain passages.

@eli_lifland I think AGI by end of 2027 should be ~8% now. I think I'd forecast:
~2026-2030 -- AI replaces ~all AI researchers
~2027-2033 -- AI replaces ~all white collar industry
~2032-2040 -- AI replaces ~all human industry
~2033-2042 -- All humans dead or obsolete

When I interviewed Holden Karnofsky he gave a plausible justification for why Anthropic should remain at/near the AI frontier. ¹

But the case is far weaker for running far ahead of the competition on approximately the most dangerous capability: autonomous SWE agents that can set off a recursive self-improvement loop. These capabilities are also now being stolen by less scrupulous actors: x.com/AnthropicAI/st…

As Holden said: "If AIs were able to just do AI R&D I think there would be a significant chance of what I tend to call a capabilities explosion. And then everything else you're worried about from AI, all the other threats... all that comes onto the table really fast, and you're not going to have time to react to it."

Full interview here: x.com/robertwiblin/s…

¹ If you believe technical alignment is very difficult and none of the current approaches have much of a chance, I agree the case for this whole strategy is much weaker.

Rumors I’m hearing from people working on frontier models are that AGI arrives later this year, while AI hard-takeoff is just 2-3 years away.
