
Shakeel
@ShakeelHashim
Editor, @ReadTransformer. Prev: AI safety and EA comms, journalist @TheEconomist, @Protocol, @finimize

Abilene, TX should be the poster child for how data centers can destroy an area. Rents are now sky high, traffic is terrible, and residents are furious. This in a city with water scarcity.

I sincerely don't understand what people mean when they say this. On the one hand, every AI researcher is already using Claude Code (or its competitors) to help them develop new architectures. OTOH, AI models do not have bodies so they can't build data centers


I've spent the past few weeks reading 100s of public data sources about AI development. I now believe that recursive self-improvement has a 60% chance of happening by the end of 2028. In other words, AI systems might soon be capable of building themselves.



Create Google Slides in Codex without opening your browser, clicking buttons, and manually aligning figures. Plus, you (and your team) can view the progress in real time. Codex isn't creating the deck locally, then uploading it. It's actually iteratively building it, checking its work and polishing it. Imagine kicking off hundreds of these decks overnight, and polishing them the next morning (or en route to the meeting haha). Wild.


Official announcement from Anthropic:

i have no opinion on this guy or his work and also 200 grand for effectively saying “ai is gonna kill us all” makes me feel like i made some bad life choices in my studies


Adding to my replies below, and extending this debate among @CFR_org colleagues, I note that @edwardfishman's new @ForeignAffairs essay drills down into what it takes to have an effective geoeconomic weapon. One requirement is that the target country can't figure out a new way of getting what it wants on roughly a one-year horizon. If we define what China wants as an Nvidia-quality firm, the US definitely has a geoeconomic weapon. But if we define China's goal as replicating US frontier AI capability, then clearly China is able to fast follow—it is only a few months behind.


OpenAI hired some PR pros to manage perception. First it was drunk sam, now it's virtue washing greg + a coordinated attempt to orchestrate the Cult narrative at anthropic. They are coordinating to shape your perception because people don't like what they see when they look clearly at OpenAI. Expect more of this...

That AI number is a big fat jump ball for Dems to seize

it has been a real pleasure to work with Greg over the past decade. i feel very lucky. this post held up pretty well, but did not sufficiently highlight his technical brilliance and sheer determination. blog.samaltman.com/greg



I again think that a16z, OpenAI gov affairs, and all the other political accelerationists should focus more on combating this stuff than on blocking frontier AI regulations



