Aakash Gupta (@aakashgupta)
Everyone’s focused on the $400 million battery price tag. That misses the point entirely.
xAI just told you power is now the binding constraint on AI training, and most people haven’t repriced what that means.
The math: Colossus 2 needs 1+ gigawatts to run at full capacity. That's 40% of Memphis's peak summer demand for a single facility. Grid interconnection queues now average 8+ years. Gas turbine delivery has stretched from 2 years to 4.5 years. xAI's answer? Buy $400M in batteries, acquire a decommissioned Duke Energy plant across the state line in Mississippi, install seven Titan 350 turbines generating a combined 250 MW, and build your own substations rather than wait for the utility.
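A quick back-of-envelope check of the figures above. All inputs come from the post itself; the Memphis peak-demand number is not stated directly and is derived here from the 40% ratio.

```python
# Sanity check of the power figures in the post.
# The Memphis peak is implied by the stated 40% share, not quoted directly.

colossus_2_load_mw = 1_000        # 1+ GW at full capacity (from the post)
memphis_share = 0.40              # stated share of Memphis peak summer demand
memphis_peak_mw = colossus_2_load_mw / memphis_share

turbine_fleet_mw = 250            # seven Titan 350 turbines, combined (from the post)
per_turbine_mw = turbine_fleet_mw / 7

print(f"Implied Memphis peak demand: {memphis_peak_mw:,.0f} MW")  # 2,500 MW
print(f"Output per turbine:          {per_turbine_mw:.0f} MW")    # ~36 MW
```

The ~36 MW per-turbine figure is consistent with a mid-size industrial gas turbine, which is why seven of them only cover a quarter of the site's full load.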
They’re not buying batteries for backup. They’re buying time.
A Megapack stores 3.9 MWh. At $266/kWh, Tesla's selling 420 units for roughly $950K each wholesale. But the value proposition isn't the sticker price. It's that GPUs sitting unpowered still depreciate. Satya Nadella said it directly: he doesn't want to get stuck with "four or five years of depreciation on one generation" of chips. xAI has the same problem, but worse. They've got 110,000 GB200 GPUs in NVL72 racks at Colossus 2 alone, each requiring power to generate any return.
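Running the Megapack numbers from the paragraph above: the $266/kWh rate and the ~$950K per-unit price imply slightly different totals, so at least one of the quoted figures is rounded.

```python
# Cross-check of the Megapack figures in the post:
# 3.9 MWh per unit, $266/kWh, 420 units, ~$950K each, ~$400M total.
# All figures come from the text; they don't reconcile exactly, so some are rounded.

mwh_per_unit = 3.9
usd_per_kwh = 266
units = 420
quoted_per_unit = 950_000

per_unit_at_rate = mwh_per_unit * 1_000 * usd_per_kwh       # price implied by $/kWh
total_at_quoted = units * quoted_per_unit                   # total implied by $950K/unit
implied_rate = quoted_per_unit / (mwh_per_unit * 1_000)     # $/kWh implied by $950K/unit

print(f"Per unit at $266/kWh:        ${per_unit_at_rate:,.0f}")  # $1,037,400
print(f"420 units at $950K each:     ${total_at_quoted:,.0f}")   # $399,000,000
print(f"Implied $/kWh at $950K/unit: ${implied_rate:.0f}")       # $244
```

Either way, the order of magnitude holds: roughly $400M for about 1.6 GWh of storage.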
Every day a GPU doesn’t train, you’re paying depreciation without generating value. That’s the real cost.
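To make the idle-GPU cost concrete, here is an illustrative sketch. The 110,000-GPU count comes from the post; the per-GPU cost and the four-year straight-line depreciation schedule are assumptions for illustration only, not figures from the source.

```python
# Illustrative only: per-GPU cost and depreciation schedule are ASSUMED,
# not stated in the post. Only the GPU count comes from the text.

gpus = 110_000                  # Colossus 2 count (from the post)
cost_per_gpu_usd = 60_000       # ASSUMED all-in cost per GB200 GPU
depreciation_years = 4          # ASSUMED straight-line schedule

fleet_cost = gpus * cost_per_gpu_usd
depreciation_per_day = fleet_cost / (depreciation_years * 365)

print(f"Fleet cost:                ${fleet_cost / 1e9:.1f}B")          # $6.6B
print(f"Depreciation per idle day: ${depreciation_per_day / 1e6:.1f}M")  # ~$4.5M
```

Under these assumptions, every day the fleet sits unpowered burns millions in depreciation alone, which is the economic logic behind paying a premium to get power online sooner.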
This tells you something critical about the AI infrastructure race: the winners won't be whoever has the most GPUs. It'll be whoever can power them first. Microsoft has chips sitting idle in data centers right now because it can't get enough electricity. OpenAI, Meta, and Anthropic all face the same constraint.
xAI’s edge isn’t capital or even Nvidia allocations. It’s that Musk can deploy Tesla batteries, buy gas plants, and build infrastructure outside normal permitting timelines. The 122-day Colossus 1 build wasn’t a fluke. It was a proof of concept that you can move faster than the grid if you’re willing to go around it.
The Megapacks aren’t a battery backup. They’re a bypass.