Brandon

3.4K posts

@brandonkumar

cofounder of layer3, prev investor at accolade partners

🇺🇸 Joined January 2016
4.2K Following · 8.1K Followers
Pinned Tweet
Brandon@brandonkumar·
(New Essay) The Machinery of Modern Finance
On why I think the next ten years of financial innovation will surpass the prior fifty. Give it a read. brandonkumar.substack.com/p/the-machiner…
[media]
26 replies · 179 reposts · 949 likes · 148K views
Brandon@brandonkumar·
This was great bc you can tell how switched on every single person on the Starship team is
0 replies · 0 reposts · 4 likes · 420 views
Brandon@brandonkumar·
Hemant: Let's be the first VC to own a hospital system
Josh: Let's be the first VC to own the Giants
0 replies · 0 reposts · 6 likes · 471 views
Brandon@brandonkumar·
Thrive on a generational run
2 replies · 0 reposts · 7 likes · 612 views
Brandon@brandonkumar·
good will hunting scene with the saas guy who used to be a crypto guy who used to be a direct to consumer guy who used to be a gig economy guy and is now a small modular reactor guy
0 replies · 0 reposts · 9 likes · 554 views
Brandon@brandonkumar·
This type of analysis would have taken hours a few years ago and probably took < 30 minutes with NotebookLM
Shanu Mathew@ShanuMathew93

Trying to bridge thoughts from different sources & podcasts given the focus around the AI buildout. How much data center capacity is actually coming online per year, and who is absorbing it? I've been trying to square some numbers across multiple sources on real, energized gigawatts being added annually in the US, who's consuming them, and what we actually know versus what's estimated.

The installed base
FERC confirmed in their March 2026 State of the Markets report that US data center capacity exceeded 50 GW at year-end 2025. Industry estimates put total US capacity in the 35-40 GW range at year-end 2024 (Bain was at ~35 GW; Morgan Stanley's model pegged it at 37 GW). That implies roughly 10-15 GW of net additions in 2025, a massive step-up from prior years. Total facility power, critical IT load, and hyperscale-only all produce different baselines; I haven't seen two sources use the same definition consistently.

Frontier labs
Brad Gerstner @altcap (investor in both @OpenAI and @AnthropicAI) says OAI and Anthropic have 1.5-2 GW each today, going to ~5 GW by year-end. @dylan522p at @SemiAnalysis (@dwarkesh_sp @dwarkeshpodcast Podcast, March 2026): both at roughly 2-2.5 GW today, both reaching 5-6 GW by year-end 2026 (OpenAI slightly higher), and both targeting ~10 GW by end of 2027. @sarahfriar disclosed 1.9 GW for OpenAI at year-end 2025. Anthropic's operational capacity is likely in the 1.5-2 GW range. On year-end targets, there's a wide gap between what's been contracted (Stargate US + UAE, the NVIDIA 10 GW partnership, CoreWeave, the Google TPU mega-deal) and what will physically be energized by December. Dylan's 5-6 GW per lab is likely the more physically grounded number, built bottom-up.

Per Dylan, Anthropic was conservative about locking up compute early while OpenAI signed aggressively with Microsoft, CoreWeave, Oracle, and even SoftBank Energy, so Anthropic now has to pay premium rental rates or go to lower-quality providers to catch up (though Gerstner's comments made it sound like the take rate wasn't that high). Neither leading lab owns or builds data centers. Their ~6 GW of combined incremental capacity in 2026 is physically built and operated by AWS, Google, Microsoft, CoreWeave, Oracle, and others but contractually dedicated to serving OpenAI and Anthropic workloads. Assume a meaningful chunk of AWS's disclosed additions goes to Anthropic's Trainium/Rainier clusters, and a meaningful chunk of CoreWeave's build goes to OpenAI. CoreWeave also recently signed a multi-year agreement to support Anthropic's Claude models, with new capacity coming online in 2026. Frontier lab demand and hyperscaler supply overlap; they are not additive.

Hyperscaler disclosures on physical delivery
These are a mix of US and global figures, and facility power vs. IT load definitions vary across companies.

Amazon (AWS): @ajassy disclosed AWS added 3.9 GW of new power capacity in 2025 (1.2 GW in Q4 alone), operating from a base of roughly 8 GW at year-end 2025, with a target to double total capacity by year-end 2027, implying ~16 GW total. He still describes demand as outpacing supply. AWS operates 38 regions across 27 countries, so the 3.9 GW is almost certainly global, not US-only, though the US is the clear majority.

Microsoft: @SatyaNadella's team disclosed over 2 GW added in FY2025, with roughly 1 GW brought on in the December quarter alone, across 400+ data centers globally. Also targeting roughly double capacity by 2027. SemiAnalysis reported that Microsoft paused over 3.5 GW of capacity that would have been built by 2028, though Reuters/TD Cowen put the figure lower at ~2 GW of terminated leases in the US and Europe, and Bernstein says actual cancelled contracts total only "a couple hundred megawatts." The precise number is disputed. The directional point is clear: Microsoft was recalibrating its self-build vs. lease mix but now seems to be building again.

Google (Alphabet): @sundarpichai and team guided 2026 capex at $175-185B, nearly double 2025, with no explicit "we added X GW" disclosure comparable to AWS. Dylan describes them as "still capacity constrained" and acting fast: buying an energy company, putting down turbine deposits for 2028-29, negotiating long-term power agreements. A large chunk of new capacity is going to TPUs for internal products (Gemini across Search, Android, Workspace) and the Anthropic deal (~1 GW in 2026, ~3.5 GW from 2027). Without a disclosed GW figure, I'd estimate 3-5 GW of 2026 additions based on a capex trajectory similar to the other giants.

Meta: @finkd guided $115-135B in 2026 capex, nearly double 2025, building for internal AI workloads (Llama training, inference across Instagram-WhatsApp-Threads) plus Meta Superintelligence Labs: a 1 GW campus in El Paso (investment scaled from $1.5B to $10B), a 1 GW campus in Lebanon, Indiana, a JV in Louisiana (~$27B estimated), and Prometheus bringing 1 GW online in 2026. On top of the self-build, Meta committed $35.2B to CoreWeave across two deals for third-party capacity.

Independent builders and neoclouds
@elonmusk's @xai: Colossus 2 in Memphis is targeting 1-2 GW of capacity to support 550,000 next-gen Nvidia chips, scaling to 1 million GPUs. It deployed 35 natural gas turbines generating 420 MW behind the meter to work around grid constraints.

@CoreWeave's team added 490 MW across 11 data centers in 2025 (260 MW in Q4). Total active capacity hit 850 MW at year-end against 3.1 GW contracted, with $30-35B of 2026 capex planned. CoreWeave is also acting as lead builder on the 1.2 GW Stargate Abilene campus for OpenAI.

@nebiusai: Tracking toward 800 MW - 1 GW of available capacity in 2026, including a 310 MW facility in Finland. Meta agreed to buy $12B of AI computing capacity from Nebius by 2027, with an option for an additional $15B over five years, i.e. up to $27B total.

Sense-checking the total
A few different ways to triangulate. Morgan Stanley forecasts ~24 GW of global additions in 2026; at 50-60% US, that's ~13-14 GW. BloombergNEF has something like ~8-10 GW of IT load, which at a 1.4 PUE is ~12-13 GW. @climatetech_vc data shows at least 16 GW of US data center capacity slated to come online in 2026 across 140 projects, but warns 30-50% may face delays due to power constraints and equipment shortages. Crude capex math: $600-700B in 2026 hyperscaler capex at roughly $40-50B per GW of fully-built capacity also implies mid-teens GW. That's an imprecise conversion, since capex covers equipment, data center shells, chips, and land that enter service across different years, but it provides another directional anchor. Colliers reported that North American data center absorption hit 15.6 GW in 2025, double the 2024 level. The narrower CBRE primary-colocation-market figure of 2.5 GW only captures a subset of traditional leased space and misses hyperscaler self-build, behind-the-meter neocloud facilities, and training clusters entirely. @EpochAIResearch's frontier data center tracker confirms the step-function: most of the largest campuses (e.g., Meta Hyperion at 2.2 GW, Microsoft Fairwater above 1 GW) don't fully arrive until 2027-2028.

A reasonable base case for 2026 US net energized capacity additions: ~15 GW, vs. a bear case of 12-13 GW (permitting delays push energizations into 2027) and a bull case of 18-20 GW (everything announced delivers on schedule).

The bucket breakdown:
- Frontier labs (OpenAI + Anthropic): ~6 GW. Physically built by AWS, Google, Microsoft, CoreWeave, and Oracle but contractually dedicated to OpenAI and Anthropic training and inference workloads; ~3 GW incremental per lab.
- Hyperscaler first-party AI: ~4-5 GW. Microsoft Copilot across 900M MAUs and GitHub Copilot; Google Gemini across Search, Android, Workspace plus DeepMind; Amazon's Alexa+ rebuild and internal retail/logistics AI; Meta ad retrieval, recommendations, Llama training.
- Third-party AI cloud and independent builders: ~2-3 GW. xAI and Meta as external customers of CoreWeave/Nebius; enterprise builders; sovereign AI; inference demand through Bedrock, Vertex, and Foundry APIs.
- Non-AI cloud + overbuild/commissioning lag: ~2 GW. Traditional enterprise workloads plus power energized ahead of full rack load.

Where I'm probably wrong, and why the number could be higher than 15 GW
~90% of the incremental build in this framework is AI-related; only ~1 GW goes to traditional cloud. The most likely source of upside: enterprise inference and cloud AI demand growing faster than this models. Oracle's remaining performance obligations have exploded to $523B. AWS's non-Anthropic AI business is running at $15B+ ARR. Amazon's custom chip business alone is at a $20B+ run rate. The absorption data supports this: Colliers/Jefferies put North American absorption at 15.6 GW in 2025, so demand is tracking much closer to total additions than people assume. If enterprise adoption of AI APIs is inflecting harder than I'm capturing, the "third-party AI cloud" and "hyperscaler first-party" buckets could each be 1-2 GW bigger, pushing total additions toward 18-20 GW. If the enterprise inference layer is scaling as fast as the hyperscalers are betting (Copilot seats, Gemini in Search, Claude Code adoption, agentic workflows), then 15 GW is conservative and $600B+ in 2026 hyperscaler capex is well supported.

Lot of figures and disclosures, so I'm sure I slipped up along the way. What did I get wrong? Anything else to include?
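The sense-checking arithmetic in the quoted thread can be jotted down as a quick script. All figures are the thread's own estimates (not verified data), and the midpoints chosen for the ranged buckets are my assumption:

```python
# Back-of-envelope sense checks using the quoted thread's own figures.
# These are the thread author's estimates, not verified disclosures.

def capex_implied_gw(capex_billions: float, cost_per_gw_billions: float) -> float:
    """Convert hyperscaler capex to implied GW of fully built capacity."""
    return capex_billions / cost_per_gw_billions

# Crude capex math: $600-700B of 2026 capex at ~$40-50B per GW.
low = capex_implied_gw(600, 50)   # conservative: low spend, high cost per GW
high = capex_implied_gw(700, 40)  # aggressive: high spend, low cost per GW
print(f"Capex-implied 2026 additions: {low:.0f}-{high:.1f} GW")  # mid-teens GW

# Bucket breakdown for the ~15 GW base case (midpoints of the thread's ranges).
buckets = {
    "frontier labs (OpenAI + Anthropic)": 6.0,
    "hyperscaler first-party AI": 4.5,            # midpoint of ~4-5 GW
    "third-party AI cloud / independents": 2.5,   # midpoint of ~2-3 GW
    "non-AI cloud + commissioning lag": 2.0,
}
total = sum(buckets.values())
print(f"Bucket total: {total:.0f} GW")
```

Both anchors land in the mid-teens, consistent with the thread's ~15 GW base case; the bear and bull cases shift the bucket midpoints rather than the structure.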

1 reply · 0 reposts · 5 likes · 1.5K views
Brandon@brandonkumar·
- Anthropic demonstrates early competitive advantage in enterprise
- OpenAI tries to catch up by leveraging its compute advantage and pushing higher/more predictable token limits
- In doing so, OAI gets distracted from building a big consumer business
- Competitors with equivalent compute and real ad/consumer distribution experience (Google & Meta) quietly build big consumer businesses through deep integration + effective monetization plays
- Anthropic figures out how to get more compute and the throttling stops
- Anthropic wins enterprise
- Google wins on advertising/consumer scale and Meta carves out its own consumer moat via apps + open-source leverage
2 replies · 0 reposts · 10 likes · 894 views
Brandon@brandonkumar·
Genius Makers
[media]
0 replies · 0 reposts · 2 likes · 139 views
Brandon@brandonkumar·
Born To Be Wired
[media]
1 reply · 0 reposts · 4 likes · 599 views
Brandon@brandonkumar·
Book recs from the past few years:
7 replies · 1 repost · 66 likes · 6.9K views
Brandon@brandonkumar·
No metric has been more thoroughly discredited in 2026 than ARR
3 replies · 0 reposts · 7 likes · 701 views
Brandon@brandonkumar·
Remarkable that ERP companies are still raising $100M rounds
1 reply · 0 reposts · 3 likes · 1.9K views
Brandon@brandonkumar·
$1.8B run rate, zero VC, 1 FTE
[media]
18 replies · 8 reposts · 265 likes · 63.9K views
Brandon@brandonkumar·
Roughly one month since v1 launched, a few updates:
- Launched rL3, the first in a series of incentive experiments
- In partnership with Agora, we launched an AUSD vault, extending Ample to an additional chain and asset ecosystem
- Introduced a set of social features, including an expansion of Teams functionality
- We will soon enable any-chain deposits into Ample, improving access from Solana and other ecosystems
- We're developing /amplify, a transfer flow that lets users move funds from external protocols into Ample
- Mobile app will go live this quarter
Brandon@brandonkumar

x.com/i/article/2028…

3 replies · 2 reposts · 21 likes · 2.8K views
Brandon@brandonkumar·
Surely SemiAnalysis can afford better designers
[media]
0 replies · 0 reposts · 3 likes · 337 views
Brandon reposted
Felipe Montealegre@TheiaResearch·
Brandon is one of the best founders in the industry and he is doubling down on the L3 token. FYI the situation with L3 tells me nobody is looking at the small cap space: legendary CEO, good attempt at PMF with the new product, and trading below cash value.
Layer3@layer3

x.com/i/article/2034…

5 replies · 5 reposts · 61 likes · 8K views
Brandon@brandonkumar·
Yahya is an S-tier builder
yahya@yahya_ea

Excited to start sharing more on what we've been building at @illa_labs with our friends at @Wonderland

We're entering a world where agents are our general purpose coworkers. They handle more for us every day. We direct, agents execute. But when it comes to financial actions, the problem is everything that happens in between. AI gets things wrong. Tools are fragmented. And real money is unforgiving.

ILLA is the system between what you ask an agent to do with your money, and what actually happens. One integration, broad capabilities, safety built in. Early access rolling out now. More soon.

0 replies · 1 repost · 15 likes · 1.5K views
Brandon@brandonkumar·
At the “guaranteed 17.5% return” stage of AI distribution wars
[media]
3 replies · 0 reposts · 13 likes · 1K views
Brandon@brandonkumar·
> 1800s: Electricity companies offer incentives for factories to run at night
> 1970s: Telecom companies introduce nights & weekends pricing
> 2026: Model providers offer double usage during off-peak hours
[media]
0 replies · 0 reposts · 10 likes · 587 views