

Rittenhouse Research
@RHouseResearch
Software, digital infrastructure, fintech, crypto.

This Friday after the close we are back with another Happy Hour. Grab a 6-pack and join @hkuppy as he sits down with Freddy Brick, partner at @muddywatersre. Freddy joined Muddy back in 2014, right in the thick of one of the most interesting stretches in the short-selling business, and before that came from APAC activist powerhouse Oasis.

That Ellison boy is in way over his head

Podcast is long and info-dense (thanks both!) but worth a listen or a transcript skim. Parts that stood out to me, especially on capacity, unit economics, residual value, and bottlenecks. Separate power post to follow.

- Lab capacity today and forward: Both OpenAI and Anthropic are at roughly 2 to 2.5 GW today. Dylan estimates both reach 5 to 6 GW by year-end, with OpenAI slightly higher, and both target around 10 GW by end of next year.

- Anthropic revenue and implied compute need: Anthropic has been adding $4 to 6B in monthly revenue per Dylan's estimates. Straight-line that over 10 months and you get roughly $60B of incremental revenue. At sub-50% gross margins (per The Information), that implies around $40B of compute spend. At roughly $10B per GW in rental cost, that is 4 GW of new inference capacity needed just for revenue growth, before any training fleet expansion.

- Procurement strategy divergence: Anthropic was deliberately conservative on compute contracting; OpenAI signed aggressively and has better access to capacity into year-end. Anthropic now has to acquire capacity through Bedrock / Vertex / Foundry revenue-share arrangements or spot deals at steep premiums (Dwarkesh suggested 50% margins to the hyperscaler CSPs). Dylan has seen labs sign H100 deals at $2.40/hr for 2 to 3 year terms vs. a $1.40/hr fully loaded 5-year TCO. Standard 5-year contracts at $1.90 to $2.00 yield roughly 35% gross margins; late-cycle short-duration contracts yield dramatically more for the provider.

- Supply chain conviction decay: Labs know they need X compute. Nvidia builds X minus 1. Each layer down the supply chain builds X minus 1 again, sometimes X divided by 2. Conviction about demand attenuates at every step. Anthropic's compute team (ex-Google) spotted a dislocation and negotiated roughly 1M TPU v7s before Google leadership realized the demand. Google then went to TSMC asking for emergency capacity and was told they were sold out.

- GPU depreciation thesis: Bears argue H100 spot falls to $1.00/hr when Blackwell scales and $0.70/hr when Rubin scales. Dylan argues the opposite. GPT-5.4 is cheaper to run than GPT-4, has fewer active parameters, and is far more capable, so an H100 produces more tokens of a better model than it ever could before. The TAM for GPT-4 tokens was maybe low billions to tens of billions; GPT-5.4's TAM is "probably north of $100B." His direct quote: "An H100 is worth more today than it was three years ago." In a supply-constrained world, GPU value is set by marginal output value, not replacement cost.

- Memory crunch: Roughly 30% of Big Tech's 2026 AI CapEx goes to memory. Vendors were unprofitable in 2023 and did not build fabs. Even after the demand surge became foreseeable, it took a year for pricing to move, another 3 to 6 months for vendors to react, and fabs take 2 years to build, so meaningful relief likely does not arrive until late 2027 or 2028. DRAM has roughly tripled in price. He argues this spills into consumer electronics with significant BOM pressure on smartphones, though his volume decline projections (from 1.1B units to 500 to 600M) are on the more aggressive end.

- Long-run bottleneck: By 2028 to 2029, Dylan believes the binding constraint shifts to ASML EUV tools, currently produced at around 70 per year and growing to roughly 100 by end of decade. Each gigawatt of AI capacity requires about 3.5 EUV tools. That is $1.2B of tooling supporting $50B of downstream data center CapEx. The supply chain simply cannot scale fast enough.
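The revenue-to-gigawatts arithmetic above can be sanity-checked with a quick back-of-the-envelope script. This is a sketch, not reported financials: the ~35% gross-margin assumption, the $10B/GW rental figure, and the $1.40/hr TCO are the podcast's rough estimates, and the exact cost basis behind the quoted ~35% contract margin isn't specified, so the simple price-vs-TCO calculation lands a bit below it.

```python
# Back-of-the-envelope check of the Anthropic compute-need arithmetic.
# Dollar figures in billions; all rates are the podcast's rough estimates.

monthly_rev_add = 6.0        # ~$4-6B of revenue added per month (upper end)
months = 10
incremental_revenue = monthly_rev_add * months          # ~$60B

gross_margin = 0.35          # "sub-50%" per The Information; ~35% assumed here
compute_spend = incremental_revenue * (1 - gross_margin)  # ~$39B, i.e. "around $40B"

rental_cost_per_gw = 10.0    # ~$10B per GW of rented capacity
new_inference_gw = compute_spend / rental_cost_per_gw     # ~4 GW

print(f"incremental revenue: ${incremental_revenue:.0f}B")
print(f"implied compute spend: ${compute_spend:.0f}B")
print(f"new inference capacity needed: {new_inference_gw:.1f} GW")

# Provider-side gross margin on H100 rentals vs. the $1.40/hr fully loaded
# 5-year TCO (the podcast quotes ~35% for standard contracts; the exact TCO
# basis isn't specified, so treat these as directional).
tco = 1.40
for label, price in [("standard 5-yr @ $1.95/hr", 1.95),
                     ("late-cycle 2-3 yr @ $2.40/hr", 2.40)]:
    margin = 1 - tco / price
    print(f"{label}: {margin:.0%} gross margin")
```

The point of the exercise is the directional one in the thread: late-cycle short-duration contracts are dramatically richer for the provider than standard 5-year terms.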
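The EUV bottleneck figures also imply a hard ceiling on buildout pace. A quick illustrative calculation (the per-GW figures are the podcast's; the GW-per-year ceiling is my derivation, under the deliberately aggressive simplification that every ASML EUV tool serves AI-related capacity):

```python
# Illustrative ceiling on AI buildout implied by ASML EUV tool output.
# Per-GW figures are the podcast's estimates; the annual ceiling is derived.

euv_tools_per_year = 70        # ASML output today, growing to ~100 by end of decade
tools_per_gw = 3.5             # EUV tools needed per GW of AI capacity
tooling_cost_per_gw = 1.2      # $B of EUV tooling per GW
datacenter_capex_per_gw = 50.0 # $B of downstream data center CapEx per GW

# Even if every tool served AI demand (it doesn't), annual output caps buildout:
max_gw_per_year = euv_tools_per_year / tools_per_gw        # 20 GW/yr at today's output
leverage = datacenter_capex_per_gw / tooling_cost_per_gw   # ~42x CapEx per tooling dollar

print(f"max AI capacity addressable per year at 70 tools: {max_gw_per_year:.0f} GW")
print(f"downstream CapEx supported per $ of EUV tooling: {leverage:.0f}x")
```

With the labs targeting ~10 GW each by end of next year, a ceiling in the low tens of GW per year across the whole industry is why Dylan treats EUV as the binding constraint by 2028 to 2029.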

Whatever happened to quantum computing?

We issued a response to an inaccurate short seller report. Read more here: investors.sofi.com/news/news-deta…
