prefshares

6.5K posts

@PrefShares

Curious.

Joined March 2016
1.9K Following · 15.9K Followers
prefshares @PrefShares
I do not look at drawdowns from ATHs as a shorthand for how attractive the investment environment is. My investable universe is the market of stocks, not the stock market. It’s important to underwrite returns bottom-up.
Ozark @cryptobyrde

@PrefShares you can say this when SPX -15%, not -5%

prefshares @PrefShares
If you have a time horizon longer than 2 years, this feels like a pretty attractive buying opportunity for a lot of dislocated high quality names. We could easily go lower. Oil could easily go to $150. However, you can't invest that way. If you're focused on LT earnings power, I don't think this takes us off trend from an economic growth perspective.
prefshares retweeted
HeyMan800 @HeyMan800
Big shout out to the activists at Elliott Management that convinced Southwest Airlines management to stop hedging fuel costs.
prefshares @PrefShares
The Zeitgeist.
prefshares @PrefShares
part of me feels that it's easier to say power/construction availability is less of a bottleneck because theoretically it can come online, whereas advanced packaging/memory/N3 capacity are all quantifiable. but my intuition is that DC capacity being throttled by (harder to pin down quantitatively) human bottlenecks is what will ultimately prevent us from building mid-20s GWs of AI DC capacity over the next couple of years… at the margin that feels bullish for the hyperscalers
Shanu Mathew @ShanuMathew93
@PrefShares latter, not sure there's any one good piece on it. just continued references to the shortage of skilled labor across reports and calls
Shanu Mathew @ShanuMathew93
Podcast is long and info-dense (thanks both!) but worth a listen or a transcript skim. Parts that stood out to me, especially on capacity, unit economics, residual value, and bottlenecks. Separate power post to follow.
- Lab capacity today and forward: Both OpenAI and Anthropic are at roughly 2 to 2.5 GW today. Dylan estimates both reach 5 to 6 GW by year-end, with OpenAI slightly higher. Both are targeting around 10 GW by end of next year.
- Anthropic revenue and implied compute need: Anthropic has been adding $4 to $6B in monthly revenue per Dylan's estimates. Straight-line that over 10 months and you get roughly $60B of incremental revenue. At sub-50% gross margins (per The Information), that implies around $40B of compute spend. At roughly $10B per GW in rental cost, that is 4 GW of new inference capacity needed just for revenue growth, before any training fleet expansion.
- Procurement strategy divergence: Anthropic was deliberately conservative on compute contracting. OpenAI signed aggressively and has better access to capacity into year-end. Anthropic now has to acquire capacity through Bedrock/Vertex/Foundry revenue-share arrangements or spot deals at steep premiums (Dwarkesh suggested 50% margins to the hyperscaler CSPs). Dylan has seen labs sign H100 deals at $2.40/hr for 2 to 3 year terms vs. a $1.40/hr fully loaded 5-year TCO. Standard 5-year contracts at $1.90 to $2.00 yield roughly 35% gross margins. Late-cycle short-duration contracts yield dramatically more for the provider.
- Supply chain conviction decay: Labs know they need X compute. Nvidia builds X minus 1. Each layer down the supply chain builds X minus 1 again, sometimes X divided by 2. Conviction about demand attenuates at every step. Anthropic's compute team (ex-Google) spotted a dislocation and negotiated roughly 1M TPU v7s before Google leadership realized the demand. Google then went to TSMC asking for emergency capacity and was told they were sold out.
- GPU depreciation thesis: Bears argue H100 spot falls to $1.00 when Blackwell scales and $0.70 when Rubin scales. Dylan argues the opposite. GPT-5.4 is cheaper to run than GPT-4, has fewer active parameters, and is far more capable. An H100 produces more tokens of a better model than it ever could before. TAM for GPT-4 tokens was maybe low billions to tens of billions. GPT-5.4 TAM is "probably north of $100B." His direct quote: "An H100 is worth more today than it was three years ago." In a supply-constrained world, GPU value is set by marginal output value, not replacement cost.
- Memory crunch: Roughly 30% of Big Tech 2026 AI CapEx goes to memory. Vendors were unprofitable in 2023 and did not build fabs. Even after the demand surge became foreseeable, it took a year for pricing to move, another 3 to 6 months for vendors to react, and fabs take 2 years to build. Meaningful relief likely does not arrive until late 2027 or 2028. DRAM has roughly tripled in price. He argues this spills into consumer electronics with significant BOM pressure on smartphones, though his volume decline projections (from 1.1B to 500 to 600M units) are on the more aggressive end.
- Long-run bottleneck: By 2028 to 2029, Dylan believes the binding constraint shifts to ASML EUV tools. Currently producing around 70 per year, growing to roughly 100 by end of decade. Each gigawatt of AI capacity requires about 3.5 EUV tools. That is $1.2B of tooling supporting $50B of downstream data center CapEx. The supply chain simply cannot scale fast enough.
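The revenue-to-capacity arithmetic in the Anthropic bullet above can be sketched in a few lines. All inputs are the estimates quoted in the post ($6B/month added revenue, 10 months, $60B revenue supporting ~$40B of compute, $10B/GW rental cost); none of them are hard data.

```python
def implied_inference_gw(monthly_rev_add_bn: float,
                         months: int,
                         compute_share_of_rev: float,
                         rental_cost_bn_per_gw: float) -> float:
    """Straight-line the incremental revenue, convert it to compute
    spend, then to GW of inference capacity at the quoted rental cost."""
    incremental_rev_bn = monthly_rev_add_bn * months          # e.g. 6 * 10 = $60B
    compute_spend_bn = incremental_rev_bn * compute_share_of_rev
    return compute_spend_bn / rental_cost_bn_per_gw

# Compute share of 40/60 is what the post's "$60B revenue -> ~$40B compute"
# (sub-50% gross margin) implies. Result: ~4 GW of new inference capacity.
gw_needed = implied_inference_gw(6.0, 10, 40 / 60, 10.0)
```

The point of the sketch is how directly the 4 GW figure falls out once you accept the three input estimates; changing any of them scales the answer linearly.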
Dwarkesh Patel @dwarkesh_sp

.@dylan522p lays out how we know the hard upper bound on how much compute can be produced annually by 2030: around 200 GW/year. That's a crazy number (there's about 20 GW of AI compute deployed in the world right now), but it's nowhere near enough to satisfy Sam/Elon/Dario/Demis's ambitions.

Lots of things in the supply chain can be scaled up over 4 years, including things that other people think are bottlenecks, like datacenter power or fab clean room space. But the thing that's inflexible over that timeline is the number of EUV tools. Dylan forecasts that production of ASML's EUV tools will scale from 60 per year now to about 100 per year by the end of the decade, which means something like 700 total machines running in 2030.

For a fab to make a GW worth of the Rubin chips that NVIDIA is deploying later this year, it needs to make 55,000 3nm wafers, 6,000 5nm wafers, and 170,000 memory wafers. Each 3nm wafer needs about 20 EUV passes, so about 1.1 million passes per GW. Adding on 5nm and memory, you need two million passes. Each tool can do 75 passes per hour, so with 90% uptime that's around 600k passes per year, meaning a single machine can make less than a third of a GW in a year.

So in 2030 we have 700 total machines, each making 0.3ish GW a year, which means we can produce 200 GW of compute a year. That's a lot. But Sam Altman wants a gigawatt a week by the end of the decade. Anthropic and Google will want about the same. And Elon wants to be putting 100 GW in space every year. Any one of these players could maybe get what they need, but not all of them.
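The EUV-throughput chain above is pure multiplication, so it is easy to check. The wafer counts, passes per wafer, tool rate, and 90% uptime are all figures from the quoted post; only the code structure is mine.

```python
# Per the post: 55,000 3nm wafers per GW at ~20 EUV passes each (~1.1M passes),
# plus 5nm and memory layers for ~2M EUV passes per GW in total.
WAFERS_3NM_PER_GW = 55_000
PASSES_PER_3NM_WAFER = 20
TOTAL_PASSES_PER_GW = 2_000_000

passes_3nm_per_gw = WAFERS_3NM_PER_GW * PASSES_PER_3NM_WAFER   # ~1.1M

# One tool: 75 passes/hour, 90% uptime, running year-round.
passes_per_tool_year = 75 * 24 * 365 * 0.90                    # ~591k passes/year

# So a single tool supports under a third of a GW per year...
gw_per_tool_year = passes_per_tool_year / TOTAL_PASSES_PER_GW  # ~0.30 GW

# ...and a ~700-tool fleet in 2030 caps annual output around 200 GW.
fleet_gw_2030 = 700 * gw_per_tool_year
```

Running the numbers reproduces the post's bound: roughly 0.3 GW per tool per year and about 200 GW/year for the 2030 fleet.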

prefshares @PrefShares
@ShanuMathew93 What's the best thing you've seen/read/listened to on sizing the labor constraint? Or are you just building a mosaic of multiple datapoints (which is what I've been doing, as I agree with this view)?
Shanu Mathew @ShanuMathew93
Dylan basically says power is not easy to scale, but it is not the binding constraint on AI scaling; semiconductors and tooling are the harder ceiling. [My thoughts are in brackets after each summary bullet]
- Current state: AI data centers are 3 to 4% of the US grid today, projected to hit 10% by 2028. The US grid is terawatt-scale. [Restating the EPRI estimate]
- Supply diversity is much broader than people think: Most analysis focuses on combined-cycle gas turbines from three manufacturers (GE Vernova, Mitsubishi, Siemens). Dylan tracks 16 different gas-powered generation manufacturers. Beyond combined-cycle, there are aeroderivative turbines (jet engines converted), medium-speed reciprocating engines (Cummins-type), ship engines (Nebius is running these for a Microsoft data center in New Jersey), Bloom Energy fuel cells, and solar plus battery. His claim: "Any of these individually will do tens of gigawatts, and as a whole, they will do hundreds of gigawatts." [Maybe not as bullish on each doing tens of GWs, but I've been screaming this for years now; CCGT turbine wait times don't define the entire portfolio of power solutions!]
- Behind the meter is the dominant mode going forward: By end of decade, roughly half of new capacity additions will be behind the meter. More expensive than grid-connected, but it avoids permitting and interconnection queue bottlenecks. [Seems crazy high to me; maybe if the largest projects for the frontier labs all opt for BTM you get in the 25-50% range by GW, but much lower in terms of number of projects. I still feel like the grid needs to figure it out before it gets to that level]
- Cost sensitivity is low: Even if generation CapEx rises from $1,500/kW to $3,500/kW, GPU TCO only moves from $1.40 to $1.50/hr. The marginal utility of the models being served is worth far more than a ten-cent increase in energy cost. [Truth; have modeled this out before at the data center level, will share math at the GPU rental level, but it just does not move the needle in terms of TCO despite being nominally large for power people]
- Grid unlock: The grid is sized for peak usage on the hottest day of summer. That peak is 10 to 20% above average load. If you deploy enough utility-scale batteries or peaker plants to absorb that spike, you unlock roughly 20% of the existing US grid for data centers. That is hundreds of gigawatts of capacity that is currently sitting idle most of the year. [No surprise that I'm hopeful/bullish on batteries improving grid utilization and assisting with peaks]
- Labor: Labor is a real constraint. Crusoe's 1.2 GW Abilene site had roughly 5,000 workers at peak. Scaling to 100+ GW per year implies hundreds of thousands of workers. The mitigation is modularization: pre-integrated power, cooling, and server blocks assembled in Asian factories, shipped as megawatt-scale units, drastically reducing on-site electricians and plumbers. [Correct, have been screaming this one for a while too, and it is the harder bottleneck to overcome. You can modularize aspects of manufacturing (e.g., pre-fab MEP skids), but it's still early and there are limits to how much you can do.]
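Two of the claims above are checkable with rough arithmetic: that raising generation CapEx from $1,500/kW to $3,500/kW only moves GPU TCO by cents per hour, and that absorbing the summer peak frees a large slice of the grid. The kW-per-GPU figure, the amortization window, and the ~800 GW average US load are my assumptions, not numbers from the post.

```python
HOURS_PER_YEAR = 8760

def tco_delta_per_gpu_hr(extra_capex_per_kw: float,
                         kw_per_gpu: float = 1.4,
                         amort_years: float = 5.0) -> float:
    """Extra generation CapEx, straight-lined over an assumed GPU life.
    kw_per_gpu (fully loaded server power) and amort_years are assumptions."""
    return extra_capex_per_kw * kw_per_gpu / (amort_years * HOURS_PER_YEAR)

# $3,500/kW minus $1,500/kW = $2,000/kW of extra CapEx.
# Straight-line gives ~$0.06/GPU-hr; with financing costs it lands near the
# ten-cent figure quoted. Either way it is cents, not dollars.
delta = tco_delta_per_gpu_hr(3500 - 1500)

def grid_headroom_gw(avg_load_gw: float, peak_over_avg: float = 0.15) -> float:
    """Capacity that sits idle outside the peak, freed if batteries or
    peakers absorb the spike. 15% is the midpoint of the quoted 10-20%."""
    return avg_load_gw * peak_over_avg

# Assuming ~800 GW average US load (terawatt-scale grid): ~120 GW of headroom.
headroom = grid_headroom_gw(800)
```

Both results support the qualitative claims: the TCO delta is an order of magnitude below the hourly rental rates discussed earlier in the thread, and the peak-smoothing headroom is in the hundreds-of-GW range the post describes.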
prefshares @PrefShares
Spent back-to-back days going through Nvidia and Vulcan Materials conferences. One turns silicon into global intelligence worth trillions. The other sells rocks for ~$20/ton and can’t ship 'em 25 miles without killing the economics. Both great businesses. $NVDA $VMC
prefshares @PrefShares
@hereisramji @thogge “If options aren’t a form of compensation, what are they? If compensation isn’t an expense, what is it?”
Ramji Srinivasan b/ruv @hereisramji
Bezos on why Free Cash Flow is the metric that matters: "Why not focus first and foremost, as many do, on earnings, earnings per share or earnings growth? The simple answer is that earnings don’t directly translate into cash flows, and shares are worth only the present value of their future cash flows, not the present value of their future earnings. Future earnings are a component—but not the only important component—of future cash flow per share. Working capital and capital expenditures are also important, as is future share dilution." sec.gov/Archives/edgar…
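Bezos's distinction can be made concrete with the standard FCF identity: free cash flow is earnings plus non-cash charges, minus capex and working-capital growth. The numbers below are made up purely for illustration.

```python
def free_cash_flow(net_income: float, depreciation: float,
                   capex: float, change_in_nwc: float) -> float:
    """Simplified FCF: earnings plus non-cash depreciation, minus the cash
    actually consumed by capital expenditures and working-capital growth."""
    return net_income + depreciation - capex - change_in_nwc

# Two hypothetical businesses, each reporting $100M of net income:
asset_light = free_cash_flow(100, depreciation=10, capex=15, change_in_nwc=5)
asset_heavy = free_cash_flow(100, depreciation=40, capex=120, change_in_nwc=30)
# Identical earnings; the asset-light business converts them to $90M of cash
# while the asset-heavy one burns $10M, which is exactly Bezos's point about
# earnings not directly translating into cash flows.
```

This is only the unlevered textbook form; per-share value would further depend on dilution, which the quoted letter also flags.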
tyler hogge @thogge
One thing is 100% clear to me: With very few exceptions, tech cos are some of the worst operated businesses in the history of capitalism exhibit A: Atlassian -25 yrs old -hasn’t been profitable for a decade -promoted an engineer the same day he was laid off This is the rule.
BuccoCapital Bloke @buccocapital
Can you guys just tell me when I can buy Apollo? Those guys seem like absolute psychos, so I'd love to have them fighting just at the edge of legality and ethics for my money. It just seems deliberately confusing to figure out what's going on. Thanks
prefshares @PrefShares
$ADBE to put up the cleanest quarter in a year or two, but announced Shantanu is stepping down. Stock down 7% AH. Tough software tape.
prefshares @PrefShares
What about the fact that TPUs & Trainium potentially provide better TB/s/W/$? And I agree on Nvidia having secured its supply chain for the next ~2 years, but that feels more like a near/medium-term benefit and not necessarily a long-term moat. Do those same advantages hold when HBM and wafer capacity come online in 2027/8?
SouthernValue @SouthernValue95
What is used first for training is used later for inference. NVDA should provide superior economics (better performance per dollar per watt) and a longer useful life. Dynamo also a differentiator on inference side. Also get the benefit of NVDA managing supply chain risk. Still king. But it is important to have a second source.
prefshares @PrefShares
I mean realistically isn’t the bear case that as the path upon which we scale has narrowed (transformer architecture), the need for flexibility has diminished… and as the buyers of compute are now paying tens of billions a year for that compute (& software has gotten easier than ever to write) — everyone has the incentives and the ability to make custom silicon/ASICs work for inference workloads? And therefore Nvidia’s moat in inference is significantly lower than it is for training? Oh, and by the way, the hyperscalers can self-preference and use their own chips for the inference running over their own model gateways (Bedrock, Foundry, Vertex)? Oh, and also, the most successful frontier lab in the world is using those custom ASICs > Nvidia chips.
Za @ZaStocks

The $NVDA numbers are getting so big and absurd and execution has been so strong that the only real bear case left is “how much bigger can it get?” or “how much capex is left?” Which probably means it eventually goes higher than most people expect.
