
To.the.mooooon

If $NVDA wins, $TSM wins
If $AVGO wins, $TSM wins
If $MU HBM wins, $TSM wins
If $SNDK HBF wins, $TSM wins
If $AMD AI chips win, $TSM wins
If $QCOM silicon wins, $TSM wins
If $AAPL silicon wins, $TSM wins
If hyperscaler ASICs win, $TSM wins

Is that clear enough? TSMC All In

$MU AMA: The Two Best Questions (Related)

"How could Micron avoid the age-old cycle of capacity expansion → oversupply → ASP collapse?" @yizheng95

"It seems natural that acceptable P/E ratios will expand. What are your thoughts on this?" @melone3710

My Refined Answer:

Micron is transitioning from cyclical commodity memory (DRAM/NAND) to a premium, AI-centric platform with HBM, CXL, and SOCAMM, brand-new products that never existed before. This gives far more wafer-allocation flexibility, flattening booms and busts and justifying P/E expansion beyond historical forward single-digit valuations. AI is far more memory-hungry than any previous tech wave, and right now it's mostly enterprise demand. Consumer demand hasn't even kicked in YET, and with rate cuts over the next few years, consumer demand will rise too.

Why Is Now Different? A More Robust Portfolio

Legacy: DRAM + NAND → Consumer + Enterprise mix → Limited flexibility, sharp cycles.

New Reality: HBM (high bandwidth for AI GPUs) + CXL (memory pooling/expansion) + SOCAMM (efficient AI-server modules) → Heavy enterprise/AI focus → Much higher margins and agility to shift production.

The broader portfolio lets Micron redirect wafers from softening areas (e.g., consumer NAND) to regular enterprise NAND or HPC/AI HBM without big ASP hits → more stable earnings.

Thesis: Cycle Flattening = Valuation Upgrade

More memory for you, but also for 8 billion people on Earth: your lifelong AI companions or agents will need to remember decades of inquiries, so when you ask about a trip or a problem, they should be able to access a travel inquiry you made 15 or 30 years ago in a similar location and formulate the best suggestion based on all the context available (memory). Now multiply that by 8 billion people. AI inference won't just demand more memory; it will demand an unprecedented transformation of the memory industry overall.

What's up with HBM?
- HBM TAM is exploding: ~$35B in 2025 → ~$100B by 2028 (40% CAGR, two years faster than prior forecasts).
- Micron's entire 2026 HBM supply is sold out (including HBM4 in volume production early), with multi-year contracts locking in pricing and visibility.
- HBM consumes 3-4x more wafer per bit.
- Memory is hardware, not software, so we can't make more of it in just weeks. We have to build new fabs, and that can take 5 to 10 years. So good luck waiting for 3-4x more fabs "all of a sudden."

Like every real AI enabler, a 20–25x+ forward P/E isn't just acceptable; it's inevitable.
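The TAM figures above imply the quoted ~40% CAGR. A quick sanity check of that arithmetic (the $35B/$100B endpoints are from the post; everything else is just the standard CAGR formula):

```python
# Sanity check: does ~$35B (2025) -> ~$100B (2028) imply ~40% CAGR?
start_tam = 35.0   # $B, 2025 (figure from the post)
end_tam = 100.0    # $B, 2028 (figure from the post)
years = 3          # 2025 -> 2028

# CAGR = (end / start)^(1/years) - 1
cagr = (end_tam / start_tam) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")

# Cross-check: compound $35B at a flat 40%/yr for 3 years
projected = start_tam * 1.40 ** years
print(f"$35B at 40%/yr for 3 years: ${projected:.0f}B")
```

The implied rate comes out just under 42%, so the round "40% CAGR" in the post is consistent with its own endpoints.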

What if hyperscaler capex is generational moat-building from disciplined capital allocators?


Acquiring my first Punk was not on my bingo card for this year… but here we are. Owning this piece of history has been a goal since I got into crypto. I can’t believe it’s real. Ty @punksOTC for the assistance (and to Fartcoin for going up a lot) Finally. #2410 is home.

