longDRAM123
@metalonger
long it
Austin, TX · Joined January 2026
81 Following · 20 Followers
8 posts
tae kim @firstadopter·
What if, hypothetically .. hear me out .. there was a developed capitalist democracy with a plethora of companies trading at single-digit P/E multiples, incredible growth rates, and U.S. retail investors just recently gaining access to the country?
HH @RealHerbHoover

I’ve seen no fewer than 7 tweets tonight pumping random global microcap semiconductor-adjacent companies. Never seen anything like this in terms of the mania extending to foreign markets. Speculators are getting less and less sophisticated.

longDRAM123 @metalonger·
@MarcosMillaYT Amazing article, can you go deeper into valuation for the memory industry?
Trade Whisperer @TradexWhisperer·
$MU $DRAM $TSM $NVDA $SNDK JUST IN. SemiAnalysis published that NVIDIA & TSMC are charging too little relative to exploding AI value.

Vera Rubin at baseline: 15.3% IRR. Vera Rubin after a 40% price hike: 38% IRR. Now, buyers do NOT walk away at 38%. They lock in long-term contracts just to secure access. That gap between 15.3% and 38% is a pricing opportunity.

TSMC has the same lever: N3 goes from 9% AI in 2025 to 90% AI in 2027. Utilization exceeds 100% in H2 2026. Non-AI crowded out completely.

Now add memory. DRAM fabs already above 90% utilization. NAND prices up 70% QoQ. DDR5 up 63% QoQ. Samsung shortening contracts because pricing power has shifted back to suppliers. When you run at 90%+ utilization with no meaningful supply relief in sight, you do not discount. You raise prices.

NVDA charges too little. TSMC charges too little. Memory is already correcting that mistake. Token demand is compounding. Agentic AI Invasion Incoming. Been saying this since February. Bullish 🔥
Trade Whisperer @TradexWhisperer

$MU $SNDK Agentic AI Inference (Not just Inference). What It Means for Memory.

Normal inference: memory needed for milliseconds. Agentic inference: memory needed for hours or days.

What is Agentic AI? Regular AI answers a question and forgets everything. Agentic AI doesn't reset. It plans, reasons, remembers, and executes across hours or even days. Think of it as an AI employee that never clocks out.

Result? HBM fills up. Demand spills to Server DRAM. Then NAND. The entire stack saturates at once.

The outlook: Agentic AI is just getting started. Microsoft, Google, every hyperscaler is building toward always-on AI agents embedded in enterprise workflows. The memory footprint isn't growing linearly. It's compounding. Bullish 🔥

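The pricing claims in the tweets above (NAND up 70% QoQ, DDR5 up 63% QoQ) imply very steep compounding if the rates were to persist. A minimal Python sketch, using the tweets' own growth figures and an arbitrary base price index of 100 (not real market data), just to show what sustained quarter-over-quarter compounding looks like versus linear growth:

```python
# Illustrative only: growth rates come from the tweets above
# (+70% QoQ NAND, +63% QoQ DDR5); the base index of 100 is arbitrary.

def compound(start: float, qoq_growth: float, quarters: int) -> float:
    """Price index after `quarters` of constant quarter-over-quarter growth."""
    return start * (1 + qoq_growth) ** quarters

def linear(start: float, qoq_growth: float, quarters: int) -> float:
    """Same starting rate, but applied linearly (no compounding)."""
    return start * (1 + qoq_growth * quarters)

for name, rate in [("NAND", 0.70), ("DDR5", 0.63)]:
    c = compound(100, rate, 4)  # four quarters of sustained growth
    l = linear(100, rate, 4)
    print(f"{name}: compounded index {c:.0f} vs linear index {l:.0f}")
```

Four quarters at +70% compounds a base of 100 to roughly 835, versus 380 if the same rate were applied linearly; whether such rates are sustainable is, of course, exactly what the speculation is about.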