Nutty
@NuttyCLD
1.7K posts

Analog IC Design Engineer in Silicon Valley. Writing about circuits, semiconductors & industry. Substack: https://t.co/IVnULFqWLI

Cupertino · Joined January 2015
1.6K Following · 16.5K Followers
Nutty@NuttyCLD·
@Longviewres I did not mention $CGEH in the article.
Longview Research@Longviewres·
At least credit other people's work for leading you up to this point and recommending $CGEH. Don't get why you are trying to monetize this. For anyone reading this, I have my free 25 page extensive report on $CGEH coming out this coming Monday, do not pay to read this guy's article that essentially copy/pasted everyone else's research.
Nutty@NuttyCLD·
Just to clarify, I did not mention $CGEH in the article. The piece is not a CGEH thesis. It is about the broader AI power stack: grid constraints, time-to-power, onsite generation, SOFC, nuclear timelines, and power equipment. If $CGEH fits somewhere in that map, I’m happy to look at it separately. But this article was not written as a paid $CGEH write-up or a repackaging of anyone’s thesis.
Nutty@NuttyCLD·
While the market fixates on flashy GPU specs and front-end node shrinks, the real battleground for the next AI hardware cycle has shifted to the very end of the line: Assembly. In the era of 16-Hi HBM4, chiplets, and CPO, ecosystem dominance now depends entirely on how precisely we can stack and bond different dies at the atomic level. Moving beyond superficial ticker lists, @PhotonCap's latest piece isolates the 7 global bonding equipment companies standing at the center of this hidden warzone. From the massive M&A scenarios poised to disrupt the value chain to identifying the purest beneficiaries of the hybrid bonding cycle, this article breaks it all down.
Photon Capital@PhotonCap

x.com/i/article/2050…

Nutty@NuttyCLD·
Optics remains the core sector driving AI infrastructure, but the days of simply chasing keywords are over. A deep understanding of the specific technological 'chokepoints' each company controls within the value chain is now absolutely essential. @damnang2 ’s latest piece sharply analyzes the extreme price decoupling among 22 key optical companies and pinpoints the next M&A targets. As the market aggressively separates the signal from the noise, this dense, highly analytical breakdown serves as an excellent compass for anyone looking to reposition their portfolio.
Damnang2@damnang2

x.com/i/article/2050…

Nutty reposted
Mikael Wåhlin@Plaskpojken·
Look at this recent interaction on LinkedIn. Jabil, a $34 Billion Fortune 500 manufacturing titan, publicly confirming their partnership with Sivers Semiconductors ( $SIVE / $SIVEF ). What are they building together? An ultra-low-power 1.6T optical transceiver, powered by Sivers' custom CW-DFB Indium Phosphide lasers. ▪️ The Tech: CW-DFB is the exact high-power laser architecture required to solve the AI datacenter heat and bottleneck crisis. ▪️ The Partner: Jabil manufactures hardware for the biggest tech giants on the planet. When a Tier-1 foundry adopts your laser for their 1.6T module, you have successfully penetrated the global supply chain. ▪️ The Pick & Shovel Play: It doesn't matter if data centers buy custom Co-Packaged Optics or standard 1.6T pluggables from Jabil. $SIVE supplies the light engine for both. Also make sure to read @PepInvestStocks post below, where he highlights the chain to $INTC $SIVE $SIVEF $JBL #DeepTech #SiliconPhotonics #AI
Pep Invest@PepInvestStocks

$SIVE The deal that changes everything 🔥 $JBL $INTC Jabil + Sivers = 1.6T LRO in volume

On April 15, 2026, Jabil (JBL) and Sivers Semiconductors announced a strategic partnership. Jabil is developing and manufacturing a 1.6‑terabit Linear Receive Optical (LRO) pluggable transceiver module - and is relying exclusively on Sivers' high‑power DFB laser chips and arrays.

Why is this so crucial? LRO is the logical evolution of Linear Pluggable Optics (LPO): DSP/retiming functions move into the switch ASIC (Broadcom Tomahawk 5/6, etc.). The result: up to 2.5× lower power consumption per bit and significantly less heat. Exactly what hyperscalers (Meta, Google, Microsoft, Amazon, xAI) need for 100k+ GPU clusters to break through power walls and cooling limits.

Each individual 1.6T LRO module requires multiple high‑precision InP DFB lasers from Sivers - not commodity parts, but customized, high‑power, low‑noise light sources optimized for silicon photonics and CPO. Scaling at Jabil = direct scaling at Sivers. With a market cap of over USD 27 billion and as one of the largest EMS/supply‑chain partners of the hyperscalers, Jabil has the manufacturing power to bring these modules into the tens of thousands, later hundreds of thousands. If Jabil produces 1.6T LRO in volume, Sivers' laser demand scales 1:1.

The real bottleneck: InP lasers are the new "silicon wafer" of the AI era. The AI‑optics market is exploding:
▪️ Optical interconnects for AI: from ~USD 8.6B (2025) to USD 38B by 2034
▪️ 1.6T modules alone are expected to exceed 5 million units in 2026
▪️ The entire pluggable + CPO market is growing at 20%+ CAGR

But here's the catch: wafer yields for InP lasers are below 30% for many players. Scaling is extremely difficult - it requires years of process expertise, specialized epitaxy, and yield‑ramp know‑how. Sivers has exactly that: one of the few scalable, commercially validated InP platforms.

Nutty@NuttyCLD·
AI did not just create demand for more GPUs. It changed what “cloud” is supposed to do. The old cloud was built to divide resources across many workloads. AI training does the reverse: it stitches thousands of GPUs into one giant job. That gap is where neoclouds entered. Full article: open.substack.com/pub/nuttycld/p…
Nutty@NuttyCLD

x.com/i/article/2049…

Nutty@NuttyCLD·
@austinhodly Thanks. It was a bit hard to find your DM because it was somewhat hidden. I just replied to you!
SY@austinhodly·
@NuttyCLD Hi, can you check your DM about your Substack subscription issue? Thx!
Nutty@NuttyCLD·
The last 1 cm in AI GPU power delivery is not disappearing. 800V helps the rack. 48V reshapes the board. But the GPU core still needs thousands of amps at less than 1V. That is where the business models begin to diverge. Same current. Five different income statements. Full deep dive is now live: nuttycld.substack.com/p/the-economic…
Nutty@NuttyCLD

x.com/i/article/2049…
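The "thousands of amps at less than 1V" claim is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, where the power and voltage figures are illustrative assumptions rather than numbers from the thread:

```python
def core_current(power_w: float, vcore_v: float) -> float:
    """Current the core rail must deliver: I = P / V."""
    return power_w / vcore_v

def i2r_loss(current_a: float, resistance_ohm: float) -> float:
    """Conduction loss in the delivery path: P_loss = I^2 * R."""
    return current_a ** 2 * resistance_ohm

if __name__ == "__main__":
    # Assumed accelerator power/voltage points, for illustration only.
    for power, vcore in [(700, 0.8), (1000, 0.75), (1400, 0.7)]:
        amps = core_current(power, vcore)
        # Even 0.1 milliohm of residual board/package resistance is costly
        # at kiloamp currents, since loss grows with the square of current.
        loss = i2r_loss(amps, 0.0001)
        print(f"{power} W @ {vcore} V -> {amps:.0f} A, "
              f"loss at 0.1 mOhm: {loss:.0f} W")
```

The quadratic I²R term is why that "last 1 cm" matters: halving the voltage doubles the current but quadruples the conduction loss over the same path, which is what pushes regulation ever closer to the die.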

Papa Johns@SVTrivo·
@Drjames1177 Those guys say they didn't invite me to lunch because I don't do Substack.. So petty, haha. For now I'll stay on X a bit longer and keep putting my thoughts out here. Substack can wait until I retire... haha
Papa Johns@SVTrivo·
Introducing a Korean Ph.D. Semiconductor Engineering Crew on X in Silicon Valley 🚀

I'm connecting multiple Korean Ph.D. engineers working at the forefront of semiconductors, photonics, and AI hardware in the Bay Area. Complementary expertise covering the full stack of next-gen challenges (optics, power, test, integration, and execution).

Quick funny story though 😂 A few days ago these three had lunch together in downtown San Jose… without me. Reason? "Only people with Substacks are allowed." Real mature, guys, real mature 😠 So I'm officially fixing this situation right now.

1. PhotonCap (@PhotonCap) - Optics/Photonics Expert (Ph.D. EE). Hands-on silicon photonics, PIC design. Substack: photoncap.net — Deep technical + investment insights on silicon photonics, CPO, optical interconnects, and compound semiconductors. Trusted by quants and engineers (~4.5K+ subs).

2. NuttyCLD (@NuttyCLD) - Analog Semiconductor Expert (Ph.D. EE). Gifted at explaining the complex power delivery, interconnects, and physical infrastructure behind AI in clear, accessible ways. Substack: nuttycld.substack.com — Must-read "AI Power Crisis" series.

3. Damnang (@damnang2) - Design for Testability (DFT) Expert (Ph.D. EE) with broad insights in design/test/packaging/yield. Strong US-KR industry network (~1,000 professionals, community lead). Substack: damnang2.substack.com — In-depth yet readable analysis on semis, optics, packaging, foundry/test, and company deep-dives (~6K+ subs).

4. PapaJohns (@SVTrivo — me) - Wireless Communication SoC Expert (Ph.D. EE). Long experience across a wide scope of SoC design + engineering management. Substack: Coming soon... (My X hasn't reached the level I want yet, so Substack feels like bad ROI for now 😂 — but if you guys follow me a lot, that day might come sooner 👀)

Why this group? Together we cover key AI hardware bottlenecks — from photonics interconnects and power walls to testing/yield and system integration. Expect high-signal discussions, potential collabs, and real insights. If you're in SV and working in semis, photonics, AI infra, or related fields — feel free to reach out or subscribe to them.

And next lunch in San Jose? Don't even think about leaving me out again. Who else should we connect with? Drop suggestions below 👇
Papa Johns@SVTrivo

@damnang2 For anyone who missed it.. x.com/damnang2/statu… These guys went and ate without me, then posted it from their X accounts.. haha, what operators 😆 Since moving over to X, my blog has been left neglected.. I did think about letting them talk me into migrating my blog and personal domain to Substack and tidying it all up~ but since I can't even monetize X yet, I'm holding off~

Nutty@NuttyCLD·
Good call. It’s not excluded. I’ve placed it in Tier 2 based on my criteria. Tier 1 is where this transition is closest to the company’s core reason for existence, like Bloom SOFC, Vicor VPD, and MPS GPU VRMs. Tier 2 still has strong exposure to AI and 800V, but with broader end markets. HPS.A fits the latter. Its power solutions extend beyond datacenter into solar, EV charging, and industrial drives, so it is less pure-play. That said, within Tier 2, it is still on the higher end in terms of category concentration.
grib@grib01605511·
@NuttyCLD You’re forgetting the best one: $HPS.A
Nutty@NuttyCLD·
AI power stack is starting to move together. Over the past week, pure-play names across GaN, VRM, SiC, and even nuclear moved higher together — not on a single catalyst, but on a shared realization: power is becoming the bottleneck. This is not just a trade. It’s a structural shift. I’ve been mapping this across the full stack — from grid → 800V → VRM → on-chip → BSPDN — in my AI Power Crisis series. If you’re trying to understand where value actually accrues in AI infrastructure: nuttycld.substack.com/t/power-semico… A quick note: I’ll still publish some free posts, but most major deep-dive pieces going forward will be paid. Pledges are open until April 27, 11:59 PM PT. Paid subscriptions begin April 28, with pricing expected to increase once live. nuttycld.substack.com
Nutty@NuttyCLD·
The Age of Interconnection. Compute still matters. The recent resurgence of CPUs makes that clear. But today, the constraint is less about compute itself, and more about how that compute translates into system-level performance. At the system level, the bottlenecks are increasingly shifting toward packaging, memory, optics, power, and custom silicon. That’s why HBM, CoWoS, CPO, 800V DC, CXL, Ethernet, and ASICs no longer feel like separate stories, but part of the same structural shift. I’ll keep exploring this through a mix of macro frameworks and company-level deep dives. Substack: nuttycld.substack.com Most deep dives will be paid going forward (pledges open until Apr 27 PT; price increases after launch).
Nutty@NuttyCLD

x.com/i/article/2046…

Nutty@NuttyCLD·
Good point. I agree that from NVIDIA’s perspective, vendor dependency and control make HBF a much more complicated decision. That said, as we’ve seen before, when physical constraints and workload pressure build up, some level of compromise tends to happen. So I still think the door remains open, even if cautiously. I actually wrote a piece earlier where I go into my view on HBF in more detail — feel free to check it out if you have time. x.com/NuttyCLD/statu…
David Liao@DavidLiaoCH·
You misunderstood my question. HBF is not going to replace HBM. I have doubts whether NVDA will even add it in addition to HBM. It creates too much dependency on SK Hynix and $SNDK. They may create their own HBF standard and force the NAND makers to supply just the storage while owning the protocol and the interface. HBF might have an easier time finding its way into TPUs or other inference ASICs.
Nutty@NuttyCLD·
Thanks. Great question. My current view is that HBF is more likely to be a complement to HBM, not a replacement, at least in the near term. For NVIDIA, the natural use case would not be the hottest model weights or activations. Those still need HBM. HBF could be interesting for colder or semi-hot data layers, especially as inference memory pressure grows. So I’d frame it less as “NVDA replacing HBM spend” and more as adding another memory tier if the cost/performance tradeoff becomes compelling.
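The complement-not-replacement framing can be made concrete with a toy tiering policy: hot data stays in HBM, colder layers could spill to a flash-based HBF tier. A minimal sketch; the tensor names, threshold, and routing rule are invented for illustration and do not describe any real NVIDIA or memory-vendor design:

```python
from dataclasses import dataclass

@dataclass
class Tensor:
    name: str
    accesses_per_step: int  # how often the tensor is touched per inference step

def place(t: Tensor, hot_threshold: int = 100) -> str:
    """Route frequently accessed data to HBM; colder data can
    tolerate the higher latency of a hypothetical HBF tier."""
    return "HBM" if t.accesses_per_step >= hot_threshold else "HBF"

# Hypothetical workload: active weights are hot, stale KV-cache pages are cold.
weights = Tensor("active_model_weights", 1000)
kv_cold = Tensor("old_kv_cache_pages", 3)
print(place(weights))
print(place(kv_cold))
```

The point of the sketch is the added tier, not the policy itself: nothing hot leaves HBM, so HBF spend would be additive capacity for inference-time data that grows faster than HBM economics allow.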
David Liao@DavidLiaoCH·
@NuttyCLD Good article. You mentioned HBF a couple of times. What do you think are the chances that $NVDA will adopt it? NVDA is already paying a lot to the HBM makers. If they adopt HBF, they will pay even more.
Nutty@NuttyCLD·
Honestly, this is a question that's hard to answer in a single reply. $SMR's recent weakness isn't really an SMR story. It's a time-to-power story. Hyperscalers right now are solving the "can we power this cluster in 2026-2027" problem, not the 2030 problem. Gas turbines deliver in 18-24 months. Bloom's SOFCs in 90 days, with the latest hyperscale AI factory order delivered in 55. SMRs, even in the most optimistic scenario, sit in 2030+. The market is pricing that time gap right now, not the long-term thesis. Too good a question to leave at a reply, so I'll be writing a piece on it soon: The AI Power Crisis Side Story: Not Waiting for the Grid. Heads up that this one will be behind the Substack paywall. Appreciate the question. nuttycld.substack.com
Peragito@PeragitoTrader

@NuttyCLD Why is $SMR doing so poorly?
