Fabricated Knowledge

11.8K posts

@fabknowledge

Simplifying the world of semiconductor investing in the age of AI. Part of the @semianalysis_ gang.

Joined September 2021
812 Following · 29.1K Followers
SemiAnalysis @SemiAnalysis_
Olympic gold medalist Alysa Liu recently went viral for her Teen Vogue rant on OpenAI Codex: “I can see why Sam Altman open sourced Codex. Clearly the experience is significantly worse than Claude Code. I was unable to feel the AGI using Codex. Using Claude Code, by contrast, I felt the enlightenment coming and now support UBI.”
[two images attached]
46 replies · 33 reposts · 914 likes · 143.9K views
FundaAI @FundaAI
We're glad that we wrote the report when $TSEM was near its bottom, and published it before Serenity and SemiAnalysis. +40% in two weeks!
Quoting FundaAI @FundaAI:

Deep | $TSEM: SiPho Capacity Inflection Drives Multi-Fold Growth Cycle

AI data center compute clusters are scaling from thousands of GPUs to tens or even hundreds of thousands of nodes. At this magnitude, traditional copper interconnects are hitting hard physical limits: once transmission rates reach 800G and above, copper's reach shortens dramatically while its power consumption climbs steeply. To bypass these constraints, Silicon Photonics (SiPho) is becoming the essential backbone of AI networking.

As of 4Q25, Tower Semiconductor's Silicon Photonics business has emerged as the company's primary growth engine. Revenue more than doubled from $106mn in 2024 to $228mn in 2025, reaching an annualized run rate exceeding $360mn by the end of 2025. As the industry transitions from 400G/800G toward 1.6T, Tower has positioned itself as the lead supplier of 1.6T Silicon Photonics wafers.

We believe Tower is currently the premier SiPho PIC (Photonic Integrated Circuit) foundry, with a distinct competitive lead. Among major competitors, Malaysia's Silterra lacks significant expansion capacity, while the SiPho offerings of UMC, GlobalFoundries (via the AMF acquisition), and STM still trail Tower by a wide margin.

Accelerator TDP (Thermal Design Power) has jumped from 700W per GPU in the Hopper generation to over 1,200W in Nvidia's GB200 generation, necessitating liquid cooling and more efficient optical interconnects. In these environments, SiPho enables higher speeds while keeping systems scalable under strict thermal limits.

On February 5, NVIDIA and Tower Semiconductor established a strategic partnership focused on high-speed optical interconnects for AI data centers. Tower will leverage its SiPho process platform to manufacture 1.6T-class SiPho optical engines and modules for NVIDIA's next-generation networking architecture, optimized for NVIDIA's specific protocols. The collaboration aims to resolve bandwidth and energy-efficiency bottlenecks in the scale-out phase of massive GPU clusters.

Separately, we have highlighted the rapid progression of optical scale-up, with volume production expected to commence in 2027. Delivering over 10x the optical bandwidth of traditional scale-out, optical scale-up (whether implemented via pluggable modules, NPO, or CPO) will significantly drive demand for SiPho PICs. Alibaba's UPN512 (a 512-xPU optical scale-up super-node) validates the migration of optics from scale-out networking into the scale-up core domain, as LPO/NPO and other near-packaged solutions achieve system-level economics.

Consequently, optics is evolving from a mere bandwidth-expansion tool into a foundational infrastructure component for scale-up architectures. For SiPho, this shift directly expands the long-term TAM. Scale-up environments demand higher port densities, extreme bandwidth, and stricter power budgets, requirements natively addressed by high-integration SiPho PICs and linear-drive solutions. SiPho's penetration is moving beyond incremental replacement toward becoming the default interconnect standard for next-generation AI super-nodes.

Detailed report: open.substack.com/pub/fundaai/p/…
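As a rough sanity check on the "over 10x the optical bandwidth" claim for optical scale-up, here is a minimal back-of-envelope sketch. The port counts and per-port rates are illustrative assumptions, not figures from the report:

def node_bw_tbps(ports: int, gbps_per_port: float) -> float:
    """Aggregate optical bandwidth per accelerator, in Tb/s."""
    return ports * gbps_per_port / 1e3

# Assumption: scale-out today gives each accelerator roughly one 800G port.
scale_out = node_bw_tbps(ports=1, gbps_per_port=800)

# Assumption: an optical scale-up fabric gives each accelerator several
# 1.6T-class ports (eight here, purely illustrative).
scale_up = node_bw_tbps(ports=8, gbps_per_port=1600)

print(f"scale-out: {scale_out:.1f} Tb/s, scale-up: {scale_up:.1f} Tb/s "
      f"({scale_up / scale_out:.0f}x)")

Under these assumptions the scale-up fabric carries 12.8 Tb/s per accelerator versus 0.8 Tb/s for scale-out, a 16x step that is consistent with the ">10x" claim.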

4 replies · 1 repost · 52 likes · 22.7K views
Irrational Analysis @insane_analyst
Quoting Jukan @jukan05:

Amazon's Trainium 3 rumors clash with supplier optimism

Rumors circulating in the semiconductor market suggest that Amazon may scale back shipments of its upcoming Trainium 3 artificial-intelligence chip after internal tests reportedly showed performance falling short of expectations. Yet suppliers involved in the chip's components say they have received no such notice and continue to prepare for a rapid ramp-up in production beginning in the second quarter of 2026.

A rising tide for AI ASICs

Custom AI chips designed for specific workloads, known as application-specific integrated circuits, or ASICs, are expected to be one of the main drivers of growth in the AI server market this year. Trainium 3, developed by Amazon's cloud unit Amazon Web Services, is widely viewed as a key product in that push, especially after Google introduced its seventh-generation Tensor Processing Unit late in 2025.

Recent industry chatter has suggested that Trainium 3 may generate AI tokens at a higher cost than competing chips. According to these reports, Amazon could trim shipments of Trainium 3 while boosting demand for an interim chip known as Trainium 2.5, and accelerate development of the next-generation Trainium 4. Trainium 4 had originally been expected to begin limited production in late 2027, with larger-scale output in 2028.

Supply chain intact, ramp on track

Companies involved in Amazon's ASIC server supply chain include Taiwan-based system assembler Wiwynn, networking equipment provider Accton Technology, cooling specialists Asia Vital and Cooler Master, thermal module maker Microloops, server rail manufacturer King Slide Works, power supplier Delta Electronics, and connector maker BizLink Holding.

Executives within the supply chain say they have not been informed of any plan to cut Trainium 3 shipments or raise orders for Trainium 2.5. Instead, preparations are underway for a strong production ramp beginning in the second quarter of 2026, with the chip expected to become a key growth driver in the second half of the year. Wiwynn recently told investors that shipments of AI servers would rise significantly in the latter half of 2026, with ASIC-based systems leading the increase.

Cooling manufacturers are also counting on the shift. Auras Technology said servers using ASIC accelerators accounted for roughly 20% to 30% of its revenue in 2025, but shipments are expected to accelerate sharply starting in the second half of 2026. By 2027, Auras said, ASIC servers could generate more revenue than those based on graphics processors.

Jassy bullish on Trainium's trajectory

Amazon executives, meanwhile, have publicly expressed confidence in demand for the new chip. During a recent earnings call, Chief Executive Andy Jassy said Trainium 3 would deliver about 40% better price-performance than its predecessor, Trainium 2, and that customer interest was strong. By around the middle of 2026, he said, nearly all available supply is expected to be reserved.

Jassy added that development of Trainium 4, expected to debut around 2027, is already underway and drawing significant attention from customers. Discussions have even begun about a future Trainium 5. Combined with Amazon's in-house server processor, AWS Graviton, the company's custom chip business already represents a market exceeding US$10 billion in annual revenue, he said, and is still in its early stages.

ASIC shipments closing the gap on GPUs

According to estimates by DIGITIMES Research, shipments of high-end AI ASIC accelerators are projected to reach 5.13 million units in 2025 and 7.23 million in 2026. That remains below shipments of high-end graphics-processing accelerators, projected at 6.52 million and 7.99 million units in those years, respectively. But ASIC chips are expanding far faster: shipments of AI ASIC accelerators are expected to grow more than 40% annually in both 2025 and 2026, compared with growth of just over 20% for GPU accelerators.

As a result, while GPU-based servers still dominate the market, their shipment growth rate is slowing, from 29.6% in 2025 to an expected 22.6% in 2026. DIGITIMES Research projects that faster growth in high-end AI ASIC shipments will be a major force driving the next phase of expansion in the global AI server supply chain.
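A quick arithmetic check on the DIGITIMES Research figures quoted above (a minimal sketch; only the 2025 and 2026 shipment numbers are given, so only the 2026 growth rates can be recomputed):

# Shipments in millions of units, as quoted from DIGITIMES Research.
shipments_m = {
    "AI ASIC": {2025: 5.13, 2026: 7.23},
    "GPU":     {2025: 6.52, 2026: 7.99},
}

for kind, s in shipments_m.items():
    yoy_2026 = s[2026] / s[2025] - 1  # year-over-year growth into 2026
    print(f"{kind}: {s[2026]:.2f}M units in 2026, {yoy_2026:+.1%} YoY")

This yields roughly +40.9% for AI ASICs and +22.5% for GPUs, consistent with the article's "more than 40%" and "22.6%" (the small gap is rounding in the source).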

2 replies · 0 reposts · 27 likes · 7K views
Fabricated Knowledge retweeted
SemiAnalysis @SemiAnalysis_
We mapped Felten et al.'s AI exposure scores onto BLS consumption data by income quintile. The top 20% of households have 29% more of their spending basket exposed to AI-driven cost reductions than the bottom 20%. AI deflation has a distributional problem. The mechanism is financial services. Securities, insurance, credit intermediation — the most AI-exposed sector in the economy — are 17.7% of Q5's budget and 2.1% of Q1's.
[image attached]
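For context, a minimal sketch of the weighting described above: a quintile's basket exposure is the budget-share-weighted average of category AI-exposure scores. The categories, scores, and non-financial shares below are hypothetical placeholders, not the Felten et al. or BLS data; only the 17.7% and 2.1% financial-services shares come from the tweet:

def basket_exposure(budget_shares: dict, exposure: dict) -> float:
    """Spending-weighted AI exposure: sum over categories of share * score."""
    return sum(share * exposure[cat] for cat, share in budget_shares.items())

# Hypothetical exposure scores per consumption category (0-to-1 scale).
exposure = {"financial services": 0.9, "food": 0.2, "housing": 0.3}

# Budget shares by quintile; only the financial-services shares are real.
q5 = {"financial services": 0.177, "food": 0.250, "housing": 0.573}  # top 20%
q1 = {"financial services": 0.021, "food": 0.350, "housing": 0.629}  # bottom 20%

print(f"Q5 basket exposure: {basket_exposure(q5, exposure):.3f}")
print(f"Q1 basket exposure: {basket_exposure(q1, exposure):.3f}")

Even with these toy numbers, Q5's basket comes out meaningfully more exposed than Q1's, purely because the heavily exposed financial-services category is a far larger share of top-quintile spending.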
5 replies · 5 reposts · 36 likes · 8K views
Fabricated Knowledge retweeted
zach @zachleft
[image attached]
269 replies · 18.9K reposts · 150.9K likes · 2.1M views
Fabricated Knowledge retweeted
Irrational Analysis @insane_analyst
Never mind, Nvidia did show something new at OFC: COUPE in a 1.6T transceiver. 1e-14 BER on 7 of 8 200G PAM4 lanes. I am practically speechless. Hitting 1e-14 with all lanes active is borderline impossible. This is a wonder of engineering. Also, TFC is the partner; cope harder, $HIMX bulls. 🤡
[two images attached]
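For scale, straight arithmetic on the quoted numbers shows why 1e-14 at line rate is striking:

lane_rate_bps = 200e9  # one 200G PAM4 lane
ber = 1e-14            # quoted bit error rate

errors_per_s = lane_rate_bps * ber
print(f"{errors_per_s:.0e} errors/s, i.e. one raw bit error roughly every "
      f"{1 / errors_per_s / 60:.0f} minutes per lane, before any FEC.")

That is about one raw bit error every eight minutes per 200G lane, many orders of magnitude cleaner than the pre-FEC error rates typical of high-speed PAM4 links.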
14 replies · 17 reposts · 160 likes · 20.9K views
TBU @TBU12345678
$NVDA adding capital returns to their thesis and completing their transformation into the new $AAPL
[GIF attached]
3 replies · 1 repost · 32 likes · 2.5K views
Fabricated Knowledge retweeted
Max Hager @yachty66
Introducing the world's first GPU knife
[image attached]
2 replies · 1 repost · 22 likes · 2.5K views
Fabricated Knowledge retweeted
Dean Takahashi @deantak
At a press Q&A, Jensen Huang was asked if the $20 billion Groq deal meant Nvidia was slow to shift from AI training to inference. He replied, “First of all, you are looking at the inference king.”
[image attached]
17 replies · 19 reposts · 405 likes · 48.3K views
Fabricated Knowledge retweeted
Jay Yoon @jaysyoon
$LITE just disclosed a multi-year, multi-billion-dollar agreement signed yesterday, and guided to a >$1Bn run rate for OCS in 2027. Winners keep winning.
5 replies · 13 reposts · 178 likes · 22.4K views
Fabricated Knowledge @fabknowledge
God the CCOI waves thesis is so cope, like galactic levels actually
0 replies · 0 reposts · 4 likes · 3.7K views
Fabricated Knowledge retweeted
SemiAnalysis @SemiAnalysis_
Your next GPU cluster just got 30% more expensive and it's not even NVIDIA's fault this time
2 replies · 12 reposts · 87 likes · 16.5K views
Fabricated Knowledge @fabknowledge
@buccocapital The time period I’m referring to is a very different one; they had a bit of a lost decade or two and then a turnaround! Platform shifts matter. Look, the world takes longer than the hype, but idk, I’m a bleeding-edge, best-of-breed guy. Fighting the fade ain’t my strength.
1 reply · 0 reposts · 1 like · 1.1K views
BuccoCapital Bloke @buccocapital
@fabknowledge Yup. I am not pounding the table for fucking Salesforce lmao. But…they’re not dead yet. Long game. Also…IBM beat QQQ over the last five years. By A LOT.
1 reply · 0 reposts · 6 likes · 938 views
Fabricated Knowledge @fabknowledge
People keep thinking that their distribution (think MSFT or CRM) is bigger than the technology (AI), but the fact that people are buying Macs just to mess around with Claude Code (they are also good machines lol) tells me that this technology is much bigger than any distribution this time.
8 replies · 2 reposts · 96 likes · 22.1K views
Fabricated Knowledge @fabknowledge
I am going to just grant that distribution/friction is THE ONLY thing that matters if we are going to be software-brained. I do understand and appreciate it, but I do think this is a big thing, and making it all about distribution is such a technology-loser way to look at it. Lots of great tech companies with awesome distribution lost; IBM had CTO relationships with every single company LMAO.
2 replies · 0 reposts · 13 likes · 2.7K views
BuccoCapital Bloke @buccocapital
@fabknowledge Those people are fucking dweebs. It’s like .00000000001% of the market. Distribution matters. It will always matter. It doesn’t mean the incumbents will win, but it certainly gives them an advantage. Just look at the sentiment shift on Google in 1 year
5 replies · 1 repost · 186 likes · 12.5K views
Fabricated Knowledge retweeted
SemiAnalysis @SemiAnalysis_
The Inference King has been crowned 👑 @nvidia
[three images attached]
21 replies · 37 reposts · 572 likes · 45.1K views
🌿 lithos @lithos_graphein
hoping peak-litho gang didn't hear what Jensen said about Moore's Law
5 replies · 1 repost · 49 likes · 7.4K views
Fabricated Knowledge @fabknowledge
@dalibali2 I think it’s because attendance went up like 10x and nothing runs smoothly at that scale. Idk if you went to GTC before the 20k-seat keynote era, it was so much cuter back then.
0 replies · 0 reposts · 2 likes · 547 views
dalibali @dalibali2
@fabknowledge Agree they all suck but I just don’t think they organize it all that well.
1 reply · 0 reposts · 1 like · 699 views
dalibali @dalibali2
GTC is one of the lamest conferences
8 replies · 1 repost · 36 likes · 12K views