FundaAI
@FundaAI

1.6K posts
FundaAI provides AI Invest OS, including AI Agents, equity research reports, and research data. https://t.co/0xRZegmuoR

Los Angeles, CA · Joined November 2012
869 Following · 14.3K Followers

Pinned Tweet
FundaAI@FundaAI·
Deep|OFC 2026 Preview: 400G per Lane — The Next Major Inflection in Optical Interconnects

We have been steadfast supporters of Optics since a year and a half ago, publishing in-depth Optics reports almost every month. Regarding this OFC and future Optics technology trends, we have a lot to discuss — especially today, when there is so much debate and speculation.

The shift is not just about doubling speed from 200G → 400G per lane. It fundamentally reshapes the technology stack: modulation formats, photonic materials, and manufacturing capacity across the optical supply chain.

At 200G, IMDD dominates virtually all data-center interconnects due to its simplicity, low power, and cost advantages. But at 400G/lane, physics begins to push IMDD toward very high baud rates (~226 GBaud), making signal integrity increasingly difficult beyond short-reach links.

This creates a new architectural space: “Coherent Lite.” By adopting simplified coherent modulation (e.g., SP-16QAM) while avoiding the full complexity of telecom coherent systems, Coherent Lite targets the emerging 1–20 km campus / AI cluster interconnect regime. In this middle distance range, coherent-lite architectures can achieve higher spectral efficiency with lower baud rates, improving signal robustness while keeping system cost manageable. This segment barely existed in the 200G era but could become a meaningful new optical market as AI clusters scale.

The shift to 400G also changes which photonic platforms matter:
- InP EML remains the mature solution for IMDD
- InP PIC becomes increasingly relevant for integrated coherent transmitters
- SiPh offers scalable integration but faces performance limits for very high-speed modulation
- TFLN emerges as a promising high-bandwidth modulator platform

One underappreciated implication: InP area demand explodes. As architectures move from single-channel IMDD lasers to large InP PICs integrating multiple active components, the InP die area per optical channel can increase by two orders of magnitude, dramatically increasing wafer demand. In other words, a bandwidth upgrade at the system level may translate into a structural demand shock for the InP supply chain.

As AI infrastructure scales toward ever-larger clusters, optical interconnect architecture — and the photonic materials behind it — may become one of the defining bottlenecks of the next compute cycle.

$LITE $COHR $AAOI $TSEM $AXTI

Detailed Report fundaai.substack.com/p/deepofc-2026…
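The ~226 GBaud figure above can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, assuming PAM4 IMDD carries 2 bits/symbol, single-polarization 16QAM carries 4 bits/symbol, and a ~13% line-rate overhead — the overhead factor is our assumption, chosen to reproduce the quoted number, not something stated in the tweet:

```python
# Back-of-envelope check of the per-lane baud rates discussed above.
# Assumptions (ours, not the tweet's): PAM4 IMDD carries 2 bits/symbol,
# single-polarization 16QAM carries 4 bits/symbol, and line overhead
# (FEC/coding) is ~13% -- picked so 400G PAM4 lands near ~226 GBaud.

OVERHEAD = 1.13  # assumed FEC/coding overhead factor

def baud_rate(data_rate_gbps: float, bits_per_symbol: float) -> float:
    """Symbol rate (GBaud) needed to carry a given net data rate."""
    return data_rate_gbps * OVERHEAD / bits_per_symbol

print(f"400G PAM4 IMDD: {baud_rate(400, 2):.0f} GBaud")  # 226 GBaud
print(f"400G SP-16QAM:  {baud_rate(400, 4):.0f} GBaud")  # 113 GBaud
```

Carrying the same data rate at half the symbol rate is the core signal-integrity argument the tweet makes for Coherent Lite in the 1–20 km range.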

FundaAI@FundaAI·
We're glad that we wrote the report when $TSEM was near its bottom, and published it before Serenity and SemiAnalysis. +40% in two weeks!
FundaAI@FundaAI

Deep| $TSEM: SiPho Capacity Inflection Drives Multi-Fold Growth Cycle

AI data center compute clusters are currently scaling from thousands of GPUs to tens or even hundreds of thousands of nodes. At this magnitude, traditional copper interconnects are reaching severe physical limits; once transmission rates hit 800G and above, transmission reach shortens dramatically while power consumption escalates exponentially. To bypass these constraints, Silicon Photonics (SiPho) is becoming the essential backbone for AI Networking.

As of 4Q25, Tower Semiconductor’s Silicon Photonics business has emerged as the company’s primary growth engine. Revenue doubled from $106mn in 2024 to $228mn in 2025, achieving an annualized revenue run rate exceeding $360mn by the end of 2025. As the industry transitions from 400G/800G toward 1.6T, Tower has positioned itself as the lead supplier of 1.6T Silicon Photonics wafers.

We believe Tower is currently the premier SiPho PIC (Photonic Integrated Circuit) foundry with a distinct competitive lead. Among major competitors, Malaysia’s Silterra lacks significant expansion capacity, while SiPho offerings from UMC, GlobalFoundries (via the AMF acquisition), and STM still trail Tower by a wide margin.

The TDP (Thermal Design Power) of AI server racks, such as the Nvidia GB200 series, has jumped from 700W in the Hopper generation to over 1,200W, necessitating the adoption of liquid cooling and more efficient optical interconnects. Within these environments, SiPho facilitates higher speeds while maintaining system scalability under strict thermodynamic limits.

On February 5, NVIDIA and Tower Semiconductor established a strategic partnership focused on high-speed optical interconnects for AI data centers. Tower will leverage its SiPho process platform to manufacture 1.6T-class SiPho optical engines and modules for NVIDIA’s next-generation networking architecture, optimized for NVIDIA’s specific protocols. This collaboration aims to resolve bandwidth and energy efficiency bottlenecks during the Scale-out phase of massive GPU clusters.

Separately, we have highlighted the rapid progression of Optical Scale-Up, with volume production expected to commence in 2027. Delivering over 10x the optical bandwidth of traditional Scale-Out, Optical Scale-Up—whether implemented via pluggable modules, NPO, or CPO—will significantly drive demand for SiPho PICs. Alibaba’s UPN512 (a 512-xPU optical scale-up super-node) validates the migration of optics from scale-out networking into the scale-up core domain, as LPO/NPO and other near-packaged solutions achieve system-level economics.

Consequently, optics is evolving from a mere bandwidth expansion tool into a foundational infrastructure component for scale-up architectures. For SiPho, this shift directly expands the long-term TAM. Scale-up environments demand higher port densities, extreme bandwidth, and stricter power budgets—requirements natively addressed by high-integration SiPho PICs and linear drive solutions. SiPho’s penetration is moving beyond “incremental replacement” to potentially becoming the default interconnect standard for next-generation AI super-nodes.

Detailed Report open.substack.com/pub/fundaai/p/…
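The revenue figures above can be reconciled with quick arithmetic; a sketch using only the numbers in the tweet (the quarterly split is implied, not stated):

```python
# Reconciling the Tower SiPho figures quoted above.
rev_2024, rev_2025 = 106, 228  # $mn, from the tweet
exit_run_rate = 360            # $mn annualized, "exceeding $360mn"

growth = rev_2025 / rev_2024              # ~2.15x -> "revenue doubled"
implied_exit_quarter = exit_run_rate / 4  # run rate = 4x latest quarter

print(f"2025/2024 growth: {growth:.2f}x")                         # 2.15x
print(f"Implied 4Q25 quarter: >= ${implied_exit_quarter:.0f}mn")  # >= $90mn
```

So the ">$360mn run rate" claim implies a 4Q25 SiPho quarter of at least roughly $90mn, consistent with the stated doubling.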

FundaAI@FundaAI·
@stfbutnou @SemiAnalysis_ I think we've done in-depth research, and our clients are mainly institutional investors. So in terms of influence over retail investors, @aleabitoreddit is actually stronger than us.

FundaAI@FundaAI·
Review| $MU: Results and Guidance Crush Expectations; Persistent Undersupply to Drive Further Memory ASP Hikes

We previewed MU's earnings in our Weekly Report and mentioned that 'We actually believe that by the third quarter of this year, we could see an $80 run-rate EPS.' Now MU guided $20 EPS.

Micron’s financial results for FY2Q26 (Feb-26) crushed all expectations as revenue reached $23.9bn (vs. Street consensus of $19.2bn and buy-side consensus of $20.1bn), representing a 75% QoQ and 196% YoY increase. Non-GAAP gross margin hit a record high of 75%, and diluted EPS was $12.2, far exceeding the Street consensus of $8.6 and buy-side consensus of $10.5. Furthermore, the company generated record free cash flow of $6.9bn, and the Board approved a 30% increase in the quarterly dividend to $0.15/share, signaling confidence in sustained business strength and cash generation.

The outlook for the next quarter is even more remarkable, with FY3Q (May-26) revenue guidance at $33.5bn ± $750mn (vs. buy-side consensus of $26.4bn), gross margin guidance at 81.0% (vs. buy-side consensus of 74.8%), and EPS guidance of $19.15 ± $0.40 (vs. buy-side consensus of $14.81), driven by higher pricing, lower costs, and a favorable product mix.

Detailed Report fundaai.substack.com/p/reviewmu-res…
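The jump from the quarterly guide to the "$80 run-rate EPS" call is straight annualization; a quick check using only the numbers in the tweet:

```python
# Annualizing Micron's quarterly EPS guide (figures from the tweet above).
q_eps_guide = 19.15           # FY3Q26 EPS guidance midpoint
annualized = q_eps_guide * 4  # naive run-rate annualization

print(f"Annualized run-rate EPS: ${annualized:.2f}")  # $76.60, near the $80 call
```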

FundaAI@FundaAI·
@s86CoffeeDev @insane_analyst Irrational Analysis was indeed earlier on the technical deep dive — he's definitely one of the few people who understand optics best. But we were the first to give an EPS forecast exceeding $8 and provide a complete set of financial model assumptions.

FundaAI@FundaAI·
Deep| $LITE: Long-Term Upside Potential as a Core Player in AI Optical Interconnects

As of March 17, 2026, Lumentum (LITE) shares are up ~76% YTD with a market cap of $44.6B, following NVIDIA’s $2B strategic investment in early March and its upcoming inclusion in the S&P 500 on March 23. While near-term momentum appears largely priced in, we think OFC 2026 reinforces a more important point: the long-term architecture shift toward optical scale-up remains early—and Lumentum is structurally positioned at its center.

Management framed the growth thesis around four pillars: Cloud Transceivers, OCS, Scale-Out CPO, and Scale-Up CPO. All four are underpinned by the same core capabilities: InP manufacturing capacity, OCS optical engines, and the UHP laser platform. Together, these define Lumentum’s structural moat.

Lumentum’s share price already reflects market enthusiasm for AI optical interconnects, but the long-term picture presented at OFC 2026 suggests the current pricing still has room: near-term transceiver + OCS execution continues to demonstrate delivery capability, mid-term Scale-Out CPO amplifies the UHP platform’s value, and long-term Scale-Up opticalization represents a larger inflection point. Across scale-out, scale-up, and scale-across, Lumentum is the most comprehensively positioned player with the deepest foundational capabilities in AI optical networking.

We have raised our forecasts for Lumentum for CY26–CY27, reflecting stronger visibility across margins and revenue driven by sustained supply tightness, improved pricing power, and accelerating adoption of next-generation optical architectures.

Detailed Report open.substack.com/pub/fundaai/p/…

FundaAI@FundaAI

Deep|OFC 2026 Preview: 400G per Lane — The Next Major Inflection in Optical Interconnects …

FundaAI@FundaAI·
@Citrini7 My guess is he's pulling it from the body of the text, and each name needs to have a clear bull or bear case attached to it.

Citrini@Citrini7·
We track the baskets we create in perpetuity because they’re useful both as plays and as market signals. But in the past 60 days we have released our modern warfare update & the optics piece, probably about 20 stocks in there. Tbh I didn’t really look at the methodology - we created the citrindex portal to track the performance of a wide universe of themes we created. But I think it would also be a good idea for that to be an independent thing.

Citrini@Citrini7·
I think an independent auditor for this kind of thing would be awesome. It’s why we made the Citrindex portal. Nobody is right 100% of the time and it’s important to see what (and what kind of market) they’re good at. Also good job GTR we’re coming for you in the next 60 days.
Lin@pyhrroll

60-Day Median Return (Long), Top 10:
Global Tech Research (33 calls): +20.4%
Citrini Research (19): +14.9%
FundaAI (158): +11.2%
SemiAnalysis (45): +8.2%
BEP Research (52): +8.0%
Dick Capital (44): +7.6%
Irrational Analysis (41): +7.4%
Fabricated Knowledge (91): +6.0%
Altay Capital (55): +5.5%
TMT Breakout (529): +5.1%

FundaAI@FundaAI·
@Citrini7 Ok so you're number one now. But I'm a little confused why you only have 19 calls, you guys have dozens of names in every index portfolio.

Citrini@Citrini7·
@FundaAI Told you I was coming for them.

FundaAI@FundaAI·
@pyhrroll Thanks for the advice, you're seriously incredible… so now I know why AI researchers crush it so hard in stocks

Lin@pyhrroll·
Biggest limitation: AI extraction is only as good as the writing is clear. An article might spend 2,000 words building a thesis and then conclude with "we remain cautious but see long-term potential". Is that bullish or bearish? Authors who write "We are bullish on X, target $Y" get extracted accurately. Authors who hedge or write indirectly get noisier results. Rankings are partially a function of writing clarity, not just skill. I spent quite a bit of time tuning my prompt (attached for reference). This is Gemini 3.1 with model temp set to 0 for reproducibility.

Lin@pyhrroll·
My previous post went viral on Reddit overnight (8.5k upvotes in total) so here is an update: I now spend $13,000/year on 31 paid Substack newsletters so you don’t have to. Here’s who actually makes money 🧵

FundaAI@FundaAI·
This chart is also really interesting. When you have super high output, it inevitably means you're going to be covering some stocks on a regular basis rather than only writing up the absolute best opportunities every single time. But we're happy to be ranked number one among the high-volume Substacks. I actually think @TMTBreakout gets a bit penalized in these rankings because TMTB comments on a huge number of stocks every day, and some of those takes are probably pretty short term, which likely drags on how the rankings score him. The real value of TMTB is in how fast he helps you understand what's going on in the market.
Lin@pyhrroll

The volume question. Among authors making 100+ long calls:
• FundaAI (158): +11.2% med, 70% win rate
• TMT Breakout (529): +5.1%, 62%
• TicToc Trading (480): +0.1%, 50%
• Quality Stocks (300): +0.1%, 51%
• Swiss Transparent (141): -0.1%, 50%
Most high-volume authors converge toward market returns. FundaAI seems to be an outlier.
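For readers curious how the two summary statistics quoted above are computed, median return and win rate over a set of long calls are straightforward; a minimal sketch with made-up sample returns (not the actual call data behind the ranking):

```python
from statistics import median

def score_author(returns_pct: list[float]) -> tuple[float, float]:
    """Return (median return %, win rate %) for one author's long calls."""
    med = median(returns_pct)
    win_rate = 100 * sum(r > 0 for r in returns_pct) / len(returns_pct)
    return med, win_rate

# Hypothetical sample, NOT the data used in the ranking above.
sample = [12.0, -3.5, 8.2, 25.1, -1.0, 4.4, 17.3, -6.8, 9.9, 2.1]
med, wr = score_author(sample)
print(f"median {med:+.1f}%, win rate {wr:.0f}%")  # median +6.3%, win rate 70%
```

A median (rather than mean) keeps one or two outsized winners from dominating an author's score, which matters when comparing high-volume and low-volume writers.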

FundaAI@FundaAI·
Just got to San Jose, might get to see some of you guys. I actually noticed this Smart Guy the last time he dropped his rankings, but he didn't have our data in there so I didn't repost it. Glad to see us included this time. This dude is seriously sharp. If you think about it, using Claude Code to track all the Substack and sell-side analyst track records means he's basically already running a book as a PM. He could literally rebalance position weights every quarter based on the scores. I was also pretty surprised that the AI counted nearly 160 calls from us. That's probably because we put out reports almost every day, and we don't just cover SEMI, we cover a ton of Software and Internet names too.

Some of our takeaways as a Substack writer:
- SEMI is hands down the best sector right now, and covering SEMI stocks definitely boosts your returns and win rate.
- Our Substack is built for hedge fund analysts and PMs, so on top of SEMI we also cover Software and Internet names that could be affected by AI. Sometimes our checks point to strong numbers, sometimes they point to soft numbers, but either way the win rate in those two sectors isn't going to match SEMI.
- I think what really sets us apart is that we're not just throwing out one-off narratives. We do previews, reviews, channel checks, and technology deep dives during the earnings quiet period. Our analysts genuinely hold themselves to the same bar as top-tier buy-side analysts.
- Long-term conviction matters a lot, but since we update every quarter, that can weigh on our 180-day returns. That's something we need to work on improving.
Lin@pyhrroll

60-Day Median Return (Long), Top 10: Global Tech Research (33 calls): +20.4% …

FundaAI@FundaAI·
@upand_right In the race for second place, $COHR holds a more commanding lead.

UpAndRight@upand_right·
@FundaAI Let’s say the rumors are true that TFC takes the lion’s share of COUPE; surely they will include the FOCI/Himax stack as a redundant production line? Even if they see 25% of the COUPE volume, this will be massive for their bottom line.

FundaAI@FundaAI·
Weekly|OFC Preview, Hock’s CPO View, $MU Preview, $META Layoff, FundaAI's Coding Agent in Practice, $HIMX, New Consortium, $RBRK, $APP

While the market’s focus remains on crude oil prices and geopolitical tensions, this has been a week of optics for us on the research side, arguably the most debated sector in AI, with some names defying market gravity. We are very happy to see many of our good friends and publications we highly admire increasingly focusing on this area we’ve loved since we started this Substack (Shoutout to @GavinSBaker, Akram @akramsrazor, Citrini @Citrini7, Collyer Bridge @illyquid, TMTB @TMTBreakout, Serenity @aleabitoreddit, Ben Pouladian @benitoz, Jason @jasons_chips, and more). Please enjoy our relevant reports below and stay tuned for more with OFC/GTC coming next week.

This week’s reports

OFC 2026 Preview - Following our GTC preview, we share our view on the other major tech event next week. We believe the transition from 200G to 400G per lane is not just a speed bump — it fundamentally reshapes modulation schemes, material platforms, and the entire value chain. Read this one together with our GTC Preview from last week. fundaai.substack.com/p/deepofc-2026… fundaai.substack.com/p/deepgtc-2026…

Comment on Hock Tan’s CPO View - After Hock’s comments on the AVGO earnings call rattled optics names, we published a detailed breakdown. We show what he actually said versus what the market heard. The summary: Hock argues about timing for intra-rack copper, not against optical scale-up. His “copper isn’t dead” framing reflects Broadcom’s economic position as a merchant silicon supplier. fundaai.substack.com/p/researchcomm…

HIMX - where we disagree with Citrini. Citrini put out an excellent report, and we agree with most of it. Our disagreement is on the Himax bull case: FOCI, Himax’s strategic partner, has been outmaneuvered by Suzhou TFC in commercial execution within the CPO supply chain. This matters for anyone building a Himax position based on the CPO FAU narrative. fundaai.substack.com/p/researchhimx…

RBRK review - The company delivered, with NNARR accelerating and a solid revenue/guidance beat. Key topics to watch are AI disruption risk, hardware cost impact, and the identity inflection. fundaai.substack.com/p/reviewrbrk-f…

Detailed Report open.substack.com/pub/fundaai/p/…

FundaAI@FundaAI

Weekly|Unwiiiiiiiiiind, OpenAI/ $ORCL, GTC preview, $TSEM deep dive, $MDB/ $CRWD/ $ASTS earnings, CPO debate

First of all, we wish our readers and everyone in the Middle East safe and sound. We never have a dull week during the MS TMT Conference, and this year even more so, given the Citrini article and rising geopolitical risk. Unwind is the main theme of the market: from IGV vs SOXX to Korea/Japan vs Hong Kong, from value vs growth to Mag7 vs small- and mid-caps. We would not claim to be geniuses at timing the market, but this has been in the cards for a while, given the extreme positioning. On the other hand, AI development remains as strong as ever, if not stronger, evidenced by Anthropic almost doubling ARR in just two months. We believe the key trends within AI remain intact, and flows should return to them as the dust settles. However, stay frosty in the near term.

We may carry out a system upgrade next week in preparation for our major version update at the end of the month. If we confirm that it is necessary, we may pause Substack subscriptions for one week, during which no one’s credit card will be charged. During this period, we may reduce our content output, but we will ensure that essential preview content continues to be published.

This week’s reports

GTC preview - The annual party returns. We believe this round marks a strategic shift for $NVDA, from continuing to introduce faster GPUs with higher FLOPS to focusing on building a comprehensive AI factory. We discuss six areas of focus: Vera-Rubin for agentic AI, Rubin CPX for high-throughput inference, Groq LPU for low-latency inference, Feynman, CPO/photonics, and an AI-native storage hierarchy. fundaai.substack.com/p/deepgtc-2026…

TSEM deep dive - Our latest coverage focuses on another key player within the optics supply chain. Specifically, TSEM is transforming into a silicon photonics (SiPho) specialty foundry, poised to benefit massively from the rising adoption of this technology. In this report, we detail the technology architecture, capacity expansion, and competitive landscape, along with financial estimates. fundaai.substack.com/p/deeptsem-sip…

MDB and CRWD review - Both names were seen as relatively safer from AI disruption during the sell-off (less so for CRWD lately). MDB disappointed bulls with an Atlas expectation miss, which we believe reflects the GTM strategy change under the new CEO. On the other hand, CRWD delivered solid results, alleviating concerns. In general, both names are still in the more expensive camp, which remains a key overhang in this tape. fundaai.substack.com/p/deeptsem-sip… fundaai.substack.com/p/reviewcrwd-f… fundaai.substack.com/p/previewcrwd-…

ASTS review - First-ever fiscal report featuring substantial revenue. This marks the company’s official transition from a pure concept stock to an early-stage commercial company with tangible revenue—despite remaining in a significant loss phase. As the SpaceX IPO approaches, we believe the broader space sector will see growing investor interest. fundaai.substack.com/p/reviewasts-4…

Detailed Report open.substack.com/pub/fundaai/p/…

FundaAI@FundaAI·
Our most-read report since March has been $TSEM. This month, we also published very lengthy GTC Preview and OFC Preview pieces, which we think reflects where our readers' interests lie. In the two weeks following the release of our TSEM report, we saw friends on Substack and Twitter begin publishing their own reports on the topic one after another — that was truly exciting to see.
FundaAI@FundaAI

Deep| $TSEM: SiPho Capacity Inflection Drives Multi-Fold Growth Cycle …

FundaAI@FundaAI·
@QihongF44102 I remember you were very bullish about AAOI last year. Why have you changed your view? Although we might be on the same page, I'd still like to hear your reasoning.

AI产业挖掘🐔@QihongF44102·
Never believe what they say — look at how they actually execute. According to our investigation, AWS has not shipped a single 800G optical module so far, and the 1.6T optical module is even more of a pipe dream. This company is always telling stories, and then doing ATM share offerings. $AAOI
KawzInvests 🦑@KawzInvests

$AAOI I have been seeing a lot of bearish takes on this name recently. I am still long. The stock is down 26% from its all-time high of $127. The bear case has gotten louder. Here is why I think the bears are missing the asymmetry.

At $94, AAOI trades at roughly 8x FY2026 consensus revenue of $856M. Management guided above $1B, which brings that multiple down to around 7x. LITE trades at approximately 14.5x forward revenue. COHR trades at roughly 7x. AAOI is growing faster than both.

Now look at what half-execution on 2027 targets actually means: Management said monthly transceiver revenue reaches $378M by mid-2027. That annualizes to $4.5B. At current prices you are paying roughly 1.6x that run rate. If they deliver half of that you are still paying well under 3x revenue for a company growing triple digits with an in-house InP fab. That does not stay unpriced.

The Q4 results were real. Revenue of $134M, up 33.9% year over year, beating estimates. EPS of -$0.01 vs consensus of -$0.12. 1.6T volume order secured from a major hyperscale customer. Shipments starting Q3 2026. Three hyperscale customers each expected to exceed 10% of FY2026 revenue. The single customer concentration risk the bears have cited for years is structurally gone.

The company is investing $300M to triple InP laser capacity in Texas by mid-2027. They are presenting at OFC on March 17th. Jensen speaks Monday.

The bear case requires management to miss badly on every single target simultaneously. The bull case only requires them to be half right. Full breakdown in the report. Link in bio.
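The multiples in the quoted tweet can be checked with simple arithmetic; a sketch using only the tweet's own inputs (the market cap is backed out from the stated 8x multiple rather than quoted directly, so it is an implied figure):

```python
# Checking the valuation math in the quoted $AAOI tweet.
fy26_consensus_rev = 0.856        # $bn, FY2026 consensus revenue (from tweet)
mkt_cap = 8 * fy26_consensus_rev  # ~$6.85bn, implied by the stated "roughly 8x"

monthly_transceiver_rev = 0.378          # $bn/month target by mid-2027
run_rate = monthly_transceiver_rev * 12  # ~$4.54bn annualized

# Straight division gives ~1.5x; the tweet rounds to "roughly 1.6x".
print(f"Multiple on full run-rate: {mkt_cap / run_rate:.1f}x")        # 1.5x
print(f"Multiple on half run-rate: {mkt_cap / (run_rate / 2):.1f}x")  # 3.0x
```

So half-execution on the 2027 target leaves the implied multiple near 3x forward revenue, which is the asymmetry argument the tweet is making.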

FundaAI@FundaAI·
Today, Apple's App Store in China reduced its take rate from 30% to 25%. Although the magnitude of this cut is far smaller than those made in the US and Japan last December, it is still an important step. In practice, major Chinese gaming companies such as NetEase have long circumvented the 30% take rate by adopting direct payment methods through SMS billing and key account services. We believe the take rate in China will continue to decline further, followed by Europe. Europe was the earliest to permit sideloading and has been the most proactive in constraining the App Store's take rate, making it highly likely to adopt further measures going forward.
FundaAI@FundaAI

Preview| $APP & $U 4Q25: 2026 Is the Year of Gaming

We began covering APP in 4Q24, focusing on AppLovin’s e-commerce opportunity and actively addressing short reports in 1Q25. fundaai.substack.com/p/previewapp-1… fundaai.substack.com/p/longapp-furt… fundaai.substack.com/p/deepapp-how-…

In 2Q25, we were cautious on revenue and e-commerce. Shares initially dropped 10% after hours, but quickly rebounded on Adam’s positive e-commerce outlook. fundaai.substack.com/p/previewapp-2…

In 3Q25, we remained cautious on e-commerce. fundaai.substack.com/p/previewappun…

During Black Friday, we published a report highlighting AppLovin’s significant improvement. However, in our Substack Chat posts throughout the following month, we consistently noted that while e-commerce ads remained weak, gaming ads stayed very strong. fundaai.substack.com/p/deepapp-blac…

We swiftly refuted Capital Watch’s recent short attack. As of yesterday, most claims have been retracted.

We also covered Unity concurrently. In 3Q25 earnings, we noted that Unity experienced an SDK bug that had a short-term impact, which was quickly fixed and led to a reacceleration. We continue to see accelerating growth in 4Q25 and 1Q26.

In our early December outlook on AI for this year, we presented an interesting perspective: 2026 is the Year of Enterprise for AI. Since that report, the narrative has been dominated by OpenAI and Anthropic entering the enterprise space and displacing SaaS. Today, we want to discuss a new trend: The Year of Gaming.

Detailed Report open.substack.com/pub/fundaai/p/…

FundaAI@FundaAI·
The newly announced Optical Scale-up Consortium (OCI) may look like a routine standards announcement, but the implications could be much bigger. The roadmap for the consortium is already circulating in the industry and will likely surface more visibly around GTC. If NVIDIA, AMD, Broadcom, Microsoft, Meta and OpenAI are jointly defining an optical scale-up standard, that alone is a strong signal about where AI infrastructure is heading.

Based on supply chain checks, we believe the architecture is likely to center around ELS + fiber + SiPho as the physical layer, with CPO, NPO, and on-board optics emerging as potential form factors. This direction effectively acknowledges a constraint the industry has been approaching for some time: copper is increasingly running out of headroom for next-generation scale-up fabrics.

Once scale-up moves to optics, the traditional interconnect debates—NVLink vs UALink vs other proprietary fabrics—may become less central. The physical layer begins to converge around optics, with differentiation shifting higher in the stack. One interesting detail is that Broadcom is a founding member of the optical scale-up consortium, something that would have been surprising not long ago given its deep positioning in electrical interconnect ecosystems.

Until now, optics has largely been a scale-out story. We believe optical scale-up represents the next structural shift, where optics moves closer to the compute complex and becomes part of the core AI backplane. If that transition materializes, the optical content per AI system could expand dramatically.

Link businesswire.com/news/home/2026…