FundaAI
@FundaAI
1.8K posts

FundaAI provides AI Invest OS, including AI Agents, equity research reports, and research data. https://t.co/0xRZegn2ep

Singapore · Joined November 2012
957 Following · 21.3K Followers
FundaAI
FundaAI@FundaAI·
We are pleased to have accurately predicted GCP's 63% YoY growth ahead of $GOOG 's earnings — this marks the third time in a row.

We turned bullish on GOOG starting from 2Q25, after carefully studying AI Overview and finding that it is incremental revenue for GOOG Ads. GOOG used many clever methods: for high-value queries, it would still insert AI Overview even after already placing three Page Ranking Ads, and for queries that originally had no Page Ranking Ads, it was more aggressive in adding AI Overview. GOOG has effectively been evaluating each query to decide what kind of ads to add, to make sure AI Overview is incremental revenue.

In 3Q25, we wrote a report on GCP titled "The Path to $100Bn." We were the first to call for GCP to reach 45% growth in 2026, the most bullish view on the Street at the time. Honestly, I was also surprised when I built that model. But we didn't expect GOOG to achieve it as early as 4Q25.

In our 1Q26 Preview, we wrote that GOOG would achieve 63% growth in 1Q26 and exceed 70% in the following three quarters, mainly driven by:
- TPU cloud
- GPU cloud
- TPU direct sales
- Gemini ARR (70% of Gemini ARR is booked into GCP, and it grew 50% QoQ)
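The Gemini ARR driver above can be sketched numerically. A minimal toy model: the 70% GCP booking share and 50% QoQ ARR growth are from the note, while the starting ARR figure is a made-up placeholder, not a FundaAI estimate.

```python
# Toy model of the Gemini ARR driver: ARR compounds 50% QoQ and 70% of it
# is booked into GCP revenue, per the note above. The starting ARR figure
# below is a made-up placeholder, not a FundaAI estimate.

def gemini_gcp_contribution(gemini_arr_start, booking_share=0.70,
                            qoq_growth=0.50, quarters=4):
    """Return the GCP-booked Gemini ARR for each of the next `quarters`."""
    arr = gemini_arr_start
    booked = []
    for _ in range(quarters):
        arr *= 1 + qoq_growth               # 50% QoQ ARR growth
        booked.append(arr * booking_share)  # 70% recognized in GCP
    return booked

# Hypothetical $2bn starting ARR (illustrative only): the GCP-booked amount
# compounds at the same 50% QoQ rate as the ARR itself (1.5^4 ≈ 5.06x).
booked = gemini_gcp_contribution(2.0)
```

Because the booked contribution inherits the 50% QoQ compounding, a single driver of this kind can move a segment as large as GCP.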
FundaAI@FundaAI

Review| $GOOG / $MSFT / $AMZN : CapEx Raised — 2026 Finally Becomes the Year of Cloud Acceleration

Q: Is 2026 the most pronounced year of CSP acceleration?
A: Unequivocally yes. GPU prices began rising in 1Q26, with CPU prices following from 2Q26. The incremental growth this year is also coming from new vectors that weren’t material before — namely coding APIs (AWS-Bedrock, GCP-Vertex, Azure-AI Foundry) and ASICs (AWS-Trainium, GCP-TPU).

Q: Will CapEx continue to be revised higher?
A: Yes. As we discussed in our Weekly Meeting, the LTAs (long-term agreements) signed between CSPs and the memory makers took effect from March–April. This means all forward CapEx must be marked to the LTA floor price — implying a structural step-up regardless of demand. Critically, LTAs lock in floors but do not cap upside, so any further spot price appreciation will likely drive additional CapEx revisions.

Q: Why did GCP and Azure come in fully in line with our expectations, while AWS missed?
......

Detailed Report fundaai.substack.com/p/review-goog-…
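The mark-to-floor mechanics in the CapEx answer reduce to a one-line rule. A minimal sketch with illustrative prices (none of these figures are from the report):

```python
# "Marked to the LTA floor price": LTAs set a floor on memory pricing but no
# ceiling, so the planned unit cost is the greater of spot and the floor.
# All prices below are illustrative placeholders.

def planned_unit_cost(spot_price: float, lta_floor: float) -> float:
    """Floor locked in; spot above the floor passes through in full."""
    return max(spot_price, lta_floor)

# Structural step-up regardless of demand: even if spot softens to 8,
# forward CapEx is still planned at the floor of 10.
assert planned_unit_cost(spot_price=8.0, lta_floor=10.0) == 10.0
# Upside uncapped: spot appreciation above the floor raises CapEx further.
assert planned_unit_cost(spot_price=12.0, lta_floor=10.0) == 12.0
```

The asymmetry (bounded below, unbounded above) is exactly why the note expects only upward CapEx revisions from here.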

FundaAI
FundaAI@FundaAI·
A very interesting $APP preview. Across the seven expert interviews conducted for this preview, we did not see any impact from META on AppLovin's gaming ads. Instead, we saw e-commerce accelerating since late February, and for the first time in nearly two years, we heard agencies describe CTV as an incremental growth driver.
FundaAI@FundaAI

Preview|APP 1Q26: No META Impact Observed; E-Commerce Reaccelerating After 1Q26 QoQ Decline; Gaming Growth Robust

We recently spoke with seven AppLovin and Unity industry experts and gathered updates on several key topics:
- AppLovin ($APP) quarterly channel checks
- APP e-commerce progress
- META’s entry into the gaming ad market and its impact on AppLovin
- Unity ($U) quarterly channel checks

Detailed Report fundaai.substack.com/p/previewapp-1…

FundaAI
FundaAI@FundaAI·
We raised our full-year 2027 TPU shipment forecast in our $LITE and $COHR previews.
FundaAI@FundaAI

Weekly|Cloud Acceleration Confirmed, CSP CapEx Raised, DeepSeek V4 Triggers NAND Inflection, TikTok, $LITE & $COHR, TPU CoWoS Raised, $RKLB, $PLTR, $AXTI, $MSFT

Earnings season delivered the print AI infra bulls needed. GOOG/MSFT/AWS all raised CapEx alongside accelerating top-line — GCP +63%, Azure +39%, AWS likely re-accelerating in 2Q once OpenAI’s reserved Blackwell capacity flows through. CapEx up with revenue accelerating is structurally healthier than CapEx up with revenue flat, and should put data-center cash-flow concerns on the back burner for at least another quarter. DeepSeek V4 was the more underappreciated story of the week — the second round of cuts (cache hit at 1/10 of list, stacked on 75% off) is only possible because KV cache is finally migrating from DRAM/HBM onto SSD at scale, which is a structural inflection for NAND demand.

On another note, we have received a lot of great feedback on our recent Institutional tier launch. Here at FundaAI, our Individual Plan on Substack continues to provide highly curated research alongside community engagement on Substack Chat, while the Institutional tier offers the broadest research coverage with high-touch communications. We are incredibly grateful to our long-time readers here on Substack and remain committed to delivering sharper insights and more powerful AI tools to elevate your research work. Please contact sales@funda.ai for more.

This Week’s Reports
- Review|GOOG/MSFT/AWS: CapEx Raised — 2026 Finally Becomes the Year of Cloud Acceleration. All three hyperscalers raised CapEx with revenue accelerating, and we expect further upward revisions as LTA floor pricing flows through forward planning. fundaai.substack.com/p/review-goog-…
- Deep|DeepSeek V4: The Inflection Point for Large-Scale NAND-Based KV Cache. V4’s cache-hit tier was cut to 1/10 of list because KV cache is finally migrating from DRAM/HBM onto SSD at scale — positive for SSD, exponential for NAND. fundaai.substack.com/p/deepdeepseek…
- Deep|DeepSeek V4: The First Model Custom-Built for Non-NVIDIA Chips, Optimized for Cost. V4’s architecture appears co-designed for Ascend, ASICs, and optical scale-up superpods — the real start of a non-NVIDIA AI ecosystem rather than just NVIDIA-substitute compatibility. fundaai.substack.com/p/deepdeepseek…
- Deep|TikTok 1Q26 Update: No Visible Impact on META or GOOG Yet. Agency checks across NA / Europe / APAC: TikTok ad budgets growing, but META and YouTube have not seen the cannibalization the consensus expected — the clearest loser remains Snapchat. fundaai.substack.com/p/deep-tiktok-…
- Preview|MSFT 1Q26: GPU Pricing Stabilizing, Azure Begins Anthropic Partnership, but Copilot Competition Remains Intense. fundaai.substack.com/p/previewmsft-…

Detailed Report fundaai.substack.com/p/weeklycloud-…

FundaAI
FundaAI@FundaAI·
We wrote about $AXTI at the bottom, on April 27.
FundaAI
FundaAI@FundaAI·
Review| $GOOG / $MSFT / $AMZN : CapEx Raised — 2026 Finally Becomes the Year of Cloud Acceleration Q: Is 2026 the most pronounced year of CSP acceleration? A: Unequivocally yes. GPU prices began rising in 1Q26, with CPU prices following from 2Q26. The incremental growth this year is also coming from new vectors that weren’t material before — namely coding APIs (AWS-Bedrock, GCP-Vertex, Azure-AI Foundry) and ASICs (AWS-Trainium, GCP-TPU). Q: Will CapEx continue to be revised higher? A: Yes. As we discussed in our Weekly Meeting, the LTAs (long-term agreements) signed between CSPs and the memory makers took effect from March–April. This means all forward CapEx must be marked to the LTA floor price — implying a structural step-up regardless of demand. Critically, LTAs lock in floors but do not cap upside, so any further spot price appreciation will likely drive additional CapEx revisions. Q: Why did GCP and Azure come in fully in line with our expectations, while AWS missed? ...... Detailed Report fundaai.substack.com/p/review-goog-…
FundaAI tweet media
English
0
2
13
8.7K
FundaAI retweeted
FundaAI
FundaAI@FundaAI·
Deep|DeepSeek V4: The Inflection Point for Large-Scale NAND-Based KV Cache

In our previous article we discussed DeepSeek V4’s architectural customization on non-NVIDIA hardware and the first round of API price cuts at 75% off. This article focuses on V4’s second round of cuts: DeepSeek separately took the input cache-hit tier further down to 1/10 of list, stacked on top of the 75% off from the previous round, with the floor at ¥0.025 per million tokens. This widens the cache hit / cache miss spread from 1/12 to 1/120 (cache hit ¥0.025 vs. cache miss ¥3).

DeepSeek V4’s real-world cache hit rate in agent settings has reached 95%+, and based on our research, DeepSeek’s current SSD configuration and utilization have stepped up materially versus before. Behind this is V4 compressing KV cache size to 10% of V3.2’s, plus DeepSeek’s accumulated engineering work on SSD-based KV cache, which together migrate KV cache from expensive, capacity-limited DRAM/HBM onto larger and cheaper SSD at scale.

We believe DeepSeek V4’s cache-hit repricing implies upside for SSD, with NAND demand set to grow exponentially.

Detailed Report fundaai.substack.com/p/deepdeepseek…
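The repricing can be made concrete with the two published tiers. A quick blended-cost check, using the ¥0.025 / ¥3 per-million-token figures and the 95% agent-setting hit rate quoted in the note:

```python
# Blended input cost per million tokens across cache hits and misses,
# using the tier prices quoted above (hit ¥0.025/M, miss ¥3/M tokens).

def blended_cost(hit_rate, hit_price=0.025, miss_price=3.0):
    """Expected cost per million input tokens at a given cache hit rate."""
    return hit_rate * hit_price + (1 - hit_rate) * miss_price

spread = 3.0 / 0.025             # cache miss / cache hit = 120x
cost_at_95 = blended_cost(0.95)  # ≈ ¥0.174 per million input tokens
```

At a 95% hit rate the effective price lands around ¥0.17/M, roughly 17x below the miss tier, which is why these economics only work once the cache can live on cheap, capacious SSD rather than DRAM/HBM.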
FundaAI
FundaAI@FundaAI·
Funda AI new skills released today:
1. Inflation Pass Through Tracker: Compare inflation pass-through rates across Consumer Staples to identify margin pressure and pricing divergence.
2. Consumer Spending Tracker: Analyze Consumer Discretionary sector trends using public government and industry spending indicators.
3. Private Label Penetration: Track private-label share trends vs. branded products to assess consumer trade-down dynamics.
4. Consumer Price Index Analysis: Break down CPI components to assess pricing power and margin dynamics across Consumer Staples.
5. Retail Earnings Analysis: Break down retail earnings comps, traffic vs. ticket, inventory turns & guidance for sector analysis.
6. Investment Thesis: Structure a comprehensive investment analysis: bull/bear cases, catalysts, risks, management quality & valuation.

Please help us share with more people if you find it useful. We look forward to hearing your feedback.
FundaAI retweeted
Illiquid
Illiquid@illyquid·
You can now trade Korea on IBKR. I asked FundaAI to supplement my list of ideas.
FundaAI
FundaAI@FundaAI·
We tested DeepSeek V4 in financial scenarios, and the results were impressive. Unfortunately, the GPT-5.5 API is not yet available — it would certainly perform even better. In coding scenarios, our engineers reported that DeepSeek falls far behind Claude in UI design and web design, likely due to insufficient multimodal training. However, in text-based coding, it is already very close to Claude 4.6. That said, there is a noticeable gap compared to Claude 4.7 and Codex 5.5. This is still consistent with the prior consensus that open-source models lag 3–6 months behind OpenAI/Anthropic.
FundaAI@FundaAI

Deep|DeepSeek V4 vs Claude vs GPT-5.4: A 38-Task Benchmark Across Coding, Reasoning, and Financial Research

Important note: This is not a research report. It is an evaluation report completed by the FundaAI Engineering Team, not written by the FundaAI Analyst Team, and it does not represent the views of the FundaAI Analyst Team. All test cases are based on the actual working environment of the FundaAI Platform. As of the time of publication, GPT-5.5 has not yet officially released its API; testing solely through Codex 5.5 may not fully reflect the complete performance of the API. We have so far only conducted urgent testing on DeepSeek V4, and will include GPT-5.5 test results as soon as its API becomes officially available.

Key Takeaways
- Claude Opus 4.6 (Thinking) and Claude Opus 4.7 tie for #1 overall (both 8.72 weighted avg). They lead for different reasons: Opus 4.6 Thinking is strongest on coding and hard reasoning, while Opus 4.7 leads writing and full-coverage multi-step work.
- DeepSeek V4 Pro (Thinking) has the highest completed-task multi-step score at 8.90, ahead of Opus 4.7 at 8.87, but with partial coverage: 29/38 tasks completed because several hard coding/reasoning tasks timed out.
- DeepSeek V4 Pro received the only financial research 10/10 because it produced the strongest answer to the NVDA game theory task. It did not score highly because it was long; it scored highly because it fully developed 11 players, 18 citations, and forced-move economics.
- GPT-5.4 remains the fastest full-suite model (105s avg), with strong coding and reasoning, but its latest composite score is 7.88 and it no longer leads the coding table.
- DeepSeek V4 has a substantially lower estimated cost than Claude in this benchmark. Flash is ~$0.007/task, Flash Thinking is ~$0.008/task, Pro is ~$0.10/task, and Pro Thinking is ~$0.15/task, all below Claude Opus per-task cost estimates.
- DeepSeek V4’s relative weakness is presentation format more than analysis quality. It generally produces strong markdown research, while Claude Opus 4.5 more readily produces dashboard-ready OpenUI charts, metric cards, and data tables.
- The frontier is now a three-way race between Anthropic (writing, earnings, citation rigor), DeepSeek (analytical depth, data synthesis, cost), and OpenAI (speed, incident debugging, system design). GPT-5.5 API results still need to wait until the official API becomes available.

Detailed Report fundaai.substack.com/p/deepdeepseek…
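The second takeaway (highest completed-task score, but partial coverage) is easy to mis-read. A small sketch with entirely made-up scores shows how a model can lead on the tasks it finishes while trailing on full coverage:

```python
# Completed-task average vs. coverage-penalized average. A model that times
# out on hard tasks (None) can post a higher average over the tasks it
# finished while trailing once timeouts count against it. Scores are made up.

def completed_avg(scores):
    done = [s for s in scores if s is not None]  # None = timed out
    return sum(done) / len(done)

def coverage_penalized_avg(scores, timeout_score=0.0):
    return sum(timeout_score if s is None else s for s in scores) / len(scores)

a = [9.2, 9.0, None, None, 8.6]   # partial coverage, strong where it finishes
b = [8.9, 8.8, 8.7, 8.9, 8.8]     # full coverage, slightly lower peaks

assert completed_avg(a) > completed_avg(b)
assert coverage_penalized_avg(a) < coverage_penalized_avg(b)
```

This is the same shape as the 8.90-vs-8.87 comparison in the note: the ranking depends entirely on how the 9 timed-out tasks are scored.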

FundaAI
FundaAI@FundaAI·
TD Cowen’s core mistake is simple: they are comparing TPU 8i to the wrong baseline. TPU 8i is not meant to replace Google’s large training superpods, nor is it a substitute for 8t. It belongs to Google’s long-standing smaller serving/inference pod track, so the right comparison is the legacy 256-chip small pod, not the 8,000–9,000+ chip training superpod. Once you compare 8i to the large training cluster and conclude “optical intensity is down,” the analysis is already off, because the baseline itself is wrong.

On the right baseline, 8i actually looks much more like a major copper-to-optics transition. It is not weakening optics; it is pushing optics into the most important inter-rack, inter-group layer of the small pod architecture. The old 256-chip pod — four 64-chip racks — was all-copper. With 8i, the system scales to 1,152 chips, and inter-rack connectivity moves to all-optical. In other words, Google is not replacing its OCS-heavy large superpod path with something less optical. It is replacing a smaller, more copper-heavy pod design with an optical scale-up architecture. So the right conclusion is not that optics are going down — it is that TD Cowen is comparing the wrong thing.

Put even more simply: Google’s TPU fleet used to have two scale-up models — some pods scaled up with OCS and optics, while others scaled up with copper. Now that smaller pod path is also moving to optical scale-up. That is not a negative read-through for optics. It is a positive one. $LITE
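The baseline arithmetic is simple to pin down. A sketch using only the chip counts from the note (the dict layout itself is just illustrative):

```python
# Right-baseline comparison from the note: legacy small serving pod
# (four 64-chip racks, copper inter-rack) vs. the TPU 8i small pod
# (1,152 chips, optical inter-rack). Chip counts are from the note.

legacy_pod = {"chips": 4 * 64, "inter_rack_links": "copper"}
tpu8i_pod = {"chips": 1152, "inter_rack_links": "optical"}

scale_up = tpu8i_pod["chips"] / legacy_pod["chips"]  # 4.5x larger pod
# The copper-to-optics read: the layer that grows (inter-rack) goes optical.
```

Against the 256-chip pod, 8i is a 4.5x scale-up whose new inter-rack fabric is optical; against the 8,000+ chip superpod it would look like a step down, which is exactly the baseline error the note describes.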
FundaAI@FundaAI

Research|TPU 8t/8i and Virgo Network: the biggest networking upgrade since TPU v4 — and another major win for optics (one we called early)

Going through $GOOG ’s two recent technical posts on Virgo Network — cloud.google.com/blog/products/… and cloud.google.com/blog/products/… — what stands out is how much of it aligns with what we had already outlined months ago:
- Deep|AI Infra 2026: Shifting from "Brain Power" Competition to "Whole-Body" Evolution fundaai.substack.com/p/deepai-infra…
- Deep|LITE: Google’s Newly Introduced In-rack Switch Tray is Actually a Positive for OCS and Optics fundaai.substack.com/p/deeplite-goo…

Back then, in our note on the in-rack switch tray, we tried to unpack a key misconception: this upgrade wasn’t about replacing the 3D torus. The real change was happening in the DCN — the scale-out network. We put it plainly: “The in-rack switch tray represents a change in this DCN network, not in the scale-up network.” And more importantly: “Dragonfly-topology based scale up and scale out integration, with higher network bandwidth overall.”

At the time, the logic made sense to some but lacked hard confirmation. Looking at Google’s own write-ups now, that confirmation is here — and if anything, the direction is even more aggressive than expected.

Detailed Report fundaai.substack.com/p/researchtpu-…

FundaAI
FundaAI@FundaAI·
Deep| $NVDA : Rethinking NVIDIA's Moat in the AI Stack

NVIDIA’s hardware leadership is virtually undisputed. In many respects, the company is not only one of the most successful enterprises of the AI era but also one of the principal infrastructure providers enabling this generational shift in compute — a position that we think deserves full respect. Over the past several years, the certainty of NVIDIA’s roadmap execution has been rare: from Blackwell to Vera Rubin, and onward to Rubin Ultra and Feynman, the cadence has held essentially at one generation per year.

At the same time, NVIDIA has not confined itself to the GPU. It has continued to extend at the system level — whether by folding inference capability (such as the LPU architecture) into the platform, pushing co-packaged optics to pave the way for hyperscale clusters, or reshaping the boundary between storage and GPU memory through directions like Storage-Next. At heart, these moves reinforce its “AI factory”-grade system capability. Combined with deep entrenchment in advanced packaging capacity, NVIDIA has erected exceptionally strong moats along both the hardware and the supply-chain dimensions.

However, focusing solely on hardware leadership would still understate NVIDIA’s core competitiveness. What we think deserves closer attention is its profit structure: Data Center already accounts for the overwhelming majority of revenue, the chip-level gross margin for the GPU is roughly 84%, and overall GAAP gross margin reached 75% (FY2026 Q4). In the semiconductor industry, this is almost an outlier — closer to the profitability profile of a software company than that of a conventional hardware vendor. Precisely for that reason, we believe NVIDIA’s current success is driven not by hardware alone but by its end-to-end pricing power across systems, software, and ecosystem. This, perhaps, is the key dimension worth examining more deeply and continually stress-testing going forward.

...

Where the CUDA Moat Faces Pressure

The timeline runs back to April 15, 2026. Dwarkesh Patel (@dwarkesh_sp) released a 103-minute deep-dive interview with Jensen Huang. The questioning was highly concentrated and direct, centered on four vectors: Anthropic’s shift to TPU, the contraction of the China market, hyperscaler progress on in-house silicon, and the trend among top-tier labs of bypassing cuBLAS via Triton. On these questions, Huang’s responses followed a relatively consistent pattern: acknowledging parts of the current reality on one hand, while attributing these shifts to phase-specific or non-replicable special cases on the other. This “acknowledge + case-isolate” combination is unsurprising in itself, but it also implies that these phenomena have at minimum developed to the point where they can no longer be ignored.

Notably, Michael Burry, in public commentary, summarized the response as “squirmed and obfuscated.” Whether or not one agrees with that characterization, such feedback from market participants itself reflects a fact: disagreement is widening around the stability of the CUDA moat. It is precisely against this backdrop that we tend to read the current moment as a “stress test” — the question is not whether a moat exists, but whether its boundaries and formation mechanisms are changing. In what follows, we attempt to decompose the moat across three layers.

Detailed Report fundaai.substack.com/p/deepnvda-ret…
FundaAI
FundaAI@FundaAI·
Review| $NOW 1Q26: Becoming CRM-ish? NOW’s print was disappointing. The updated FY26 subscription revenue guide (constant currency), excluding M&A contributions, was revised down by 25bps. Management attributed the weakness primarily to delayed Middle East on-premise orders impacted by the war, but the result nonetheless reflects the ongoing deceleration in NOW’s core organic business. More importantly from a growth-driver and reporting-transparency perspective, NOW is starting to resemble CRM from several years ago — relying increasingly on inorganic M&A to offset a slowing core business, which reduces financial statement transparency and pressures both cash flow and margins. On AI monetization, management raised the 2026 year-end “AI commit” target from $1bn to $1.5bn, and committed to providing further detail at the May 4 Investor Day. Detailed Report fundaai.substack.com/p/reviewnow-1q…
FundaAI@FundaAI

Preview| $NOW 1Q26: Growth Largely In Line as Macro Stays Choppy; AI Control Tower Emerges as New Narrative We spoke with three top ServiceNow (NOW) channel partners. 1Q26 performance came in largely in line with expectations — growth moderated from the 4Q25 budget-flush quarter, but full-year 2026 targets remain intact, with pipeline visibility broadly healthy. Overall, AI adoption remains soft, but we think the most notable incremental datapoint this quarter is the customer traction of AI Control Tower (AICT), which is showing up in the majority of new pipeline deals. Early anecdotes suggest that hardware pricing inflation is squeezing software budgets to some extent. Despite generally low expectations for results following the recent sell-off, we remain cautious about the positioning of application SaaS companies as LLMs become increasingly powerful. Detailed Report fundaai.substack.com/p/previewnow-1…

FundaAI
FundaAI@FundaAI·
*SK HYNIX: BETTING ON HIGH-BANDWIDTH FLASH AS NEXT-GEN PRODUCT
High IOPS - Kioxia
FundaAI@FundaAI

Deep| $Kioxia: Not a Flash in the Pan

Historically, the dominant narrative in AI infrastructure investment has been GPU-centric — more compute unlocks better models with higher benchmark scores. However, the focus has now shifted from training scaling laws to agent scaling laws, where the relevant input is not training compute. Rather, agent performance on complex multi-step tasks scales with the amount of context and memory available to each agent, the number of parallel agents that can collaborate or verify each other’s work, and the number of reasoning steps the agent can take before returning a result.

Agent scaling is bottlenecked by the infrastructure surrounding the GPU — you need the agent’s working memory to be accessible fast enough, cheaply enough, and at large enough scale to support long sessions. Infrastructure layers that were historically secondary — NAND storage, DRAM, CPU orchestration, and high-bandwidth interconnects — become critical determinants of system performance and cost.

In our GTC preview, we observed that “NAND should not be considered a secondary topic in GTC discussions” because it is becoming a “core architectural variable”. In an even earlier post, we noted that the investment logic for DRAM and SSD increasingly resembles “AI infrastructure growth” rather than merely “cyclical beneficiaries of price upcycles.” In this note on Kioxia, we expand on these observations. And this is the first report that @illyquid has contributed to!

Detailed Report open.substack.com/pub/fundaai/p/…

FundaAI
FundaAI@FundaAI·
If you compress all of this into one line: Google isn’t just upgrading its network — it’s shifting the entire AI system from compute-driven to bandwidth-driven. And once bandwidth becomes the constraint, optics, OCS, transceivers, and lasers move from the periphery to the core of the system. It’s a path we’ve been calling for some time. Now it’s written, clearly, in Google’s own architecture.