Curly Chen

416 posts


@Fangchen0105

Fitness; right-leaning libertarian

Southern District, Hong Kong · Joined July 2025

302 Following · 50 Followers

Pinned Tweet
Curly Chen@Fangchen0105·
AI agents — focusing on CPU decision logic and API calls — plus how various AI applications are landing in practice, including Tesla's self-driving. This is AI's big exam year. AMD INTC RKLB, to be validated by end of 2026.
Curly Chen retweeted
fin@fi56622380·
@yiran2037840 Excellent article, strongly agree 👍 Using accuracy vs. parameter count to estimate model size — reading that was mind-blowing at first glance. OpenAI is getting no interest in the primary market right now, while Anthropic at a $900B valuation has buyers lined up out the door; OpenAI really is undervalued. BTW, Opus 4.7 really is unbearable — it regularly makes me want to spit blood; I haven't even used half of my Claude quota this week 😅
Curly Chen retweeted
情報の灯台@joho_no_todai·
AMD and Intel jointly propose "ACE", pushing x86 matrix math to 16x in one step. AMD and Intel have jointly published a new matrix-math extension for x86 called ACE. Like the AVX10 instructions, it takes two input vectors, and emits 1,024 multiplies per instruction. The design extracts 16x the compute density of prior approaches without increasing register pressure. The aim is a world where the same instructions run AI everywhere from laptops to data centers. It breaks open the constraint that kept AMX locked to servers, by agreement of both companies. Two long-time rivals have signed the same spec side by side — that fact alone shows the crossroads x86 now stands at. joho-todai.com/amd-intel-join…
Curly Chen@Fangchen0105·
Bye Powell
Curly Chen retweeted
Jukan@jukan05·
Citing UBS's AMD earnings preview:
- The market is clearly aware that INTC's guidance reads very favorably for AMD.
- This is particularly true for server CPU, given commentary implying that INTC is undershipping the market by roughly 20%.
- The key question here is AMD's supply, but our field work throughout the quarter indicated that supply of INTC parts has been far more constrained than that of AMD parts.
- We therefore see a very favorable setup for AMD and expect revenue to be guided up at least $1B Q/Q to the low $11B range (vs. Street consensus of ~$10.4B).
- Intel's CY2026 outlook implies its Data Center & AI segment growing ~40% Y/Y, and we now see AMD server CPU growing as much as 80% this year, with units up ~40–45% and pricing up around 20%.
- From a competitive standpoint, our checks remain constructive. AMD's CPU portfolio continues to compare favorably with Intel's offerings, and the lack of meaningful timeline updates for Diamond Rapids and Coral Rapids reinforces our view that AMD should maintain a competitive advantage across the x86 ecosystem through C2026.
$INTC $AMD
Curly Chen retweeted
Iggie🚁@Kenntnis22·
After the communism cycle, now there's a Xi Jinping cycle 😃
Curly Chen retweeted
圣光之辉@SGZH99·
Why doesn't the West have "every grain comes from toil"? Because it actually treats people as people!
Curly Chen retweeted
Chris Lee@ViewsOfChris·
"The world never rewards hard work — only judgment and compounding." — Naval Ravikant
Curly Chen retweeted
Hardik Shah@AIStockSavvy·
📢 JUST IN: $AMD announced "Advancing AI 2026," its flagship global AI event, will be held both in-person and livestreamed from the San Francisco Moscone Center on July 23, 2026.
Curly Chen retweeted
Jukan@jukan05·
I didn’t realize this before. I naturally assumed Intel would have a supply advantage over AMD in CPUs. But Qualcomm and MediaTek apparently cut their TSMC 4/5nm orders because mid- to low-end smartphone shipments literally collapsed. AMD seems to have taken over that capacity. Right now, even CPUs produced on older nodes are being bought up by customers lining up for supply. AMD’s earnings could be worth getting excited about.
Curly Chen retweeted
Oguz Erkan@oguzerkan·
$AMD is nearing its $NVDA moment.. Digitimes saw data center AI chip shipments at 53 million units in 2030. They estimated that 10% of this would be data center CPUs, 13% would be networking processors. This comes up to roughly 8:1 GPU-to-CPU ratio. If this balance shifts to 2:1 due to agentic AI, we'll need to ship 15 million more CPUs in 2030. At an average price of $10,000, we are looking at an additional $150 billion in CPU revenue in 2030. Assuming 75%-25% mix of x86 and $ARM, and 40% revenue share for $AMD in the x86 segment, $AMD will generate $45 billion in revenue from data center CPUs in 2030. For reference, its full-year revenue last year was just $35 billion. Couldn't be more bullish on $AMD.
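The arithmetic in the tweet above can be checked with a quick back-of-the-envelope script. All inputs are the tweet's own assumptions (the Digitimes 2030 shipment forecast plus the author's guesses on ratios, ASP, and share), not verified data:

```python
# Back-of-the-envelope check of the $AMD data-center CPU revenue estimate.
total_units_2030 = 53e6            # DC AI chip shipments in 2030 (Digitimes estimate)
cpu_share, net_share = 0.10, 0.13  # assumed mix: CPUs and networking processors
cpus = total_units_2030 * cpu_share                     # ~5.3M CPUs
gpus = total_units_2030 * (1 - cpu_share - net_share)   # remainder treated as GPUs

ratio = gpus / cpus                 # roughly 8:1 GPU-to-CPU, as the tweet says
cpus_at_2_to_1 = gpus / 2           # CPUs needed if agentic AI shifts this to 2:1
extra_cpus = cpus_at_2_to_1 - cpus  # ~15M additional CPUs

asp = 10_000                        # assumed average selling price per CPU
extra_revenue = extra_cpus * asp    # ~$150B incremental CPU revenue
amd_slice = extra_revenue * 0.75 * 0.40  # 75% x86 mix, 40% AMD share of x86

print(f"GPU:CPU ratio ≈ {ratio:.1f}:1")                      # -> 7.7:1
print(f"extra CPUs ≈ {extra_cpus/1e6:.1f}M")                 # -> 15.1M
print(f"extra revenue ≈ ${extra_revenue/1e9:.0f}B")          # -> $151B
print(f"AMD slice ≈ ${amd_slice/1e9:.0f}B")                  # -> $45B
```

The numbers reproduce the thread's claims: ~8:1 today, ~15M extra CPUs at 2:1, ~$150B incremental revenue, and ~$45B for AMD under the assumed mix.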
Curly Chen retweeted
Akshay 🚀@akshay_pachaar·
CPU vs GPU vs TPU vs NPU vs LPU, explained visually:

5 hardware architectures power AI today. Each one makes a fundamentally different tradeoff between flexibility, parallelism, and memory access.

> CPU
Built for general-purpose computing. A few powerful cores handle complex logic, branching, and system-level tasks, backed by deep cache hierarchies and off-chip main memory (DRAM). Great for operating systems, databases, and decision-heavy code, but not great for repetitive math like matrix multiplications.

> GPU
Instead of a few powerful cores, GPUs spread work across thousands of smaller cores that all execute the same instruction on different data. This is why GPUs dominate AI training: the parallelism maps directly to the kind of math neural networks need.

> TPU
TPUs go one step further with specialization. The core compute unit is a grid of multiply-accumulate (MAC) units where data flows through in a wave pattern. Weights enter from one side, activations from the other, and partial results propagate without going back to memory each time. The entire execution is compiler-controlled, not hardware-scheduled. Google designed TPUs specifically for neural network workloads.

> NPU
An edge-optimized variant. The architecture is built around a Neural Compute Engine packed with MAC arrays and on-chip SRAM, but instead of high-bandwidth memory (HBM), NPUs use low-power system memory. The design goal is to run inference at single-digit-watt power budgets: smartphones, wearables, and IoT devices. Apple's Neural Engine and Intel's NPU follow this pattern.

> LPU (Language Processing Unit)
The newest entrant, from Groq. The architecture removes off-chip memory from the critical path entirely: all weight storage lives in on-chip SRAM. Execution is fully deterministic and compiler-scheduled, which means zero cache misses and zero runtime scheduling overhead. The tradeoff is limited memory per chip, so you need hundreds of chips linked together to serve a single large model. But the latency advantage is real.

AI compute has evolved from general-purpose flexibility (CPU) to extreme specialization (LPU). Each step trades some generality for efficiency. The visual below maps the internal architecture of all five side by side.

👉 Over to you: Which of these 5 have you actually worked with or deployed on?
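The TPU's MAC-grid idea can be illustrated with a tiny pure-Python simulation — a toy sketch only, not any vendor's actual design, and simplified in that it ignores the diagonal input skew of a real systolic schedule. Each output cell accumulates locally while activations and weights stream past, so partial sums never round-trip to memory:

```python
# Toy simulation of a TPU-style multiply-accumulate grid.
# Each cell (i, j) is a MAC unit with its own accumulator; on each "clock
# tick" t it sees one activation a[i][t] and one weight b[t][j].
def systolic_matmul(A, B):
    m, k, n = len(A), len(B), len(B[0])
    acc = [[0] * n for _ in range(m)]           # one accumulator per MAC cell
    for t in range(k):                          # k streaming clock ticks
        for i in range(m):                      # activation rows flow across
            for j in range(n):                  # weight columns flow down
                acc[i][j] += A[i][t] * B[t][j]  # local multiply-accumulate
    return acc

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # -> [[19, 22], [43, 50]]
```

The point of the structure: the inner accumulation never writes partial sums back to "memory" (here, only the per-cell `acc` is touched), which is exactly the property that lets real systolic arrays avoid the memory round-trips a CPU or GPU would make.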
Curly Chen retweeted
Trade Whisperer@TradexWhisperer·
$MU $DRAM CPU demand is going up, and this is extremely bullish for Micron, Samsung & SK Hynix.

$INTC's CEO confirmed the CPU-to-GPU ratio is shifting from 1:8 toward 1:1. That is an 8x increase, and it has HUGE implications for memory demand.

These are NOT NVIDIA CPUs. These are standalone Xeon racks running agent orchestration, RAG pipelines, tool calling, and multi-agent coordination alongside GPU clusters.

Traditional servers ran 128 to 256GB of DRAM. AI-optimized servers now ship with 512GB to 1TB or more per node. Every one is a new high-margin server DRAM demand event — higher margin than HBM. Every server also needs significantly more SSD as model sizes and context windows grow.

The demand multiplier is not 8x. It is closer to 16x or more when you stack both numbers together. Meanwhile, GPU nodes simultaneously need more HBM and SOCAMM as context windows expand.

Two demand vectors. One direction. Neither slowing down. Valuation multiple expansion & structural shift incoming.
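The "16x or more" claim is just the two multipliers in the tweet stacked together — its assumptions, not verified figures:

```python
# Stacking the tweet's two multipliers: 8x more CPU nodes (1:8 -> 1:1),
# and 2-4x more DRAM per node (256GB -> 512GB..1TB).
cpu_multiplier = 8
dram_low = 512 / 256    # 2x per node at the low end
dram_high = 1024 / 256  # 4x per node at the high end

low, high = cpu_multiplier * dram_low, cpu_multiplier * dram_high
print(f"combined server DRAM demand multiplier: {low:.0f}x to {high:.0f}x")
# -> 16x to 32x
```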
Curly Chen retweeted
Sandeep Anand@SanCompounding·
🚨 Intel says CPU-to-GPU deployment ratios have tightened from 1:8 to 1:4, and could touch 1:1 in agentic inferencing.

📈 Mizuho sees server CPU ASPs rising 10–15% through 2026 and potentially out to 2030, with Bernstein modeling 36% YoY Xeon revenue growth in 2026.

🚀 Stocks to benefit from the CPU thesis:

$INTC — The Turnaround. Q1'26 Data Center & AI revenues up 22%. Lip-Bu Tan thesis:
✅ Edge + agentic inference = Xeon demand inflection.
✅ Capacity reallocation from weak PC to high-margin server expands the earnings bridge.

$AMD — The Market-Share Gainer. Q4'25 revenue $10.27B (+34% YoY); record data center revenue of $5.38B; +269%.
— D.A. Davidson upgraded to Buy with PT $375; Stifel to $320.
— Jefferies calls AMD the real beneficiary of the CPU boom while MI450 ramps alongside.

$BABA — The Cheapest CPU Play.
— XuanTie C950 RISC-V CPU built for agentic inference, claiming a 30%+ perf gain over mainstream products.
— T-Head's PPU was China's highest-shipping domestic GPU in 2025; spinoff/IPO catalyst pending.
— 10,000-Zhenwu-chip cluster live in Guangdong, expanding to 100,000.
$BABA = full-stack China AI infra (Qwen + chips + cloud).

$ARM — The IP Royalty. Hyperscaler custom silicon (Graviton, Cobalt, Axion) all runs Arm — every Nvidia Grace/Vera shipment is a royalty event.

$DRAM — Memory is the silent CPU co-trade. Server CPU demand = DRAM demand. $MU was top-3 by sector turnover on Intel earnings day, +4.28%. SK Hynix profits set to double on memory pricing.

$AVGO — Custom Arm CPUs for hyperscalers (Google Axion partner). The "Arm on the back end" of the hyperscaler CPU shift.

$DELL — CPU server box winner. ASP uplift on Xeon/EPYC flows through to system pricing. $TSSI an indirect beneficiary.

$TSM — All roads lead here. AMD Venice on N2, Arm Phoenix, Apple, and Nvidia Grace are all done in TSMC fabs.
Curly Chen@Fangchen0105·
$AMD $INTC $ARM Who will account for the largest proportion of the CPU market in the future?
Curly Chen retweeted
Patrick Moorhead@PatrickMoorhead·
AMD’s in such a great position. Exceptional datacenter CPUs, the IP and SOC architectures to do about any derivative a customer wants. And this is before Helios has kicked in. $AMD
Curly Chen retweeted
Macro_Lin|市场观察员
Evercore ISI's Mark Lipacis upgraded Intel straight from Neutral to Outperform, raising his price target from 45 to 111. The stock is already up 100% year to date; he himself admits the timing isn't great, but he still chose to turn bullish at this level.

He gave three reasons. First: as AI workloads move toward inference and agents, the demand weighting of CPUs will rise sharply — the CPU-to-GPU ratio could flip from 1:8 to 8:1. Second: Lip-Bu Tan has repaired the balance sheet, and both products and manufacturing are getting back on track. Third: geopolitics — Intel is the only US-domestic chipmaker with leading-edge process capacity, and it has allied with the US government, NVIDIA, and Tesla.

That 8:1 figure is very aggressive. As we said in our previous post analyzing Intel's earnings call, the CPU's weighting in inference clusters will recover: agents need task decomposition, request routing, context management, and database access, all of which run on CPUs. The direction is right, but flipping straight from 1:8 to 8:1 would mean agent system overhead far outweighing the inference compute itself, and nothing like that ratio shows up in current deployment practice. Moving from 1:8 to 1:4 or even 1:2 is more realistic.

Still, that Lipacis dares to call a 111 target after Intel has already doubled shows he is betting on a structural re-rating — a bet that the market's understanding of the CPU's role in the AI era is still far from priced in.

Notably, he barely mentioned advanced packaging. Our earlier analysis argued that Lip-Bu Tan elevating advanced packaging to the same strategic tier as the x86 CPU was the most underrated signal of the call. That thread is missing from Lipacis's framework; if the packaging narrative also gets priced in, Intel's re-rating headroom could be even larger than his 111.
Curly Chen retweeted
qinbafrank@qinbafrank·
Worth reading: the first paper to establish, from an academic angle, that the CPU's share of total latency in the processing stage is rising sharply — a Georgia Tech / Intel collaboration titled "Understanding, Characterizing, and Optimizing Agentic AI Execution: A CPU-Centric Perspective."

Paper abstract: Agentic AI services transform monolithic LLM-based inference into autonomous problem solvers that can plan, invoke tools, reason, and adapt dynamically to their environment. Because of their diverse task execution requirements, such services rely heavily on heterogeneous CPU-GPU systems, where most of the external tools responsible for agentic capabilities either run on the CPU or are orchestrated by it. To understand its role more deeply, this paper characterizes and analyzes the system bottlenecks introduced by agentic AI workloads from a largely overlooked, CPU-centric perspective.

We first present a complete temporal characterization of agentic AI execution and select representative workloads that capture its algorithmic diversity. We then perform runtime characterization of these representative workloads, analyzing end-to-end latency and throughput on two different hardware systems to isolate their respective architectural bottlenecks. Building on these bottleneck insights, we finally propose two scheduling optimizations — CPU-aware overlapped micro-batching and hybrid agent scheduling — applied to homogeneous and heterogeneous agentic workloads respectively.

Concretely, these methods aim to improve CPU-GPU concurrent utilization while reducing skewed resource allocation in heterogeneous execution. Experimental evaluation on the two hardware systems demonstrates the effectiveness of CPU-aware overlapped micro-batching: up to 1.7x lower P50 latency for standalone homogeneous workload execution, and up to 3.9x/1.8x lower serving/total latency under homogeneous open-loop load. Additionally, for heterogeneous open-loop load, hybrid agent scheduling reduces total latency for minority request types by up to 2.37x/2.49x at the P50/P90 percentiles.
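The core of "CPU-aware overlapped micro-batching" can be sketched minimally: while the GPU runs micro-batch i, the CPU prepares micro-batch i+1, hiding CPU-side tool handling behind GPU compute. This is a toy sketch with simulated stage times, not the paper's implementation:

```python
import time
from concurrent.futures import ThreadPoolExecutor

CPU_S, GPU_S = 0.02, 0.03  # simulated per-micro-batch stage times (seconds)

def cpu_stage(i):
    time.sleep(CPU_S)   # stands in for tool handling / pre-processing
    return i

def gpu_stage(i):
    time.sleep(GPU_S)   # stands in for model execution on the GPU
    return i

def serial(n):
    # Baseline: CPU stage and "GPU" stage strictly alternate.
    for i in range(n):
        gpu_stage(cpu_stage(i))

def overlapped(n):
    # Pipeline: a worker thread runs cpu_stage(i+1) while the main
    # thread runs gpu_stage(i), so CPU time is hidden behind GPU time.
    with ThreadPoolExecutor(max_workers=1) as cpu:
        nxt = cpu.submit(cpu_stage, 0)
        for i in range(n):
            ready = nxt.result()
            if i + 1 < n:
                nxt = cpu.submit(cpu_stage, i + 1)  # overlaps with gpu_stage below
            gpu_stage(ready)

for fn in (serial, overlapped):
    t0 = time.perf_counter()
    fn(30)
    print(f"{fn.__name__}: {time.perf_counter() - t0:.2f}s")
```

With these stage times, serial costs roughly n·(CPU_S + GPU_S) while the overlapped pipeline costs roughly CPU_S + n·GPU_S — the same shape of win the paper reports, though its real scheduler handles batching and heterogeneity far beyond this sketch.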
qinbafrank@qinbafrank

@mylifcc arxiv.org/pdf/2511.00739 — a report co-written by Georgia Tech and Intel studies the CPU's share of total latency in the tool-handling stage. As for the second point, I haven't seen relevant data either.

Curly Chen retweeted
Macro_Lin|市场观察员
Intel jumped after hours, and this earnings call is well worth watching. How does CEO Lip-Bu Tan say Intel gets back to the table in the AI era?

He puts the emphasis in two places. First, as AI moves from training to inference and agents, the market will rediscover the value of the CPU. Second, Intel Foundry's realistic breakthrough may come first in advanced packaging, rather than colliding head-on with TSMC in leading-edge wafer fabrication from day one.

For the past two years, the market's understanding of AI compute has been ruled by one formula: GPU + HBM + CoWoS. For large-model training, that is indeed true. But AI won't stay in the training phase forever. Moving toward inference, agentic AI, and enterprise deployment means connecting to databases; handling permissions, storage, networking, and security; and coordinating with all kinds of internal tools — far more complex than pure matrix multiplication. The GPU accelerates; the CPU organizes the whole system. The CPU-to-GPU ratio may be 1:8 in a training cluster and 1:4 in an inference cluster, and as agents proliferate and enterprise systems integrate more deeply, the CPU's weighting could keep rising.

That is why the market is so sensitive to DCAI revenue growing 22% YoY this quarter. The spillover from AI capex has spread from GPUs to CPU server platforms. The CPU isn't sexy in the AI era, but it may still be the most fundamental, hardest-to-bypass asset.

The second underrated point is advanced packaging. Whenever Intel Foundry comes up, the first reaction is to ask whether 18A works, whether 14A has customers, how yields look. Those questions matter, but leading-edge foundry competition comes down to PDKs, EDA, IP, yield, capacity, lead times, and customer trust — things that can't be fixed in a single quarter. The more realistic entry point may lie elsewhere.

My judgment: the tightest bottleneck in high-end AI chips has already spread from leading-edge wafers to the entire back end. What actually gates delivery is HBM, advanced packaging, high-end substrates, and system-level integration capability. Without enough 2.5D/3D packaging capacity, even finished front-end wafers can't become shippable AI accelerator modules. Competition in AI chips is shifting from "who can design a stronger chip" to "who can package and deliver chips at scale."

Lip-Bu Tan putting advanced packaging on the same tier as the x86 CPU and the manufacturing network is, I think, correct. Intel has been doing EMIB and Foveros for a long time, with real volume-production experience. The market hasn't been willing to price those assets, because everyone was fixated on process lag and Foundry cash burn. But now that AI chips have entered the chiplet and heterogeneous-integration era, packaging has gone from the tail of the manufacturing chain to part of the system architecture. When a customer's compute die is made at TSMC, the HBM comes from SK Hynix, and the I/O die comes from another node, who packages it all together efficiently? That is where Intel has a chance to cut in.

So Intel Foundry's commercialization path may run on two parallel tracks. Wafer foundry determines long-term technical credibility — 18A and 14A need weighty external customers. But advanced packaging may become the earlier entry point for reaching AI customers and generating revenue.

A large part of the after-hours move should be credited to Lip-Bu Tan's remarks on the call, because he spoke to what the market wanted to hear: as AI enters the inference and agent phase, the CPU may matter again; as AI chips enter the heterogeneous-integration phase, advanced packaging may become scarce again.
Curly Chen retweeted
qinbafrank@qinbafrank·
What Lip-Bu Tan said on Intel's earnings call about the CPU returning to the core of AI:

1. The semiconductor industry TAM is approaching $1 trillion, and Intel is well positioned with three major assets: x86 CPUs, advanced packaging, and a vast manufacturing network.
2. AI is shifting from training toward distributed inference, reinforcement learning, agentic AI, physical AI, robotics, and edge AI. The CPU is re-establishing itself as an indispensable foundation of the AI era, now serving as the orchestration layer and critical control plane of the entire AI stack.
3. The CPU is again the "orchestration layer and critical control plane" of the AI stack: customers are deploying server CPUs alongside accelerators, and the ratio is swinging back toward CPUs. It used to be 1 CPU per 8 GPUs; now it is 1:4, and trending better. This isn't the company's wishful thinking — it's real customer feedback.
4. Accelerators still matter (e.g., the heterogeneous partnership with SambaNova), but the backbone of production-grade AI compute is a CPU-anchored architecture — a structural long-term positive for the x86 ecosystem and Intel.
5. Server CPU unit demand is expected to grow double digits this year and next.
qinbafrank@qinbafrank

As generative AI migrates to agentic AI, a new chokepoint has emerged — this time it's the CPU. Previous discussion of compute scarcity centered on GPUs, HBM, optical modules, and power; CPUs got relatively little attention. CPU tightness has actually been rumored for a while: the core driver behind Intel's and AMD's recent price action is emerging CPU scarcity, and even Lenovo Group in Hong Kong — hardly a market favorite in the past — has traded strongly over the past two weeks.

1. Why does the CPU's share expand in the agentic AI era?

Traditional AI (mainly large-model training/inference) depends heavily on GPUs, because the Transformer's core is parallel matrix math, which GPUs handle with high throughput. The CPU is mostly a helper: data routing, memory compression, GPU scheduling. Hence the very low CPU:GPU ratios in data centers (typically 1:4 to 1:8 — one CPU managing eight GPUs). CPU utilization is low; it plays a supporting role.

Agentic AI is completely different. It isn't a single Q&A; it's an autonomous multi-step loop (Planning → Tool Use → Act → Observe → Reflect → Iterate), involving:
1) Orchestration: scheduling subtasks, multi-agent collaboration, branching logic, retry mechanisms.
2) Tool calls: web search, API calls, code execution, database queries, vector retrieval (RAG), file handling, and so on.
3) Other CPU-intensive tasks: context management, KV cache handling, reinforcement learning (RL) simulation and evaluation, data pre-/post-processing.

These tasks are highly serial, I/O-bound, and branch-heavy — things GPUs are bad at (they may even sit idle). Research shows the tool-handling stage on the CPU can account for 50%-90.6% of total latency (the GPU waiting on the CPU), and the CPU's share of dynamic energy in agentic workflows can reach 44%, 3-4x higher than in traditional AI.

Simply put: agentic AI hands "thinking" to the GPU but "doing and coordinating" to the CPU. The CPU has gone from butler to conductor, and must scale up substantially for the whole system to run efficiently. That is the core driver of the CPU's expanding share (a view shared by Intel, AMD, Arm, TrendForce, and others).

2. Real-world evidence that the CPU is the new shortage

In Q1 this year, Intel/AMD server CPU lead times already stretched to 6-12 weeks, some SKUs essentially sold out, and prices rose over 10%. The vendors themselves said "demand far exceeded expectations." It isn't that capacity is insufficient — agentic AI turned the CPU from "optional" into a "must-fully-provision" conductor. Aside from power, CPUs are now the most serious chokepoint for data center projects. Traditional x86 (Intel/AMD) combines high power draw with tight capacity, blowing out the supply chain.

3. How big will the CPU gap be?

The industry consensus is that the CPU:GPU ratio narrows significantly and CPU demand rises sharply: from the traditional 1:4-1:8 toward 1:1-1:2 (in some scenarios even 1.4:1 — more CPUs than GPUs). Arm has estimated that CPU cores needed per GW of compute will surge from 30 million to 120 million (4x growth). In agentic workflows, racks and clusters may shift from GPU-dominated to a more balanced mix — possibly with dedicated CPU racks for agentic orchestration — and AMD's and NVIDIA's next-generation platforms are already being designed around 1:2-1:4. This is a genuine inflection in CPU demand, a real hardware rebuild.

4. In particular: will Arm server CPUs benefit more?

What agentic AI needs most is high core counts, low power, and stable serial processing. Arm is natively many-core and scalable, with leading perf/watt: Arm's AGI CPU (136 cores, only 300W TDP) draws 40%+ less power than comparable x86 parts and doubles per-rack performance. An air-cooled rack can fit 8,000+ cores; liquid cooling pushes past 40,000, neatly solving the data center "power wall."

Even bigger is the ecosystem shift: AWS Graviton, Google Axion, and Microsoft Cobalt are long-standing in-house Arm designs — the cloud giants are collectively de-x86-ing. In March, Arm entered the fray itself with its AGI CPU (its first mass-produced chip), with Meta, OpenAI, and Cerebras as launch partners and Lenovo and Supermicro as OEMs. Counterpoint forecasts Arm's share of AI ASIC server CPUs rising from 25% in 2025 to 90% in 2029. Arm itself says this wave could take its data center CPU TAM from $3B in royalties to $100B+, with server CPU revenue likely overtaking phones within a few years as its biggest growth engine.

Watch next week's and early May's Intel and AMD earnings calls for changes in actual CPU shipments and real CPU pricing — that will show how tight things really are.

5. Which companies benefit from the CPU shortage?

A rundown of beneficiaries to track:

US stocks (most core):
- Intel (INTC): still the incumbent in server CPUs. A shortage wave lifts margins on its back-catalog parts, and its Gaudi + Xeon combination sees strong demand on the agentic inference side.
- AMD (AMD): in the agentic AI server market, EPYC's many-core advantage and price/performance keep lifting its share at cloud providers — the first choice under the balanced GPU+CPU configuration trend.
- Arm Holdings (ARM): more and more cloud providers (Amazon, Microsoft, Google) are building their own Arm-based CPUs. Whoever wins, as long as agent demand pushes up CPU core counts, Arm's licensing fees rise.

Hong Kong stocks (manufacturing and distribution chokepoints):
- SMIC (0981): restricted at the leading edge, but overflow demand for non-core logic control chips (auxiliary chips supporting CPU operation) and mid-range CPUs will significantly lift its utilization.
- Lenovo Group (0992): the world's largest server and PC maker. Early in a shortage wave, large vendors with strong supply chain management and inventory can raise prices, guarantee supply, and grab more government/enterprise share.

A-shares (domestic substitution and supporting chain):
- Hygon (688041): the leader in domestic x86 server CPUs. In the agentic AI era, with the best architectural compatibility with the global ecosystem, Hygon is the first-choice substitute when domestic compute centers plug CPU shortfalls.
- Loongson (688047): the flagship of independent-architecture CPUs. Benefits as domestic self-reliance requirements strengthen, in party/government and critical-infrastructure agent applications.
- Shennan Circuits (002916) / WUS Printed Circuit (002463): supporting beneficiaries. Rising CPU core counts and shifting GPU+CPU ratios demand more complex PCBs and package substrates; these companies are key global suppliers of high-end server PCBs.
- Montage Technology (688008): the leader in memory interface chips. More CPUs means more DIMMs. The agent era demands extreme memory bandwidth; its MRDIMM and memory interface chips are prerequisites for CPU performance.

The core investment logic is two points:
1) Volume and price rising together: CPU vendors (AMD, Intel, Arm, Hygon) benefit most directly.
2) Shovel sellers: because agents need high bandwidth, demand for memory support (Montage) and advanced packaging/substrates (Shennan) may be even steadier than for CPUs themselves.
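The agentic loop in point 1 (Planning → Tool Use → Act → Observe → Reflect → Iterate) can be sketched as CPU-side orchestration wrapped around a GPU inference call. A toy sketch only: `llm` is a hypothetical stand-in for a GPU-backed model call, and the tool registry is illustrative, not a real API:

```python
# Minimal agent loop: the "llm" call stands in for GPU inference;
# parsing, tool dispatch, and context handling all run on the CPU.
def llm(prompt):
    # Hypothetical stand-in for a GPU-backed model: asks for a tool call
    # first, then finishes once the observation appears in the context.
    return "CALC 6*7" if "CALC" not in prompt else "DONE 42"

tools = {"CALC": lambda expr: str(eval(expr))}  # toy CPU-side tool registry

def run_agent(task, max_steps=5):
    context = task
    for _ in range(max_steps):                  # Iterate
        action = llm(context)                   # Plan (GPU)
        if action.startswith("DONE"):
            return action.split(maxsplit=1)[1]
        name, arg = action.split(maxsplit=1)    # Tool Use: parse & dispatch (CPU)
        observation = tools[name](arg)          # Act (CPU / I/O)
        context += f"\n{name}({arg}) -> {observation}"  # Observe / Reflect (CPU)
    return None

print(run_agent("what is 6*7?"))  # -> 42
```

Notice that the GPU-bound `llm` call is one line; everything else in the loop — branching, string handling, tool execution, context growth — is exactly the serial, I/O-bound CPU work the thread argues is expanding.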
