bryan @BryzonX
Buy risk assets & chill // Fintwit’s top honesty broker // Trading journal
2.7K posts · Joined November 2021 · 197 Following · 882 Followers

Pinned Tweet
bryan @BryzonX
Walk with me...
$MU = Memory Shortage
$AMD = CPU Shortage
bryan @BryzonX
@christianoboria As of the last quarter, 50% of total revenue now comes from integrated memory. Management expects this to grow to 65%-76% by the end of the year.
bryan @BryzonX
Some of you may have missed this nugget from the recent $PENG earnings call 🚨

Penguin is now working on PHOTONIC MEMORY.

Their current KV cache uses high-speed copper, which gets the job done today, but copper can't handle the bandwidth needed for agentic AI without melting or slowing down. As we move into the agentic AI era, workloads demand more data, more compute, and more power. The only way to scale memory in data centers is with the speed of light.

If you didn’t know, PENG was an early investor and engineering partner of Celestial, the pioneers of photonic fabric. Penguin has been working with Celestial since its early startup days, meaning they have a 24-month head start on understanding how to cool, power, and manage photonic signals.

One of my favorite underrated dynamics here: Penguin will not need to dilute shareholders to fund this. Since Celestial was acquired by $MRVL, Penguin has a war chest of cash from that investment.

So what is Penguin's role in all of this? Penguin is building the Optical Memory Appliance (OMA), the actual physical rack that houses this photonic tech. According to their April 2026 roadmap, they are moving toward a commercial launch that will redefine what "memory capacity" means. Their current KV cache offers 11TB of memory per cluster, but their photonic cache will immediately unlock 1,000+TB of memory, completely solving the memory problem in data centers (!)

$PENG is currently sampling these with "key hyperscalers" (likely $META and $AMZN), with final specifications expected by early 2027. If Marvell’s Photonic Fabric becomes the industry standard (which the acquisition suggests it will), every AI data center in the world will need a Penguin-built appliance to run it.

Management noted that their Integrated Memory segment is seeing "favorable pricing dynamics." This is a polite way of saying they are charging a premium for their early photonic and CXL expertise.

Penguin is leading the way in taking the memory wall head on. You don’t own enough $PENG.
bryan @BryzonX
We are going to see a huge shortage of 1.6T DSP chips soon
bryan @BryzonX
@karts03 I’m in from $23, but I’m still holding because today's numbers are still from 800G. 1.6T is ramping in the back half of '26 into '27. Plus they have another product, Panther 5, on the way. I’m very bullish.
ranch @karts03
@BryzonX Thanks! Just checked it out, but I think it has already caught the hype; it's up more than 300% over the past few months. So I think this is already priced in, or do you have another thesis on the price action?
Unusual Intelligence @DataCenterMilly
@BryzonX True true. MXL has my attention. I'm browsing the Rushmore page on their website. Thanks for your posts.
bryan retweeted
bryan @BryzonX
$MXL is a HUGE beneficiary of the adoption of HBF.

The infrastructure for regular SSDs in data centers is currently not equipped to handle the DRAM-like speeds with NAND-like capacity that HBF provides. If a CPU has to manage data reduction for an HBF drive, it spends 100% of its power just moving data, leaving zero capacity to actually run the inference model.

Panther 5 acts as the CPU’s assistant, offloading these power-intensive tasks. It allows the data center to actually use the full speed of HBF, letting the CPU run the model at max efficiency. As of TODAY, there is no other viable way to run HBF at full speed without Panther 5: doing it with a CPU alone requires buying 3x more CPUs to handle the same workload. Hence why $AMD is piloting Panther 5 for their server racks.

DPUs are powerful but very expensive and power hungry, and they are designed for networking, not specialized storage compression.

I’m telling you folks, $MXL was rerated for a reason.
Jukan @jukan05

"The Next Bottleneck After HBM Is HBF"... A Computing Pioneer's Prediction

"I have been consistently paying close attention to High Bandwidth Flash (HBF). I'm also collaborating with semiconductor companies on this. HBF is highly likely to stand at the center of the next bottleneck — a surge in demand."

David Patterson, professor at UC Berkeley, Turing Award laureate, and widely recognized as the architect of RISC (Reduced Instruction Set Computing — an approach that simplifies instructions to improve processing efficiency), made these remarks on April 30 (local time) when he met with reporters in San Francisco immediately after delivering a keynote at the Dreamy Next event.

Asked about what comes after HBM (High Bandwidth Memory), which is currently in a supply-constrained bottleneck, Professor Patterson answered that HBF will emerge as the next focus. Specifically, he said, "Although a number of technical challenges still remain, the HBF being developed by companies such as SK hynix and SanDisk is a meaningful alternative in that it can deliver large capacity with low power consumption," adding, "Going forward, how efficiently data can be stored and delivered will become the critical variable."

This past March, SK hynix announced that it had joined hands with U.S. flash memory company SanDisk to drive the global standardization of HBF. Unlike HBM, which stacks DRAM, HBF is built by stacking NAND flash — a non-volatile memory. Their roles are also distinct. While HBM serves as a fast computation aid, HBF is focused on storing the vast amounts of data that AI processes at high capacity.

HBF is drawing attention as the AI inference market grows. The AI market is broadly divided into learning (training) and inference. Training is the process of feeding massive amounts of data to teach an AI model. Inference is the stage in which results are derived based on the trained data. In inference AI, the ability to continuously store and retrieve vast amounts of intermediate data — such as prior conversations, judgment outcomes, and task context — is crucial. This is because AI carries out reasoning by remembering context and building upon it.

The problem is that all of this data is difficult to fit into HBM. Since HBM is optimized for handling data used immediately, its capacity itself is inherently limited. Moreover, given its high price, processing the enormous amounts of context data generated during inference using HBM alone would impose significant cost burdens. As a result, an environment has formed in which both HBM and HBF are needed simultaneously — a kind of division of labor.

Domestic experts in Korea also anticipate that the importance of HBF will grow going forward. At an HBF research and technology development strategy briefing held this past February, Kim Jung-ho, professor in the School of Electrical and Electronic Engineering at KAIST, stated, "If the central processing unit (CPU) was the core in the PC era and low-power technology was the core in the smartphone era, memory will be the core of the AI era," adding, "What determines speed is HBM, and what determines capacity is HBF." He further predicted, "From 2038 onward, demand for HBF will surpass that of HBM."

John Galt @AtlasShrug1
@BryzonX This stock is so undervalued it’s obscene!
bryan @BryzonX
@SRxTrades Agreed! This thing is about to start running
Blade Knight @BladeKn68770654
@aleabitoreddit Just added this decorative item to my wall. Keep up the great work man! You're my inspiration 🫡.
Serenity @aleabitoreddit
Just in case you’re wondering why I’m so bullish on CPO. Think $SIVE (Lasers), Shunsin (Packaging), MSSCorps (Yields), Win Semi / $TSEM (Foundry).

“The CPO market is projected to grow sharply at a 142% CAGR from 2026 to 2030 (excl. ELS).”

“The scale-up CPO segment is projected to surpass scale-out applications before 2030 and become the dominant market.”

You have almost parabolic growth over the next few years, with many players like Sivers having no material exposure to previous 800G pluggable optics while being the bleeding-edge leaders of CPO as the laser supplier. This is one of the best and earliest opportunities of the next optical supercycle, for an architecture driven by $NVDA and $AVGO.
ranch @karts03
@BryzonX which company do you like here?
bryan @BryzonX
@rubicon59 Mass adoption isn’t expected until 2028-2030, which is an eternity in this space.
rubicon59 @rubicon59
@BryzonX Doesn't CPO adoption reduce DSP demand?
bryan @BryzonX
@NotA_Bull $AMKR. I would make this a top 3 position if I hadn't missed it.
Evan | Investments @NotA_Bull
What is the one stock you’ve been watching for months but still haven't had the guts to buy?
chris bee @chrisbeeSA
@BryzonX It still seems nothing concrete was said?
bryan @BryzonX
@marrkyss Definitely $AMD. I have a large position.
yep @marrkyss
@BryzonX Which earnings do you like the most from this week?
bryan @BryzonX
@Philip_pan2008 I don’t have TPs (target prices); I only sell when my thesis is no longer valid.
chamono @chamono95741942
@BryzonX Do you have any basis for this?