bryan (@BryzonX)
2.6K posts
Buy risk assets & chill // Fintwit’s top honesty broker // Trading journal
Joined November 2021
197 Following · 786 Followers

Pinned tweet
bryan (@BryzonX):
Walk with me... $MU = Memory Shortage, $AMD = CPU Shortage
[2 images]
0 · 0 · 6 · 2.8K
bryan (@BryzonX):
@Philip_pan2008 I don’t have TPs; I only sell when my thesis is no longer valid.
0 · 0 · 0 · 4
chamono (@chamono95741942):
@BryzonX Do you have any basis for this?
1 · 0 · 0 · 219
bryan (@BryzonX):
We are going to see a huge shortage of 1.6T DSP chips soon.
3 · 1 · 23 · 2.3K
bryan (@BryzonX):
Some of you may have missed this nugget from the recent $PENG earnings call 🚨

Penguin is now working on PHOTONIC MEMORY.

Their current KV cache uses high-speed copper, which gets the job done now; however, copper can't handle the bandwidth needed for "Agentic AI" without melting or slowing down. As we move toward the agentic AI era, workloads require more data, more compute, and more power. The only way to scale memory in data centers is with the speed of light.

If you didn’t know, PENG was an early investor and engineering partner with Celestial, the pioneers in photonic fabric. Penguin has been working with Celestial since its early startup days, meaning they have a 24-month head start on understanding how to cool, power, and manage photonic signals.

One of my favorite underrated dynamics here: Penguin will not need to dilute shareholders to fund this. Since Celestial was acquired by $MRVL, Penguin has a war chest of cash from that investment to fund it.

So what is Penguin's role in all of this? Penguin is the one building the Optical Memory Appliance (OMA), the actual physical rack that houses this photonic tech. According to their April 2026 roadmap, they are moving toward a commercial launch that will redefine what "memory capacity" means. Their current KV cache offers 11TB of memory per cluster, but with their photonic cache they will immediately unlock 1,000+TB of memory, completely solving the memory problem in data centers (!)

$PENG is currently sampling these with "key hyperscalers" (likely $META and $AMZN), with final specifications expected by early 2027. If Marvell’s Photonic Fabric becomes the industry standard (which the acquisition suggests it will), every AI data center in the world will need a Penguin-built appliance to run it.

Management noted that their Integrated Memory segment is seeing "favorable pricing dynamics." This is a polite way of saying they are charging a premium for their early photonic and CXL expertise. Penguin is leading the way in taking the memory wall head on. You don’t own enough $PENG.
[image]
1 · 1 · 12 · 700
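For a sense of scale on those KV-cache numbers: a transformer's KV cache grows linearly with layer count, context length, and concurrent sessions. A back-of-the-envelope sketch (the model dimensions below are illustrative, not anything Penguin has disclosed):

```python
def kv_cache_bytes(layers, kv_heads, head_dim, context_len, sessions=1, dtype_bytes=2):
    """Approximate KV-cache footprint: one K and one V tensor (hence the 2x) per layer."""
    return 2 * layers * kv_heads * head_dim * context_len * sessions * dtype_bytes

# Illustrative 70B-class model with grouped-query attention and an fp16 cache
per_session = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, context_len=128_000)
print(f"~{per_session / 1e9:.0f} GB per 128K-token session")
print(f"~{11e12 / per_session:.0f} sessions fit in an 11 TB cluster cache")
```

Roughly 42 GB per long-context session under these assumptions, so an 11 TB cache holds only a couple hundred concurrent sessions, which is why claims of 1,000+TB appliances are pitched at the "memory wall."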
bryan retweeted
Puru Saxena (@saxena_puru):
AI inference demand is set to surge over the next 4 years. By 2030, inference will surpass training to become the dominant workload in AI data centres, representing more than half of all AI compute. Our portfolio is well positioned for this opportunity. alphatarget.com
[image]
15 · 22 · 159 · 36.3K
bryan (@BryzonX):
@ParadisLabs Moving averages & support/resistance are all you need. Everything else is fairy dust.
2 · 0 · 1 · 76
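The moving-average approach bryan leans on can be sketched in a few lines of plain Python (the span lengths and price series below are illustrative; real use would feed in downloaded closing prices):

```python
def ema(prices, span):
    """Exponential moving average with smoothing factor alpha = 2 / (span + 1)."""
    alpha = 2 / (span + 1)
    out = [prices[0]]
    for price in prices[1:]:
        out.append(alpha * price + (1 - alpha) * out[-1])
    return out

def golden_crosses(prices, fast=50, slow=200):
    """Indices where the fast EMA crosses above the slow EMA (a classic buy signal)."""
    f, s = ema(prices, fast), ema(prices, slow)
    return [i for i in range(1, len(prices)) if f[i] > s[i] and f[i - 1] <= s[i - 1]]

# Toy series: a downtrend followed by a recovery produces a crossover during the rise
prices = list(range(300, 200, -1)) + list(range(200, 400))
print(golden_crosses(prices, fast=20, slow=80))
```

The counter-argument in the quoted thread below is that these signals only summarize past prices; the sketch makes that concrete, since `ema` consumes nothing but the historical series.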
Paradis Labs (@ParadisLabs):
Technical analysis is literally just a load of fairy dust sprinkled on top of delusion:
- Technical indicators analyse historic price action.
- But stock prices reflect forward information/fundamentals...
A simple analogy: TA shows you yesterday's weather report, but what anyone should care about is whether it's sunny or raining tomorrow. The market is pricing in expected earnings six months from now, product launches, interest rate cuts, etc. It's not reacting to some random squiggly lines you drew on your TradingView chart. Literally the only "technical indicator" I'll look at from time to time is trading volume, to gauge sentiment.
[image]
10xHustler (@Prime100Op29106):
@ParadisLabs How does one buy when it's miles away from the 200 EMA and 50 EMA?
79 · 10 · 243 · 52.2K
bryan (@BryzonX):
@spellcrypto This is most definitely good. It helps with institutional exposure and serves as a bit of a stamp of approval. The rebalance is a one-off.
0 · 0 · 0 · 141
spellcrypto (@spellcrypto):
@BryzonX That’s not good. Means they have to rebalance.
1 · 0 · 0 · 185
bryan (@BryzonX):
Did you know $MXL is the top holding in Invesco’s Semiconductor ETF $PSI? A $6B company ranked over the likes of giants $AMD, $AVGO, $NVDA, and $MU 🤯
[2 images]
2 · 1 · 48 · 9.3K
bryan (@BryzonX):
@Neon68 It is up a lot these past 2 weeks; however, this just means the move is justified. It also means they have a large runway of growth ahead, especially considering this is only one of their products. They still have Panther 5, which will also be scaling fast over the coming years.
0 · 0 · 2 · 236
Neon68 (@Neon68):
@BryzonX You mean this is something big that no one sees right now? And it starts now? So something like "all in"?
1 · 0 · 1 · 316
bryan (@BryzonX):
Wow… so this is huge. $MXL has locked up the only guaranteed 1.6T silicon supply for the next 18 months.

The Rushmore 1.6T DSP is made on the 4nm node. If a hyperscaler wants 1.6T optics for a cluster they are building in Q1 '27, and they haven't already secured a spot in the TSMC two-year backlog, they literally cannot get chips from Broadcom or Marvell. Especially not from Broadcom, as their chips require CoWoS, which is even more backlogged.

This is a great indicator that demand from hyperscalers is accelerating: they are essentially locking up MaxLinear’s capacity because it is the only guaranteed supply of 1.6T silicon available until late 2028. MXL is sitting on a pre-paid, high-yield production line at Samsung. Management has stated they already secured $210M in long-term purchase commitments specifically for wafer supply and assembly services. They have also been aggressively building die-bank inventory, with inventory levels growing slightly this quarter. You wouldn’t be hoarding supply if the demand wasn’t there.

This company is only guiding for $150M in optical revenue for '26, with data center revenue starting to ramp in the back half of '26. Management is essentially guiding as if growth stops in July, which contradicts their own statement about "accelerating ramps through 2027". I’m starting to think this is a MAJOR sandbag from management 😳
Jukan (@jukan05):
Samsung Foundry's 4nm "Fully Booked" Through Next Year… Set for H2 Profitability Turnaround

Samsung Electronics' foundry flagship 4nm line has entered "fully booked" status through next year. The result reflects the convergence of the HBM4 volume ramp and orders from global Big Tech. Industry observers expect the chronically loss-making foundry division to fire its first signal of recovery as early as the second half of this year.

According to the semiconductor industry on the 3rd, Samsung Electronics' 4nm foundry process has recently secured order volumes extending into next year's production. A semiconductor industry source who requested anonymity said, "The 4nm process has recently demonstrated better-than-expected stability among global customers, and demand is exploding," adding, "The line is running so tightly that it is effectively impossible to take additional orders through next year."

The core driver of these orders is HBM4. Samsung Electronics produces the base die mounted in its HBM4 on the 4nm foundry process. As the company begins full-scale HBM4 supply to AI accelerator vendors such as NVIDIA and AMD, the utilization rate of the supporting 4nm foundry line has likewise reached its ceiling.

Demand for the 4nm process is not confined to memory. Global fabless companies that previously relied on TSMC are now knocking on Samsung's door, factoring in supply-chain diversification and cost-effectiveness. NVIDIA and Google currently appear on the customer roster for Samsung's 4nm node. With improved yields and verified power efficiency (performance per watt), the "love calls" from Big Tech continue to come in.

With high-value-added HBM4 and global Big Tech volumes filling the 4nm line, expectations for an earnings turnaround are also rising. The 4nm process has already completed its large-scale investment phase, easing the depreciation burden. The structure is one in which profitability rises sharply as utilization is maximized.

A source at a Samsung Foundry partner company assessed, "Thanks to the stabilization of the 4nm process and the strong demand anchor of HBM4, Samsung Foundry is expected to swing to profit as early as the second half of this year, or at the latest in the first half of next year," adding, "After a prolonged slump, it has clearly entered a recovery phase."

That said, the industry points to the new fab under construction in Taylor, Texas, as the biggest variable for future earnings. The substantial initial operating costs and labor expenses incurred during fab completion and ramp-up preparation could swing reported earnings depending on how they are accounted for on the books. Another semiconductor industry source said, "Since the Taylor fab currently being built sits on U.S. soil, whether the related performance is booked under the U.S. subsidiary (DSA) or consolidated into the domestic foundry results remains to be seen."
2 · 11 · 110 · 25.4K
bryan (@BryzonX):
@chamono95741942 Non-issue; the CEO bought $1.4M worth, or 334k shares, in March.
3 · 0 · 0 · 354
chamono (@chamono95741942):
@BryzonX Why did they sell the stock? "The proposed sale shows the sale of 15,050 shares of MXL stock (MXL)."
1 · 0 · 0 · 404
bryan (@BryzonX):
Oh boy, I have a new ticker I will be covering soon. A new thematic sector that is being massively overlooked but is going through a massive investment cycle. Once again, we will be early.
1 · 0 · 22 · 1.1K
bryan retweeted
Vikram Sekar (@vikramskr):
DeepSeek v4's DualPath makes the SSD the center of KV-cache storage and drives inference cost massively down. This is something I called out in a Jan 2026 post, and it is worth revisiting.

The main problem for long-context and agentic workloads has always been where to store the KV cache:
- HBM just doesn't have enough capacity.
- DRAM is better, but not great cost/bit.
Both are massively supply-constrained. The only choice left is SSDs.

Nvidia's context storage systems like CTX allow massive amounts of KV-cache storage on SSDs, but the latency is much higher, somewhere between local SSDs and networked storage. DualPath overcomes this by not saturating the NIC during the prefill stage: it provides the KV cache directly to the decode stage, which then transfers it to the prefill stage via RDMA. This increases inference throughput by nearly 2x.

So what does all this have to do with token costs for inference? It drives the cost of cache hits (the case where the needed KV cache is readily available and does not need recomputation) to ZERO. DeepSeek v4 is proof of this: just look at the cost per cache hit go to near zero, and DeepSeek inference often hits north of 95% in terms of cache hits. This makes inference really cheap, and highly reliant on NAND SSDs.

There is another caveat. Because Chinese labs are cut off from American silicon, there is a growing divide in which Chinese models run only on Chinese silicon. This is a problem, as people like Jensen and Gavin Baker have pointed out. At such low inference costs, architectural innovations on domestic hardware allow Chinese labs to pull way ahead of American labs.

Link to the full Substack post: open.substack.com/pub/viksnewsle…
[image]
2 · 15 · 97 · 9.7K
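The cost dynamic Sekar describes can be sanity-checked with simple arithmetic: the blended per-token input cost is a hit-rate-weighted average of the hit and miss prices. A minimal sketch (the dollar figures are made up for illustration, not DeepSeek's actual rates):

```python
def blended_cost(hit_rate, hit_cost, miss_cost):
    """Blended input cost per 1M tokens, given a KV-cache hit rate."""
    return hit_rate * hit_cost + (1 - hit_rate) * miss_cost

miss = 0.50  # cache miss: full prefill recomputation (illustrative $/1M tokens)
hit = 0.05   # cache hit: near-zero, just reads cached KV from storage

for rate in (0.50, 0.80, 0.95):
    print(f"{rate:.0%} hit rate -> ${blended_cost(rate, hit, miss):.4f}/M tokens")
```

At a 95% hit rate the blended cost sits close to the hit price, which is why pushing hit rates north of 95% (and making hits nearly free) collapses effective inference pricing.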