RJC

5.9K posts


@RJCcapital

monitoring the situation

New York, USA · Joined January 2025
382 Following · 23.7K Followers
Pinned Tweet
RJC @RJCcapital ·
Stop losses feel safe because they’re simple. But in a concentrated portfolio, they often force you to sell at the worst time, when volatility and fear are highest. If you’re going to be concentrated, the real discipline is thinking clearly, not mechanical exits. open.substack.com/pub/rjccapital…
14 replies · 15 reposts · 172 likes · 37.8K views
RJC @RJCcapital ·
Hmmm
2 replies · 0 reposts · 40 likes · 4.6K views
RJC @RJCcapital ·
$MU fwd P/E down to 5x
17 replies · 3 reposts · 245 likes · 23.3K views
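A forward P/E is just the current share price divided by estimated next-twelve-months EPS. A minimal sketch of that arithmetic, using hypothetical price and EPS figures (not taken from the tweet) that happen to produce a 5x multiple:

```python
def forward_pe(price: float, fwd_eps: float) -> float:
    """Forward P/E: current share price over estimated forward EPS."""
    if fwd_eps <= 0:
        raise ValueError("forward P/E is undefined for non-positive EPS")
    return price / fwd_eps

# Hypothetical illustration: a $250 stock with $50 of estimated
# forward EPS trades at a 5x forward multiple.
print(forward_pe(250.0, 50.0))  # 5.0
```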
RJC @RJCcapital ·
$MU Q2 FY2026 EARNINGS CALL SUMMARY:

CEO Sanjay Mehrotra opened the call by stating that AI has fundamentally recast memory as a defining "strategic asset," not just a commodity input. He noted that memory enables longer context windows, deeper reasoning chains, and multi-agent orchestration, and that as AI evolves, compute architectures will become increasingly memory-intensive. Micron is now executing Strategic Customer Agreements (SCAs) with multiyear volume and price commitments, replacing prior LTAs and providing significantly improved revenue visibility.

On HBM, Mehrotra confirmed that volume shipments of 36GB 12-high HBM4 have commenced for NVIDIA's Vera Rubin platform, with the company expecting to reach mature yields faster than HBM3E. The 16-high HBM4 stack has been sampled at 48GB per stack, a 33% capacity increase, and HBM4E development is underway on Micron's 1-gamma DRAM node with volume ramp targeted for calendar 2027. Management also noted that Micron has achieved its HBM market share target in line with overall DRAM share and will no longer break out HBM share on a quarterly basis going forward.

CFO Mark Murphy pushed back on the idea that margins should revert to historical averages, describing AI as a "transformational secular driver" that requires a rethinking of prior cyclical assumptions. He outlined the self-reinforcing demand loop: more memory drives down token cost, which accelerates AI deployment, which generates more tokens and agents, which in turn requires more memory. Murphy added that margins now reflect recognition that memory is significantly more valuable as an efficient way to monetize AI across both data center and edge. On the supply side, management described both DRAM and NAND as structurally constrained, driven by low inventory, declining bits-per-wafer on node transitions, rising HBM trade ratios, and long greenfield lead times, with tightness expected to persist beyond calendar 2026.

Data center's share of DRAM and NAND TAM is expected to exceed 50% of total industry for the first time in calendar 2026. Micron also highlighted emerging demand vectors including robotics, which management described as a potential 20-year growth opportunity with memory requirements comparable to L4 autonomous vehicles, and LP SOCAMM for data centers, where Micron sampled the industry's first 256GB product enabling 4x the memory content per CPU versus a year ago.
2 replies · 6 reposts · 71 likes · 7.5K views
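The HBM4 capacity figures in the summary are internally consistent: a 36GB 12-high stack implies 3GB per DRAM die, so a 16-high stack of the same die lands at 48GB, a 33% step-up. A quick check of that arithmetic:

```python
# Per-die capacity implied by the 36GB 12-high HBM4 stack
gb_per_die = 36 / 12              # 3.0 GB per DRAM die

# A 16-high stack of the same die matches the sampled 48GB product
stack_16_high = 16 * gb_per_die   # 48.0 GB
increase = stack_16_high / 36 - 1 # ~0.333, the quoted 33% capacity increase

print(f"16-high stack: {stack_16_high:.0f}GB (+{increase:.0%})")
```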
RJC @RJCcapital ·
@Davidson_august Please DM me your discord username and email and I will fix it
0 replies · 0 reposts · 0 likes · 272 views
Augustus Crypto @Davidson_august ·
@RJCcapital Woman I paid for your subscription I demand you put me into your discord!
1 reply · 0 reposts · 0 likes · 289 views
RJC @RJCcapital ·
Did you miss out on the biggest memory supercycle in history? The members in our Substack and discord took full advantage with $MU and $SNDK. You should have listened nephew
9 replies · 1 repost · 99 likes · 10.5K views
RJC @RJCcapital ·
Link to my analysis of $SNDK and $MU from back in November: substack.com/@rjccapitalgroup/note/p-180153158
0 replies · 0 reposts · 1 like · 2.1K views
RJC @RJCcapital ·
$MU RAISES CAPEX ABOVE $25B, ALLOCATED TOWARD FACTORY BUILDOUT TO SUPPORT AI MEMORY DEMAND - BLOOMBERG

Micron guided to capital expenditures exceeding $25B this fiscal year, reflecting accelerated investment in factory capacity to support surging demand for AI-driven memory products. The spending increase is tied to high-bandwidth memory used in AI training and inference, where supply constraints and elevated pricing are driving a shift toward higher-margin production.

For the most recent quarter, revenue reached $23.9B with earnings of $12.20 per share, both above expectations, reflecting strong pricing and tight supply conditions across memory markets. Recent performance underscores the strength of the cycle, with prior-quarter revenue nearly tripling year over year amid continued hyperscale data center expansion. Management also signaled a further step-up in 2027 capital spending, indicating sustained demand visibility but reinforcing the capital-intensive nature of scaling advanced memory production. bloomberg.com/news/articles/…
6 replies · 3 reposts · 49 likes · 6.6K views
RJC @RJCcapital ·
If you found this thread useful, I publish deeper analysis and earnings breakdowns on my Substack. Link to Substack: substack.com/@rjccapitalgroup
0 replies · 1 repost · 5 likes · 2.4K views
RJC @RJCcapital ·
5. Free Cash Flow (Quarterly): From burning $1.9B/quarter to generating $3B+ in FCF, even after $4.5B in capex. The cash flow inflection is funding Micron's US fab buildout without heavy dilution or excessive leverage.
1 reply · 0 reposts · 6 likes · 1.9K views
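Free cash flow in a chart like this is the standard definition: operating cash flow minus capex. A minimal sketch using the tweet's figures, where the roughly $7.5B of quarterly operating cash flow is backed out as an assumption (the tweet only states the FCF and capex sides):

```python
def free_cash_flow(operating_cf: float, capex: float) -> float:
    """Free cash flow = cash from operations minus capital expenditures."""
    return operating_cf - capex

# Implied by the tweet: $3B of FCF after $4.5B of capex requires
# roughly $7.5B of quarterly operating cash flow (assumed, not stated).
print(free_cash_flow(7.5, 4.5))  # 3.0
```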
RJC @RJCcapital ·
$MU reports earnings today after close. Here are 5 charts I am watching closely headed into earnings
4 replies · 1 repost · 60 likes · 7.6K views
RJC @RJCcapital ·
$LITE Key Management Comments From Nokia Strategic AI Networking Conference

Management emphasized that demand across its optical portfolio remains exceptionally strong, with the company effectively sold out through calendar 2027 even as it brings on incremental capacity. Importantly, this demand is not characterized as a short-term pull-forward or order-seeding dynamic, but rather a structural step-up tied to AI infrastructure buildouts and optical scaling requirements. The company noted it is still under-shipping relative to demand, suggesting continued tightness and visibility into sustained growth as deployments expand from initial scale-out into intra-rack and rack-level architectures.

From a growth perspective, management pointed to meaningful step-ups in content as architectures evolve. The transition from early scale-out deployments to scale-up within the rack is expected to drive a 3x to 4x increase in opportunity, with further expansion as lane counts increase materially inside the rack. This dynamic supports a multi-year demand cycle where both unit volumes and content per system increase simultaneously, reinforcing confidence in continued revenue acceleration into 2027.

On profitability, Lumentum outlined a clear path to approximately 40% non-GAAP operating margins at a $2B quarterly revenue run rate. This is positioned as operating margin, not EBITDA, underscoring the degree of operating leverage embedded in the model. Margin expansion is expected to be driven by scale efficiencies, improved mix toward higher-value indium phosphide-based products, and better absorption of fixed costs as capacity ramps.

In terms of near-term milestones, management indicated it expects to reach a $1.25B quarterly revenue level within 9 to 12 months, followed by progression to a $2B quarterly run rate over the subsequent 9 to 12 months, subject to typical supply chain variability. Notably, these targets are framed conservatively relative to demand, with internal execution focused on exceeding the low end of these ranges.

Critically, management highlighted that its multibillion-dollar agreement with NVIDIA, as well as the associated Greensboro indium phosphide fab required to support that program, are not included in the current financial framework. As a result, both the incremental revenue contribution and capacity expansion tied to this program represent upside beyond the company’s communicated 18-to-24-month outlook, reinforcing the potential for estimates to move higher as visibility improves.
1 reply · 5 reposts · 67 likes · 7.5K views
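The margin framework above implies roughly $0.8B of quarterly non-GAAP operating income at the $2B run rate, or about $3.2B annualized. A quick sketch of that arithmetic:

```python
def operating_income(quarterly_revenue_b: float, op_margin: float) -> float:
    """Operating income in $B for a given quarterly revenue and margin."""
    return quarterly_revenue_b * op_margin

# ~40% non-GAAP operating margin at a $2B quarterly run rate
quarterly = operating_income(2.0, 0.40)
print(f"${quarterly:.1f}B per quarter, ${quarterly * 4:.1f}B annualized")
```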
RJC @RJCcapital ·
Good write up

KawzInvests 🦑 @KawzInvests

Building a memory fab takes 4 years. Building a photonics fab takes 9 months. There is a MASSIVE difference between the build-out for photonics vs memory.

A memory fab is a precision lithography operation. You are packing billions of transistors at single-digit nanometer nodes. EUV tools alone take 12-18 months to procure and calibrate. The yield ramp after that takes years. The bottleneck is physics and it cannot be compressed.

A photonics fab is an INTEGRATION PROBLEM. You are building devices that manipulate light, not electrons. Indium phosphide. Optical waveguides. Alignment tolerances measured in nanometers of coupling efficiency, not transistor density. No EUV required.

The practical timeline difference: Samsung Electronics, $MU, SK Hynix: 3 to 5 years from groundbreak to meaningful output. The lithography learning curve is non-negotiable. $AAOI with an existing warehouse: 9 months. Not because construction is faster. Because they skip the construction problem entirely. Cleanroom retrofit, tool installation, and process bring-up run in parallel. Most companies do these sequentially. AOI does not.

$AAOI has a massive automation advantage. AOI runs internal testing systems at 20x the throughput of standard industry equipment. Their product platforms are standardized to the point where each new production line is not a new engineering problem, it is a deployment. When they enter a new facility, they are not figuring out the process. They are executing a template they have already optimized across years of production in Taiwan.

That is exactly what is happening with their new Texas facility. AOI is not building something new. They are replicating the same factory format, tooling layout, automation systems, and process templates that are already running and yielding in Taiwan. The institutional knowledge, the yield data, the calibration baselines: all of it transfers. A semiconductor company standing up a new node from scratch has none of that. AOI walks in with the answer key.

Vertical integration across lasers, PCBA, and final assembly means there is no external dependency introducing variance into yield. They own the entire feedback loop from wafer to finished transceiver. That matters because of what the real bottleneck actually is. Most people stop the analysis at fab timelines or InP supply. Both are real constraints. Neither is the hardest part.

The hardest part is thermal qualification. A transceiver operating inside a hyperscaler switch runs continuously. These switches need to operate at full load 24 hours a day for the unit economics to justify the infrastructure spend. If the switch is down, the compute behind it is idle. At the scale hyperscalers operate, idle compute is not an inconvenience, it is a direct hit to the return on billions of dollars of capex.

The failure mode that defines vendor selection is thermal. Transceivers generate heat. Heat degrades the laser. A degraded laser causes signal loss. Signal loss in a switch port takes that segment of the switching fabric offline. Hyperscalers do not tolerate partial switch failures, they replace the vendor. This is why qualification cycles are the longest stage of the entire ramp, not manufacturing. Hyperscalers test interoperability, sustained thermal performance, and reliability under continuous full load before committing volume. A vendor that cannot demonstrate 24/7 thermal stability does not get the contract, regardless of how fast they built the factory.

AOI's vertical integration is a direct solution to this problem. Because they control lasers, PCBA, and assembly in-house, they control the thermal envelope of the finished product end to end. Competitors are integrating components from separate vendors and discovering thermal variance late in qualification. AOI is designing the thermal system, not assembling one from parts. Their automated testing infrastructure means thermal issues surface during production, not during the customer's qualification cycle. That compresses the single longest stage in the entire ramp. And because the Texas facility is a copy of Taiwan, that thermal system arrives pre-validated. They are not learning how to build a thermally stable transceiver in Texas. They already know. They are just doing it closer to the customer.

Memory manufacturing bottleneck = lithography. Photonics manufacturing bottleneck = thermal qualification.

The structural thesis is sound. But there is always a layer of entropy no model accounts for. Execution risk does not disappear because the framework is good.
1 reply · 3 reposts · 38 likes · 14.4K views
RJC @RJCcapital ·
@pyhrroll Not including RJC Capital is criminal
1 reply · 0 reposts · 0 likes · 93 views
Lin @pyhrroll ·
60-Day Median Return (Long), Top 10:
Global Tech Research (33 calls): +20.4%
Citrini Research (19): +14.9%
FundaAI (158): +11.2%
SemiAnalysis (45): +8.2%
BEP Research (52): +8.0%
Dick Capital (44): +7.6%
Irrational Analysis (41): +7.4%
Fabricated Knowledge (91): +6.0%
Altay Capital (55): +5.5%
TMT Breakout (529): +5.1%
5 replies · 5 reposts · 88 likes · 103.1K views
Lin @pyhrroll ·
My previous post went viral on Reddit overnight (8.5k upvotes in total) so here is an update: I now spend $13,000/year on 31 paid Substack newsletters so you don’t have to. Here’s who actually makes money 🧵
10 replies · 8 reposts · 81 likes · 4.3K views