DWansss✳️
@wanchasss

11.2K posts

MIRACLE INVESTMENT ✨📈

Bangkok, Thailand · Joined October 2017
837 Following · 282 Followers

Pinned Tweet
DWansss✳️ @wanchasss ·
My personal No. 1 goes to Paragon of The Mall Group. It's not a very big mall, but I don't think anyone can match it for luxury.
[photo]
34 · 24.1K · 3.8K · 0
DWansss✳️ retweeted
Nebius @nebiusai ·
Today, we announced our Q1 2026 financial results. Here are the highlights:
- ARR grew 674% year-over-year; full-year guidance has been updated to ARR of $7–9 billion and revenue of $3.0–3.4 billion.
- Adjusted EBITDA margin in our AI cloud business nearly doubled quarter-on-quarter to 45%.
- Contracted capacity now exceeds 3.5 GW, surpassing our 3 GW target; we now expect more than 4 GW of contracted capacity by the end of 2026.
We also announced today that we have secured up to 1.2 GW of power and land for a new owned AI factory in Pennsylvania, bringing our total number of sites exceeding 100 MW to seven.
Read more in our press release: nebius.com/newsroom/nebiu…
[4 photos]
91 · 420 · 2.7K · 451.9K
DWansss✳️ retweeted
ETF Tracker @TheETFTracker ·
🚨 9 NEW ETFS NOW TRADING
Leverage Shares has launched 9 new 2X Long single-stock ETFs on the Cboe:
$COHH - 2X Long Coherent ETF
$HONG - 2X Long Honeywell ETF
$AMAU - 2X Long Applied Materials ETF
$STXU - 2X Long Seagate ETF
$CIEN - 2X Long Ciena ETF
$ETNG - 2X Long Eaton ETF
$AAOG - 2X Long Applied Optoelectronics ETF
$CATG - 2X Long Caterpillar ETF
$SNDG - 2X Long SanDisk ETF
Disclosure: Leverage Shares is a WOLF Financial Partner. This is for informational purposes only, not investment advice. Always read the fund's prospectus.
[4 photos]

Quoting Leverage Shares ETFs @LeverageETFs:
Introducing the new 2x daily leveraged ETFs by @ThemesETFs - out now on @Cboe! Check out the full list below. Investment involves significant risk, including potential loss of principal. The performance multiplier applies to both positive and negative performance.

4 · 6 · 64 · 47.1K
DWansss✳️ @wanchasss ·
To the people claiming investing is dead easy, that they made 10–20% in just a week or a month, all I can say is... you'll find out soon enough. Even 2–3 years isn't enough to judge performance lol
0 · 34 · 45 · 6.1K
DWansss✳️ retweeted
Steve @StevenZWang ·
Updates on $DXYZ & $VCX since last Friday:
- DXYZ premium rose moderately to 32%, while VCX now commands a very rich premium of close to 200%
- Anthropic primary round getting closer to being announced. Secondary market transactions clearing at well above $1T
- SpaceXAI + Anthropic compute partnership announced - should strengthen valuations for both
Attaching my dashboard updated with today's (5/6) closing prices. NFA. YMYD.
[photo]

Quoting Steve @StevenZWang:
$DXYZ is the best public-market Anthropic proxy and remains one of the best SpaceXAI proxies:
- ~35% of NAV in Anthropic
- ~22% of NAV in SpaceX & xAI
- Trading at almost no premium vs VCX's 75%+ premium #NoFinancialAdvice

8 · 17 · 68 · 43.1K
DWansss✳️ @wanchasss ·
Personally, I use a strategy that limits risk more, in exchange for a slower-moving portfolio. I watch the Fear & Greed Index: whenever it goes above 75, I start buying protection for the portfolio, because preserving capital matters just as much as chasing returns 😆 money is hard to come by
0 · 0 · 0 · 39
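The trigger described in the post above can be sketched as a toy rule. This is a hypothetical illustration of a "hedge when the Fear & Greed Index exceeds 75" policy, not the author's actual system; in particular, the linear hedge-sizing function is entirely an assumption.

```python
# Hypothetical sketch of the post's trigger: hedge the portfolio whenever
# the Fear & Greed Index reading goes above a threshold (75 in the post).

FGI_THRESHOLD = 75  # the "above 75" trigger mentioned in the post

def should_hedge(fgi_reading: float, threshold: float = FGI_THRESHOLD) -> bool:
    """Return True when the index signals excessive greed."""
    return fgi_reading > threshold

def hedge_fraction(fgi_reading: float, threshold: float = FGI_THRESHOLD) -> float:
    """Scale the hedge linearly from 0% at the threshold to 100% at FGI = 100.

    This sizing rule is made up for illustration; the post only mentions
    the trigger level, not how much protection to buy.
    """
    if fgi_reading <= threshold:
        return 0.0
    return min((fgi_reading - threshold) / (100 - threshold), 1.0)

print(should_hedge(80))    # True
print(hedge_fraction(80))  # 0.2  (20% of the portfolio hedged)
```

The point of the linear ramp is simply that a more extreme greed reading argues for a larger hedge; any monotone schedule would serve the same illustrative purpose.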
DWansss✳️ @wanchasss ·
This one takes deeper analysis: has the position sizing of that stock grown large enough to add risk to the whole portfolio? Sometimes a stock that looks expensive is actually still cheap, and for the cheap ones the market always has a reason for withholding the premium.
1 · 0 · 0 · 25
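One simple, hypothetical way to run the concentration check raised above: compute each position's weight and flag anything over a chosen cap. The tickers and the 15% cap are made up for illustration and are not from the post.

```python
# Hypothetical concentration check: flag positions whose portfolio weight
# exceeds a chosen cap. Tickers, values, and the 15% cap are invented.

def position_weights(positions: dict[str, float]) -> dict[str, float]:
    """Map each position to its fraction of total portfolio value."""
    total = sum(positions.values())
    return {name: value / total for name, value in positions.items()}

def oversized(positions: dict[str, float], cap: float = 0.15) -> list[str]:
    """Return the tickers whose weight strictly exceeds the cap."""
    return [name for name, w in position_weights(positions).items() if w > cap]

portfolio = {"AAA": 40_000, "BBB": 25_000, "CCC": 20_000, "DDD": 15_000}
print(position_weights(portfolio)["AAA"])  # 0.4
print(oversized(portfolio))                # ['AAA', 'BBB', 'CCC']
```

A weight cap is only the crudest version of the check; the post's fuller question is whether the position's *risk contribution*, not just its weight, has grown too large.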
DWansss✳️ @wanchasss ·
If you ask when you should sell a stock, that's hard to answer. It depends on the returns and risk you can tolerate, plus each person's emotional cost. I ask myself: Is the thesis still intact? How much upside is left relative to the risk? Is there a better opportunity to rotate the money into? And where are we in the market cycle?

Quoting lordachita - ai/acc @0xlordachita:
Would love some ideas on when it's a good time to sell. Lately everyone is showing off their portfolios, it's scary 😂

1 · 0 · 1 · 182
DWansss✳️ @wanchasss ·
This reflects that Thailand is benefiting from the AI supply chain and from manufacturing relocation. It means the broader Thai economy still isn't strong, but the new economy is growing very fast. Money will flow into stocks tied to AI infrastructure, while stocks that rely purely on domestic purchasing power will keep recovering slowly.
0 · 0 · 0 · 24
DWansss✳️ @wanchasss ·
But Moody's still warns that Thailand faces long-term structural problems: an aging society, high household debt, and low productivity continue to weigh on growth. Export growth accelerated to 18.7% in March, driven mainly by AI and electronics.
1 · 0 · 0 · 27
DWansss✳️ @wanchasss ·
DBS is selectively bullish on Thai stocks. The market isn't entering a broad-based recovery; instead, certain groups of stocks will benefit from mega-trend investment as well as government policy.
[2 photos]
1 · 0 · 0 · 31
DWansss✳️ retweeted
TheValueist @TheValueist ·
$VICR $CRBS
Vicor–Cerebras Relationship and Business Exposure: Deep-Dive Analysis

Executive Overview

Vicor's relationship with Cerebras appears to be one of the highest-quality strategic design-win exposures in the AI power-delivery ecosystem. The relationship is not a generic merchant-component supply arrangement. It is a deep architecture-level power-delivery collaboration that began with Cerebras's first wafer-scale engine and has likely evolved into Vicor's current "lead computing customer" ramp for vertical power delivery.

The key confirmed fact is that Cerebras and Vicor jointly implemented a vertical power delivery architecture for the original Cerebras WSE, with Vicor-linked commentary stating that Cerebras's 15kW wafer-scale processor required uniform high-current delivery and that the Cerebras/Vicor VPD approach reduced power-delivery-network resistance by more than 50%.

The key current inference is that Cerebras is almost certainly the unnamed "lead computing customer" referenced repeatedly by Vicor in late 2025 and Q1 2026. Vicor stated on its Q1 2026 call that its "lead computing customer is continuing a steep production ramp of its wafer-scale engine with best-in-class AI inference performance." That language is highly specific and maps directly to Cerebras's product branding and market positioning: Cerebras calls its core chip the Wafer Scale Engine, its CS-3 system is powered by a 4-trillion-transistor WSE-3, and AWS/Cerebras marketing describes CS-3 as optimized for decode and "the world's fastest AI inference system." Vicor has not explicitly named Cerebras in the 2026 call transcript, so this should be treated as a high-conviction inference rather than a legally confirmed customer-disclosure datapoint.

The investment implication is significant. If Cerebras converts its OpenAI, AWS, and sovereign AI demand pipeline into shipped CS-3 or next-generation systems, Vicor should participate through high-value power modules and potentially future licensing or second-source economics. The exact revenue contribution, bill-of-materials content, pricing, and gross margin from Cerebras are not disclosed. However, the scale mismatch is clear: Cerebras is pursuing AI infrastructure commitments measured in hundreds of megawatts and billions of dollars, while Vicor's total 2025 Advanced Products revenue was $248.6M, including $57.4M of royalties, and Q1 2026 Advanced Products plus royalty revenue was $64.9M. A successful Cerebras ramp can therefore be material to Vicor even if Vicor captures only a modest fraction of the CS-3 system value.

The risk is that Vicor's current equity value already discounts a large AI/VPD inflection. At a market cap of approximately $12.1B and a P/E ratio of approximately 85.8x, Vicor is being valued far ahead of its historical power-module profile. On management's nearly $570M 2026 revenue guide, the stock trades at roughly 21.3x forward revenue before any enterprise-value adjustment. That multiple can be justified only if Cerebras-like ramps broaden into a durable AI power-delivery platform with strong margins, capacity expansion, and licensing leverage. It is vulnerable if Cerebras deployment timing slips, if Vicor capacity becomes a bottleneck, if second sourcing dilutes economics, or if the company's VPD advantage proves narrower than current market enthusiasm suggests.

What is confirmed versus what is inferred

The confirmed relationship is clear at the first-generation WSE level. A Vicor-authored 2020 article stated that Cerebras's WSE was composed of 84 processing cells spanning an entire wafer, functioned as a single chip, and was rated at 15kW, requiring "an advanced power architecture whereby power is applied uniformly to each cell at extremely high currents." The same article stated that "Cerebras working in conjunction with Vicor implemented a Vertical Power Delivery (VPD) architecture," and that this architecture reduced PDN resistance by more than 50%. This is the strongest primary-source evidence that Vicor was not merely an off-the-shelf supplier, but was involved in the power architecture enabling the WSE package.

A Vicor-hosted version of the same article also directly frames the relationship as Vicor "helping Cerebras achieve new levels of processing power," while reiterating that the Cerebras WSE required uniform high-current delivery across the wafer. This is important because it confirms that Vicor itself has historically highlighted Cerebras as a reference customer for AI/HPC power-delivery innovation.

The inferred current relationship is based on Vicor's latest customer language. In Q4 2025, Vicor said that a "lead customer for VPD solutions" was ramping a Gen 4 factorized power system before transitioning to a Gen 5-based solution with higher current density and performance, with the transition expected to start in H2 2026. In Q1 2026, Vicor sharpened that language, stating that the lead computing customer was ramping a "wafer-scale engine with best-in-class AI inference performance." There are very few plausible companies that fit that wording. Cerebras is the company whose product is branded as the Wafer Scale Engine and whose current go-to-market messaging is centered on high-speed inference.

The missing datapoints are just as important. Vicor does not disclose Cerebras revenue, Cerebras backlog, per-system content, product SKUs, gross margin by customer, or whether revenue flows directly from Cerebras or through contract manufacturers. Vicor's 10-K states that Advanced Products customers are concentrated in data center and hyperscaler enterprise computing and that a substantial portion of Advanced Products revenue has historically come from a limited number of customers, but it does not identify Cerebras. The 10-K also shows that one customer accounted for approximately 11.1% of total net revenues in 2025, but that customer is not named and should not be assumed to be Cerebras.

Why Vicor matters to Cerebras technically

Cerebras's architecture creates one of the most extreme power-delivery problems in commercial computing. The WSE-3 is not a conventional accelerator card with a GPU and HBM stack. It is a wafer-scale processor with 4 trillion transistors and 900,000 AI cores, and Cerebras describes the CS-3 as consuming up to 23kW. That is the power level of an entire high-density server concentrated around a single wafer-scale compute engine. Conventional lateral power delivery becomes increasingly inefficient and physically constrained when power must be delivered at sub-1V core voltages and very high current levels across a dense processor package.

Cerebras's own system description reinforces the need for an unusual power architecture. The company describes the CS-3 "engine block" as a wafer packaging solution that delivers power "straight into the face of the wafer" to achieve power density that "could not be achieved with traditional packaging," while also providing uniform cooling through a closed internal water loop. That description is highly consistent with Vicor's VPD approach, which moves final-stage current multiplication vertically close to the load instead of routing massive low-voltage current laterally across a PCB or substrate.

Vicor's technical architecture is based on Factorized Power Architecture, or FPA. In Vicor's formulation, FPA separates the power-conversion process into regulation and current multiplication. Regulation occurs first, then a relatively high voltage is distributed across the board, and current multiplication occurs adjacent to the load. The advantage is that high voltage can be moved more efficiently over distance, while very high current is generated only at the point of load, minimizing distribution losses and reducing thermal and impedance challenges.

The power-delivery challenge becomes more severe as processors scale. Vicor's 10-K states that the advantages of FPA are most evident in high-performance computing applications, where GPUs and AI ASICs require high average and peak current. Vicor's Power-on-Package architecture places current-multiplier modules directly on the substrate to reduce losses and free package pins for other functions, and its latest VPD innovation mounts high-performance solutions beneath the motherboard opposite the GPU or ASIC to further reduce distribution losses and improve power density.

Vicor's own technical literature explicitly calls out Cerebras-like clustered ASIC architectures. Vicor states that tightly packed processor clusters leave little room for lateral power delivery and that VPD is "essential" to provide high currents. It also notes that clustered ASIC approaches from Cerebras and Tesla create significant power-delivery and thermal-management challenges, and that delivering peak currents above 1,500A per core can require more than 30 phases per AI ASIC or GPU under conventional multiphase designs, a number Vicor characterizes as difficult, if not impossible, with lateral power delivery.

This is the fundamental reason the relationship matters. Cerebras's performance pitch depends not only on compute architecture but also on the ability to feed an enormous wafer-scale engine with stable, low-noise, high-density power while simultaneously cooling the wafer. Vicor is not a peripheral vendor in that system. It is likely part of the enabling architecture that allows Cerebras to turn a wafer-scale design into a deployable product.

Why the relationship appears to be moving from validation to scale

Historically, Cerebras was a highly differentiated but relatively low-volume customer. The earlier WSE systems were strategic proofs of concept, sovereign AI systems, national lab deployments, and select cloud offerings. That limited the immediate commercial scale for Vicor, even if the design win was technically important.

The current setup is different because Cerebras now has large public demand signals: OpenAI, AWS, and a second IPO attempt tied to a much larger revenue base. Reuters reported that Cerebras revenue increased to $510M in 2025 from $290.3M in 2024, and that Cerebras has tied much of its growth to OpenAI through a $20B multi-year deal under which OpenAI will deploy 750MW of Cerebras chips. Reuters also reported that Cerebras is targeting a valuation of up to approximately $26.6B in its U.S. IPO by offering 28M shares at $115–$125, aiming to raise approximately $3.5B.

AWS is the second major validation point. AWS and Cerebras announced a March 2026 collaboration to deploy CS-3 systems inside AWS data centers and make the solution available through Amazon Bedrock. The architecture disaggregates inference into prefill and decode, with AWS Trainium optimized for prefill and Cerebras CS-3 optimized for decode. AWS's David Brown said the system is designed to produce inference "an order of magnitude faster and higher performance than what's available today." This matters for Vicor because AWS hosting Cerebras hardware turns Cerebras from a specialty hardware vendor into a potentially scaled cloud infrastructure supplier.

Vicor's 2026 commentary aligns with this Cerebras ramp. In Q1 2026, Vicor reported product and royalty revenue of $113M, up 20.2% year over year, and one-year backlog of $300.6M, up 70% sequentially. Management guided to nearly $126M of Q2 revenue and nearly $570M of 2026 revenue. In the same discussion, management linked strong bookings to high-performance computing and specifically cited the lead computing customer's steep wafer-scale engine ramp.

The timing also lines up with Vicor's product-generation roadmap. In Q4 2025, Vicor said its lead VPD customer was ramping Gen 4 before transitioning to Gen 5 in H2 2026. In Q3 2025, Vicor said its Gen 5 VPD solution for the lead customer had met target specifications and was progressing to a Q1 2026 production launch. In Q1 2026, Vicor described a second-generation VPD solution with 3A/mm² current density, current multiplication up to 40, and a 1.5mm-thin package, with the next-generation transition expected to begin before year-end. The most coherent interpretation is that Vicor is currently shipping and ramping Gen 4 for Cerebras-like systems while preparing Gen 5/second-generation VPD for future Cerebras and other AI ASIC platforms.

Commercial model: how Cerebras likely monetizes for Vicor

Vicor's business with Cerebras likely has three monetization layers. The first is product revenue from high-density power modules or current-multiplier components used in Cerebras systems. The second is non-recurring engineering or design support tied to new WSE generations and package integration. The third is potential licensing or alternate-source economics if Vicor's VPD IP becomes embedded in broader AI accelerator designs or if customers require licensed second sources.

Product revenue is the most direct path. Vicor's Advanced Products category is the relevant segment because it includes the proprietary FPA and power-delivery products used in high-performance computing. In 2025, Vicor reported $248.6M of Advanced Products revenue, including $151.5M from direct customers, contract manufacturers, and non-stocking distributors; $34.4M from stocking distributors; $4.2M from NRE; and $57.4M from royalties. This indicates that a large compute customer could show up in several buckets depending on purchasing structure, but most likely in direct/contract-manufacturer revenue and potentially NRE.

The second path is capacity utilization and manufacturing leverage. Vicor's Q1 2026 backlog of $300.6M is nearly 70% higher than year-end 2025 backlog, and the company stated that the backlog represents orders scheduled within the next 12 months. Management also said Fab One had previously been earmarked for approximately a $1B annual revenue run rate, but now appears capable of supporting at least $1.5B through cycle-time and process-step improvements. If Cerebras is the lead ramp, Cerebras demand is helping Vicor move from a historically underutilized or unevenly utilized Advanced Products manufacturing model toward higher fab absorption and margin expansion.

The third path is licensing. This is less directly tied to Cerebras but strategically important. Vicor has stated that its existing licensing revenue does not yet include vertical power; it stems from asserted IP around earlier power-module patents. It also stated that it has "lots of patents with respect to VPD power package" and that those have not yet been asserted. In Q1 2026, management said licensing could eventually reach as much as 50% of product revenues and be nearly 100% margin. For Cerebras specifically, this matters because the Cerebras design win can serve as proof that Vicor's VPD architecture works at the most demanding end of the AI power spectrum, strengthening Vicor's leverage with hyperscalers, OEMs, and potential second-source partners.

Potential revenue magnitude

The exact Vicor content per Cerebras system is not publicly disclosed. Any attempt to assign a precise dollar value per CS-3 would be speculative. The correct analytical approach is to frame scale sensitivity rather than assert a falsely precise estimate.

The scale sensitivity is large. Cerebras says the CS-3 consumes up to 23kW. Reuters reported that OpenAI's Cerebras arrangement involves 750MW of Cerebras chips. A simple power-equivalent calculation implies that 750MW divided by 23kW per CS-3 equals approximately 32,609 CS-3-equivalent systems; a 2GW figure would equal approximately 86,957 systems. These are not shipment forecasts, because the 750MW commitment may refer to deployed compute capacity, could include non-CS-3 or future-generation systems, may be affected by PUE, rack-level infrastructure, redundancy, utilization, and contract timing, and does not identify the unit-level hardware mix. The calculation is still useful because it shows the order-of-magnitude mismatch between Cerebras's planned deployment scale and the historical scale of Vicor's Advanced Products business.

The more practical conclusion is that Vicor does not need to capture an extraordinary percentage of Cerebras system value for the relationship to matter. Vicor's entire 2025 Advanced Products revenue was $248.6M, and total 2025 revenue was $407.7M. If Cerebras/OpenAI/AWS deployment demand becomes real production volume, even a low single-digit percentage of value capture within the power-delivery subsystem could become a material revenue stream relative to Vicor's current size.
Conversely, if Cerebras deployment ramps slower than expected, Vicor’s backlog and AI-growth narrative could prove too optimistic relative to the current equity multiple. 
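The note's power-equivalent arithmetic can be reproduced in a few lines. The 23 kW per CS-3 and the 750 MW / 2 GW deployment figures come from the note itself; as the author stresses, this is an order-of-magnitude illustration, not a shipment forecast.

```python
# Reproduces the note's scale-sensitivity math: how many CS-3-equivalent
# systems a given deployment power figure implies at 23 kW per system.
# Ignores PUE, redundancy, and hardware mix, exactly as the note cautions.

CS3_POWER_KW = 23.0  # Cerebras CS-3 max power draw cited in the note

def cs3_equivalents(deployment_mw: float, system_kw: float = CS3_POWER_KW) -> int:
    """CS-3-equivalent system count implied by a deployment power figure."""
    return round(deployment_mw * 1_000 / system_kw)

print(cs3_equivalents(750))    # 32609  (the note's ~32,609 for 750 MW)
print(cs3_equivalents(2_000))  # 86957  (the note's ~86,957 for 2 GW)
```

The division makes the mismatch concrete: tens of thousands of 23 kW systems against an Advanced Products segment that booked $248.6M in all of 2025.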
Quoting TheValueist @TheValueist:

RESEARCH NOTE - Cerebras April 2026 S-1 and Potential IPO: Commercial Progress Is Real, But Concentration and Infrastructure Execution Still Drive the Underwriting Burden
atlaspeakresearch.com/report/31f3bf
sec.gov/Archives/edgar…

Bottom Line: The April 2026 S-1 shows that Cerebras has advanced materially from the withdrawn 2024 filing. The company is no longer asking investors to underwrite a narrowly framed wafer-scale hardware story built primarily around a single Abu Dhabi counterparty. Revenue scale is now real, OpenAI and AWS provide meaningful strategic validation, and the commercial model is increasingly oriented around specialized inference infrastructure rather than only on-premises hardware shipments. Cerebras is therefore more commercially relevant, more strategically embedded, and more difficult to dismiss than it was in the original IPO attempt.

That said, the underwriting burden remains high. Revenue, receivables, financing, and future commercial execution are still tied to a concentrated counterparty stack that runs through MBZUAI, G42, OpenAI, and AWS, while the business now also carries long-duration infrastructure commitments, service-level liability, site and localization constraints, margin pressure from cloud buildout, and a more shareholder-unfriendly governance and dilution structure. The right conclusion is not that the story is broken, but that Cerebras has evolved into a more credible and more complex inference-infrastructure platform whose upside is real and whose operational, contractual, and capital-structure risks remain too large to ignore.

11 · 14 · 108 · 44.2K