

Man of Tao

@explrilearning
be water https://t.co/8aCegWUBCq









RFK just announced that he is planning to move 12 peptides from Category 2 to Category 1. If you are new to peptides, you can find useful information on this blog. peptidepeppers.com/articles/pepti…

Disclaimer: We do not sell, prescribe, or distribute peptides. Any discussion of peptides is for informational and educational purposes only and should not be considered medical advice or an offer to provide such products.


Michael Phelps won 23 Olympic gold medals using a mental technique most athletes ignore:

"The biggest thing that really separated me through my career was my mental game. Everything that was in between my ears."

Michael explains how he used visualization:

"When I would visualize, I'd visualize every single thing getting up to a meet, probably a month or so in advance. What could happen. What I want to happen. And what I don't want to happen. Because when it happened, I was prepared for it."

He describes the goal:

"When I got to a swim meet, there's nothing I can control at that point except what I do. I can't control what anybody else does. So I want to know how the race could go, how I don't want the race to go, and in a perfect world, how the race should go. So I could get behind the block and not have to think about anything."

His coach Bob Bowman reveals how they trained this skill:

"When Michael was young, I gave his mom a book of progressive relaxation. Before he'd go to bed at night, she would read this progression of things: clench your fists, work through your whole body. He got so good she'd just open the book, say two things, and he'd be asleep."

Bowman explains why visualization works:

"The brain cannot distinguish between something that's vividly visualized and something that's real. By the time Michael steps up on the block at the Olympics, he's swum that race hundreds of times in his mind. All he has to do is shut everything down and it goes on autopilot."

Michael adds the key detail most miss:

"When I would visualize, it would be what you want it to be, what you don't want it to be, what it could be. So you're always ready for anything. If I have a suit rip, fine, I need another suit, put it on. Any small thing that could go wrong, I'm ready for."






$AMD NEEDS $TSM to hit $1,000 a share 🧵 Understanding 2nm Wafer Capacity ✍️

Context: Current EPYC CPU demand is 15-20m units and is projected to double from H2 2026 to H1 2027, meaning supply is far behind demand: companies are burning through an entire year's token budget in less than 30 days. Agentic AI consumes heavy CPU resources because it operates through continuous, multi-step Reasoning-Action-Observation loops rather than single-pass inference.

TSMC's N2 (2nm) early-phase production, launched in late 2025, has already positioned the company for a ramp far faster than typical new-node timelines, directly supporting the bullish case for scaling from ~25-35k to 60k+ wafers per month and unlocking millions of extra AMD EPYC Venice server CPUs. N2 is the first GAA/nanosheet node, with volume production kicking off on schedule (and with strong initial yields) at dedicated fabs in Taiwan. Combined output from the first lines started at ~35k-40k WPM (some early-2026 reads put it at 40k-60k aggregate). Yields opened strong at ~70% (with Taiwan sources reporting 75-80% overall, and higher on certain layers), well above historical new-node averages. This stable yield is the key technical enabler of the faster-than-standard ramp, as it allows parallel equipment qualification and minimal downtime.

1. How Many 2nm Fabs Are Online and Planned

TSMC's N2 production is centered in Taiwan (with U.S. contributions later). Fabs here are massive "Gigafab" sites with multiple phases (P1, P2, ...), each adding tens of thousands of WPM. Current (April 2026): we have 2 primary fabs for 2nm.
~Fab 20 (Hsinchu Baoshan): Phase 1 online since late 2025. Initial ~20k-25k WPM contribution.
~Fab 22 (Kaohsiung): Phase 1 online (first to start volume production, in Dec 2025); Phase 2 now in trial production / early ramp.
As of now, 35k-60k WPM of output is expected, and the fabs are fully booked for 2026.
2. TSMC's Aggressive Expansion, Which Will Help $AMD Service Explosive Agentic AI Demand

~Fab 20: P3 and P4 under construction (for N2 and below-2nm).
~Fab 22: P3 mostly complete; P4/P5 already breaking ground. All five phases are targeted to be fully operational by Q4 2027.
~Additional new sites: three more in Tainan, plus expansions in central/northern parks (Taichung, Chiayi).
~TSMC target: aggregate N2 capacity of 120k-150k WPM (2026). This is a ~3-4x jump from early-2026 levels in roughly 9-12 months, beating my own conservative "1+ year per major phase" timeline, thanks to 70-80% yields and capex ($60B+ baseline for 2026, potentially higher).
~Arizona Fab 21 Phase 3/4 (N2-capable): equipment install targeted for 2026, production ramp 2027-2028 (initial ~20k WPM for N2 in some phases). TSMC has committed ~30% of future sub-2nm output to U.S. fabs long-term.
=> 2 fabs online today, 4 more fabs/phases scaling toward the end of 2026, and roughly 10 fabs actively planned over the next 12-24 months. The timeline could compress further with higher demand from AMD.

3. The question is: how much can TSMC ramp up supply to support AMD's significant demand for EPYC Venice in H2 2026?

Venice is AMD's first HPC product on N2 (taped out April 2025, silicon validated with excellent perf/efficiency), using a chiplet design with N2 CCDs, 3nm I/O dies, and advanced 3D SoIC packaging. Helios racks (AMD's new rack-scale AI platform) represent committed baseline volume ( @OpenAI , $Meta, $MSFT, $ORCL, LumaAI...), but TSMC's accelerated multi-fab expansion provides substantial headroom for standalone EPYC sales, hyperscaler direct purchases, and further Helios upside. TSMC CEO C.C. Wei expects a "faster ramp in 2026" due to higher yield and parallel fab buildouts on 2nm. 1m additional EPYC Venice units would require 20-25k wafers (estimate). It is possible to ramp up 7-10m additional EPYC Venice units to service Agentic AI demand, or roughly 13-15k additional wafers per month from H2 2026 to H1 2027.
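The wafer arithmetic above can be sanity-checked with a quick back-of-envelope calculation. This sketch uses the thread's own estimates (20-25k wafers per million Venice CPUs, 7-10m extra units over the 12-month window); none of these inputs are confirmed TSMC or AMD figures.

```python
# Back-of-envelope check of the thread's wafer math.
# All inputs are the thread's estimates, not confirmed figures.
WAFERS_PER_MILLION_CPUS = (20_000, 25_000)  # thread's estimate for EPYC Venice
EXTRA_UNITS_M = (7, 10)                     # extra Venice units, in millions
RAMP_MONTHS = 12                            # H2 2026 through H1 2027

def extra_wpm(units_m: float, wafers_per_million: float) -> float:
    """Additional wafers per month needed to build units_m million CPUs."""
    return units_m * wafers_per_million / RAMP_MONTHS

low = extra_wpm(EXTRA_UNITS_M[0], WAFERS_PER_MILLION_CPUS[0])
high = extra_wpm(EXTRA_UNITS_M[1], WAFERS_PER_MILLION_CPUS[1])
print(f"extra WPM needed: {low:,.0f} to {high:,.0f}")
```

The low end (7m units at 20k wafers/million) works out to roughly 11.7k extra WPM and the high end (10m at 25k) to roughly 20.8k, so the thread's 13-15k additional WPM figure sits inside this range.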
This will enable AMD to:
~Fulfill hyperscaler Venice dense-rack orders faster.
~Capture standalone server CPU share (vs. Intel).
~Support "Agentic AI" pricing power and multi-year data-center growth.

Conclusion: TSMC's accelerated N2 (2nm) ramp, already delivering strong ~70-80% yields from the two primary Gigafabs (Fab 20 in Hsinchu and Fab 22 in Kaohsiung) that came online in late Q4 2025, makes the addition of 7-10 million extra EPYC Venice units for AMD from H2 2026 through H1 2027 not only technically feasible but a highly probable outcome of disciplined execution under explosive AI/HPC demand. Aggregate N2 output is scaling from the current ~35k-50k wafers per month (WPM) baseline to 120k-150k WPM by year-end 2026, with four-plus effective plants/phases contributing and further phases (including Fab 20/22 expansions plus new Tainan sites) coming online in parallel. This delivers tens of thousands of incremental WPM precisely during the 12-month window in question, compressing the "typical" 1+ year per major phase timeline thanks to stable yields and $52-56 billion in 2026 capex.

Each ~76 mm² Zen 6 CCD (12-core/48 MB L3 standard; up to 32-core Zen 6c dense variant) yields ~450-640 good dies per 300 mm wafer at current-to-mature N2 yields. With Venice's chiplet design (typically 8 CCDs + I/O die + advanced 3D SoIC packaging), blended output across the full SKU mix lands in a realistic 40-60 CPUs per wafer range during the ramp. Even conservatively allocating AMD 10-15%+ of incremental N2 capacity, the math easily supports multi-million extra units on top of the committed Helios rack baseline. TSMC isn't just riding its J-curve; it is breaking out of the J-curve, the historical new-node ramp curve, because yields are strong, fabs are being stood up in parallel, and AI urgency is driving prioritization and capex flexibility.
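The dies-per-wafer claim can be sketched with the classic dies-per-wafer approximation (usable wafer area minus an edge-loss term). The ~76 mm² die area and 70-80% yields are the thread's estimates, and the 8-CCD configuration is a typical high-core-count assumption rather than a confirmed Venice spec.

```python
import math

# Rough die-per-wafer sketch using the standard approximation.
# Die area and yield figures are the thread's estimates, not TSMC data.
WAFER_DIAMETER_MM = 300
DIE_AREA_MM2 = 76     # thread's Zen 6 CCD estimate
CCDS_PER_CPU = 8      # assumed high-core-count EPYC configuration

def dies_per_wafer(diameter_mm: float, die_area_mm2: float) -> float:
    """Classic approximation: wafer area / die area, minus an edge-loss term."""
    radius = diameter_mm / 2
    return (math.pi * radius**2 / die_area_mm2
            - math.pi * diameter_mm / math.sqrt(2 * die_area_mm2))

gross = dies_per_wafer(WAFER_DIAMETER_MM, DIE_AREA_MM2)  # ~854 candidates
for y in (0.70, 0.80):
    good = gross * y
    print(f"yield {y:.0%}: ~{good:.0f} good CCDs, "
          f"enough CCDs for ~{good / CCDS_PER_CPU:.0f} CPUs per wafer")
```

This yields roughly 854 gross candidates per 300 mm wafer and ~598-683 good CCDs at 70-80% yield; the thread's 640 upper bound matches ~75% yield, while its 450 lower bound implies earlier-ramp yields. Dividing by 8 CCDs overstates CPUs per wafer, since capacity also goes to I/O dies and packaging losses, which is consistent with the thread's lower blended figure of 40-60 CPUs per wafer.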
This supply tailwind directly validates and amplifies the multi-year bullish case for AMD, turning 2026-2027 into a revenue and margin inflection point driven by Agentic AI server demand, pricing power, and share gains. Not Financial Advice!

@Scobleizer what's UP!! Did you have anything to do with the change to X API cost for GET calls?? I got the following email on Thursday...

Hello X API developers,

We’re excited to announce an update to our X API pricing that makes accessing your own data more affordable than ever. Owned Reads are requests made by your own developer app for your own posts, bookmarks, followers, likes, lists & more. Starting Monday, April 20, 2026, these endpoints will be priced at $0.001 per request (equivalently, 1,000 resources for $1):

GET /2/users/{id}/bookmarks
GET /2/users/{id}/blocking
GET /2/users/{id}/muting
GET /2/users/{id}/pinned_lists
GET /2/users/{id}/tweets
GET /2/users/{id}/mentions
GET /2/users/{id}/liked_tweets
GET /2/users/{id}/followers
GET /2/users/{id}/following
GET /2/users/{id}/owned_lists
GET /2/users/{id}/followed_lists
GET /2/users/{id}/list_memberships

This change significantly lowers the cost of common operations such as fetching your own posts, followers, likes, bookmarks, lists, and more.

Additional updates effective Monday, April 20, 2026:

Writes via X API will increase to $0.015 per post (from $0.01). This applies to the main posting endpoint: POST /2/tweets.
Posting a URL via X API will be priced at $0.20 per post, except for summoned replies (which will remain at $0.01).
Following, Likes, and Quote-Posts via API Writes will be removed for all self-serve tiers. This affects the following actions:
POST /2/users/{id}/following (and DELETE for unfollow)
POST /2/users/{id}/likes (and DELETE for unlike)
Quote-posting via POST /2/tweets (when using the quote_tweet_id parameter)

These adjustments reflect our ongoing commitment to supporting the developer community while ensuring sustainable platform operations and helping you build even better experiences on X. For full pricing details, including the complete rate card and updated documentation, visit the X API Pricing page.
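For a rough sense of what the new rate card means in practice, here is a minimal cost sketch. The per-request rates come straight from the quoted email; the monthly request volumes are invented purely for illustration.

```python
# Cost sketch for the pricing change quoted above.
# Rates are from the quoted email; volumes below are made-up examples.
OWNED_READ_RATE = 0.001   # per Owned Reads GET request (new price)
POST_RATE_NEW = 0.015     # per POST /2/tweets (up from $0.01)
URL_POST_RATE = 0.20      # per post containing a URL

def monthly_cost(reads: int, plain_posts: int, url_posts: int) -> float:
    """Monthly bill under the new rate card (self-serve tiers)."""
    return (reads * OWNED_READ_RATE
            + plain_posts * POST_RATE_NEW
            + url_posts * URL_POST_RATE)

# Example: a bot that reads its own data heavily and posts modestly.
cost = monthly_cost(reads=50_000, plain_posts=1_000, url_posts=100)
print(f"${cost:,.2f}")  # $50 reads + $15 posts + $20 URL posts = $85.00
```

Note how the URL-post surcharge dominates quickly: at $0.20 each, a hundred link posts cost more than a thousand plain posts under the new rates.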

