Man of Tao

9K posts

@explrilearning

be water https://t.co/8aCegWUBCq

Bangkok, Thailand · Joined December 2014
5.9K Following · 525 Followers
Man of Tao retweeted
Dan Go
Dan Go@CoachDanGo·
How to make it:
1. Prep this the night before (you'll see why)
2. Put 300g Greek yogurt into a bowl with 1 scoop whey protein and a tablespoon of psyllium husk, and mix
3. Add 100g frozen fruit of choice
4. Put in the fridge overnight
Overnight, the psyllium husk expands while the frozen fruit thaws, giving you a hearty breakfast you can eat right out of the fridge.
Man of Tao retweeted
CooperBaggs 💰🍞
CooperBaggs 💰🍞@edgaralandough·
How to restart your life (in six simple and easy steps): 1. Accept where you are
Man of Tao retweeted
Kyronis
Kyronis@kyronis_talks·
🚨 SHOCKING: Grok can now do your ENTIRE WEEK'S meal prep like a full paid team of Mayo Clinic + Stanford dietitians on 24/7 call… completely FREE. I ran Grok for weeks straight and it destroyed my old results:
- perfectly dialed-in macros
- zero decision fatigue
- saved me $500/month on groceries
- habits that actually stick long-term
Kyronis tweet media
Man of Tao retweeted
Ihtesham Ali
Ihtesham Ali@ihtesham2005·
A Stanford mathematician spent 40 years watching brilliant students freeze in front of hard problems. Not because they lacked intelligence. Because nobody had ever taught them what to do before they started solving.

His name is George Pólya, and the book he wrote in 1945 has never gone out of print. It has sold over a million copies. Marvin Minsky, the man who built the first neural network machine at MIT, said publicly that everyone should know this work. Engineers, mathematicians, and computer scientists treat it as scripture. Most people have never heard of it. Here is the framework buried inside it that changed how I think about every hard problem I face.

Pólya watched the same failure repeat itself across decades of students. A problem would be presented. The student would stare at it for a moment, feel the first wave of anxiety, and immediately start calculating. Not because calculating was the right next step. Because calculating felt like doing something, and doing something felt better than sitting with the discomfort of not knowing what to do. The calculation was almost always wrong. Not because the student lacked the skill to execute it. Because they had not yet understood what they were being asked. Pólya called this the most neglected step in all of problem solving, and he spent the rest of his career trying to make people take it seriously.

Step one is to understand the problem. Not skim it. Not assume you know what it is asking because you have seen something similar before. Understand it. Completely. He gave students a specific set of questions to force this: What is the unknown? What are the given conditions? Can you draw a figure? Can you restate the problem in your own words without looking at it? That last one is the filter. If you cannot restate a problem in your own words, you do not understand it. You have only read it. Most people skip this entirely and wonder why they get stuck.

Step two is to make a plan. Not to execute. To plan. Pólya documented every heuristic he could observe in successful problem solvers, and one pattern appeared more than any other. When a problem feels impossible, find a simpler version of it and solve that first. Not because the simpler version is the goal. Because solving it gives you a foothold, a method, a partial structure you can carry back to the original problem and build from. He phrased it with precision: if you cannot solve the proposed problem, try first to solve some related problem. Could you imagine a more accessible related problem? That question alone is worth more than most problem-solving courses.

Step three is to carry out the plan. This is the step everyone thinks is the whole game. It is not. It is the third of four. And Pólya spent the least time on it because it is the most obvious. Once you understand the problem and have a plan, execution is mostly patience.

Step four is the one almost nobody does. Look back. Not to check the arithmetic. To ask a different set of questions entirely. Can you verify the result by a different method? Can you use this result or this method to solve a different problem? What would you do differently next time? This is where the real learning lives, and almost no one goes there. The look-back step is not about the problem you just solved. It is about building a library of methods that transfers to the next problem, and the one after that. Every expert problem solver Pólya studied had this habit. Every struggling student skipped directly from the answer to the next question on the page, carrying nothing forward, starting from zero every time.

Pólya's deepest insight was not a technique. It was a diagnosis. The reason most intelligent people feel bad at problem solving is not that they lack the ability to reason. It is that they conflate understanding a problem with having read it. They conflate having a method with starting to work. They conflate getting an answer with having learned anything. These are not the same things. They never were.

The students who get genuinely good at hard problems are not the ones who practice more. They are the ones who slow down at the beginning and the end, at the two moments every instinct tells them to rush. The problem is almost always not as hard as it looks at the start. You just haven't understood it yet.
Ihtesham Ali tweet media
Man of Tao retweeted
CyrilXBT
CyrilXBT@cyrilXBT·
I was paying $400/month for tools that Claude now replaces for free. The moment I discovered MCP servers, my entire stack changed. 15 servers. One AI. Zero subscriptions. The solo builder era is just getting started.
Dami-Defi@DamiDefi

x.com/i/article/2042…

Man of Tao retweeted
🍓🍓🍓
🍓🍓🍓@iruletheworldmo·
anthropic have been shipping at an insane rate and it’s easy to fall behind, articles like this are super useful. make sure to bookmark so you can use them all.
Defileo🔮@defileo

x.com/i/article/2045…

Man of Tao retweeted
Milk Road
Milk Road@MilkRoad·
HUGE: @Coinbase just dropped a new agent-to-agent marketplace that will help accelerate Agential Commerce onchain. (Save this - you'll come back to it).

Think of it like the app store, but for robots...

Step 1: Your AI agent needs a capability it doesn't have. Maybe it needs real-time data, image generation, or trade execution.
Step 2: Instead of you hunting down APIs and managing keys, your agent discovers the service it needs from the Agentic(.)Market.
Step 3: The agent pays for and uses that service autonomously. Transaction complete. No human approval needed.

The broader idea being... Agents won't use the same tools as humans - they'll need their own discovery layer built specifically for how they operate. Agentic(.)Market wants to be that layer - where agents can find, evaluate, and instantly connect to thousands of services using the x402 protocol - which is already doing NUMBERS:
→ 165M+ transactions.
→ ~$50M in volume.
→ ~100k services.
→ 480k+ agents already buying.

All with zero API keys, zero accounts, and zero logins. One terminal command arms your agent with everything it needs to start!

If you're building agents → you can now add capabilities without hunting down APIs one by one.
If you're a service provider → you can list your services and start earning revenue from agent buyers immediately.

Hell yeah!
nick.base.eth 🛡@Nick_Prince12

x.com/i/article/2046…

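The three-step flow in the Milk Road post follows the shape of an HTTP 402 ("Payment Required") retry loop, which is the idea behind x402. Here is a minimal sketch of that loop with a simulated service and wallet; `fake_service`, `pay_invoice`, and the `X-PAYMENT` header name are illustrative assumptions, not the actual x402 spec:

```python
# Hypothetical sketch of a pay-per-call agent flow (not the real x402 API).

def fake_service(headers):
    """Simulated paid endpoint: quotes a price, then serves the resource."""
    if "X-PAYMENT" not in headers:
        return 402, {"price": "0.01 USDC", "pay_to": "0xSERVICE"}
    return 200, {"data": "real-time quote"}

def pay_invoice(invoice):
    """Stand-in for a wallet signing a payment; returns a payment proof."""
    return f"proof:{invoice['pay_to']}:{invoice['price']}"

def agent_fetch():
    status, body = fake_service({})        # 1. agent calls the service it discovered
    if status == 402:                      # 2. server responds with a price quote
        proof = pay_invoice(body)          # 3. agent pays autonomously
        status, body = fake_service({"X-PAYMENT": proof})  # 4. retry with proof
    return status, body

print(agent_fetch())
```

The key design point matching the post: no account or API key exists anywhere in the loop; payment proof attached to the retried request is the only credential.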
Man of Tao retweeted
Andrew D. Huberman, Ph.D.
The new Huberman Lab episode is out: How to Better Regulate Your Emotions | Dr. Marc Brackett (@drmarcbrackett)

0:00 Marc Brackett
2:55 Emotion Regulation
5:53 Emotion Mindset, Anxiety; Good or Bad Emotions?
11:25 Sponsors: Joovv & Lingo
13:54 Permission for Happiness; Gender, Emotion Suppression
22:13 Young Men, Vulnerability, Incapable; Gay Men
31:00 Boys & Men, Crying; Emotion Socialization
37:34 Sponsor: AG1
38:58 Physical Interaction; Rough/Tumble Play, Teaching Emotion Regulation
46:47 Emotion Calibration, Tools: Leaders & Being a Role Model; Meta-Moment
56:15 Meditation & Stress Tolerance, Tool: Label Emotions; Childhood
1:03:12 Sponsor: LMNT
1:04:32 Understand Your Assumptions, Tool: Intentional Co-Regulation
1:12:09 Vocabulary & Rethinking Emotion, Tool: Reframing
1:15:49 Emotional Intelligence Training, Self-Evaluation
1:22:15 Living with Discomfort & Emotional Intelligence
1:27:01 Marc's Work & Criticism; Emotion "Leakage" & Switching Mindset
1:34:19 Sponsor: Rorra
1:35:32 Excitement, Positive Emotion; Modern Concerns, AI & Disconnection
1:45:11 Major Societal Challenges & Everyday Progress
1:54:38 Physical/Emotional Identity & Envision Best Self, Tool: Meta-Moment
2:05:33 Emotional Intelligence
2:12:46 Curiosity & Compassion; Reflection, Identity
2:19:32 Point of Connection Game
2:25:02 Zero-Cost Support, YouTube, Spotify & Apple Follow, Reviews & Feedback, Sponsors, Protocols Book, Social Media, Neural Network Newsletter

Includes paid partnerships.
Man of Tao retweeted
AI Edge
AI Edge@aiedge_·
Mastering Claude Cowork should be your #1 AI priority right now. Over the past few months, I've clocked 100+ hours on Cowork. This is the cheatsheet I wish I had from day one. How to master Claude Cowork - and get AI to do real work (as a beginner):
AI Edge tweet media
Man of Tao retweeted
Squiggly Hair Shanks
Squiggly Hair Shanks@redhairshanks86·
let me be ABUNDANTLY clear: every now and then, new opportunities arise
- currently ai and robotics are the highest growth industries
- the golden age of dropshipping was 2020 - 2022
- even when covid started, you had several months to become an instant multi millionaire with test centers and selling masks
- crypto was a GODDAMN GOLDMINE in 2017

and the pattern is ALWAYS identical:
- every new industry starts in EASY MODE - you can make tons of money with no technology and no edge. just being there will make you rich or at least semi-rich
- then more players enter. serious players, investment bankers, meta and google devs, consultants, etc. competition increases
- the industry becomes efficient. margins go down

it's LITERALLY always the same. crypto today is almost efficient. stay in crypto if you want to, i will be in crypto as well, but DO NOT say NO to other opportunities or you will be left behind. ideally, you want to be EARLY to a new trend so you can make money on easy mode.

now here is the alpha: we already KNOW what the next rising industry will be. not the only one, obviously, but one of them. it's peptides.

peptides will get "legalised" in july 2026, in 3 months. you have 3 months to get ready in this market. it really doesn't happen often that you get a 3 month headstart to prepare. you KNOW it will be a large trend. you also know that competition will be fierce, bc the double-coincidence of "new industry" and "really good ai tools" will make the first few months of peptides insanely competitive. but you can become a millionaire if you know what to do, and the very first thing to do is to READ UP on what they are, to understand the market and to identify gaps. here is an article that summarises the status quo
Aurelius Health Group@Aurelius_Health

RFK just announced that he is planning to move 12 peptides from Category 2 to Category 1. If you are new to peptides, you can find useful information on this blog. peptidepeppers.com/articles/pepti… Disclaimer: We do not sell, prescribe, or distribute peptides. Any discussion of peptides is for informational and educational purposes only and should not be considered medical advice or an offer to provide such products.

Man of Tao retweeted
Sosa | Mental Strategist
Sosa | Mental Strategist@MetaMorpehus·
Visualisation to upgrade your self image:

Close your eyes. Become aware of the sensations in your body. Focus on how it feels to breathe slower. Do this as you slowly relax every muscle in your body. Head to toe. Within five minutes you'll notice a tingling sensation in your palms while you relax. Once that sensation arrives, you will be able to visually go within yourself to create permanent changes in the subconscious: transforming who you believe yourself to be.

Now for the next few minutes, allow yourself to mentally recall a time when you won.
>The first time you felt truly loved.
>A risk you took that paid off.
>The first time you realized you were capable of more than you thought.

Experience the scene fully. You might notice warm feelings in your chest as you replay these memories. This is the good part.

Fly to the future and imagine the greatest version of you. Notice how ASSERTIVELY they stand. Notice how they appear. Sense their confidence. They've overcome the things that keep you up at night. They've built what you've only imagined. They live life knowing exactly who they are. Allow the image to become bigger, brighter... Time will slow as your subconscious examines every detail.

Now imagine how it would feel if this was you right now. Picture yourself in their shoes. Can you feel it? Linger there for five minutes. You find yourself softly smiling, knowing this is the happiest and most relaxed you have felt in a long time. Lie there for a while. Enjoy this moment. Know that you can return here whenever you wish, exactly to this place, where you feel exactly as you do now. All you have to do is close your eyes and imagine yourself back here. You feel rejuvenated by that thought.

Open your eyes and interact with the world from this state. Believe it or not, this is what you are eventually supposed to feel every single day. You will begin to notice that all the things you want to be are already within. Few take the time to practice this.

Do this daily and you begin to rewire your mind. Change your beliefs. Think, act, and become the person you've seen glimpses of throughout your life.
Sosa | Mental Strategist tweet media
Jaynit@jaynitx

Michael Phelps won 23 Olympic gold medals using a mental technique most athletes ignore:

"The biggest thing that really separated me through my career was my mental game. Everything that was in between my ears."

Michael explains how he used visualization: "When I would visualize, I'd visualize every single thing getting up to a meet, probably a month or so in advance. What could happen. What I want to happen. And what I don't want to happen. Because when it happened, I was prepared for it."

He describes the goal: "When I got to a swim meet, there's nothing I can control at that point except what I do. I can't control what anybody else does. So I want to know how the race could go, how I don't want the race to go, and in a perfect world, how the race should go. So I could get behind the block and not have to think about anything."

His coach Bob Bowman reveals how they trained this skill: "When Michael was young, I gave his mom a book of progressive relaxation. Before he'd go to bed at night, she would read this progression of things: clench your fists, work through your whole body. He got so good she'd just open the book, say two things, and he'd be asleep."

Bowman explains why visualization works: "The brain cannot distinguish between something that's vividly visualized and something that's real. By the time Michael steps up on the block at the Olympics, he's swum that race hundreds of times in his mind. All he has to do is shut everything down and it goes on autopilot."

Michael adds the key detail most miss: "When I would visualize, it would be what you want it to be, what you don't want it to be, what it could be. So you're always ready for anything. If I have a suit rip, fine, I need another suit, put it on. Any small thing that could go wrong, I'm ready for."

Man of Tao retweeted
Charly Wargnier
Charly Wargnier@DataChaz·
ANTHROPIC IS LITERALLY JUST HANDING US THE BLUEPRINT🤯 Their new 33-page guide on Claude Skills is the cheat code. Make sure to bookmark this before it gets lost in your feed. Link in 🧵↓
m0h@exploraX_

x.com/i/article/2039…

Matthew Berman
Matthew Berman@TheMattBerman·
I make UGC with @openclaw for less than $5 each. here's the system that pumps them out on demand:

step 1: mine real language
→ openclaw scrapes Trustpilot, Reddit, Amazon reviews for your niche
→ pulls the exact words customers say

step 2: build a creator
→ nano banana generates characters you reuse across videos
→ same person, same personality, 50 videos deep

step 3: pick your format
→ yapper? podcast? hook and demo? wall of text? transformation?
→ each one has a shot-by-shot blueprint built in. agent picks the right one

step 4: animate with @Kling_ai /Sora (rip)
→ feeds frame 1 into image-to-video
→ Sora for talking heads, Kling for b-roll and product shots
→ phone quality, not studio quality. that's the point

step 5: nail the voice + captions
→ @elevenlabsio speech to speech + instant clone
→ real voice with room sound
→ ffmpeg overlays captions

step 6: quality gate
→ gemini flash on @OpenRouter scores every clip
→ only videos that pass the "real person" test make it out

input: brand + product
output: unlimited UGC video ads on demand
$500 per UGC video → less than $5

I packaged the entire system as the ScrollClaw Kit. 6 @openclaw skills:
- persona (customer research + creator profiles + script)
- first-frame (nano banana → consistent AI creator face)
- animate (sora/kling → talking head video)
- b-roll (kling → product and environment shots)
- assemble (elevenlabs voice + ffmpeg captions + post-production)
- score (7-criteria AI virality gate)

giving it away free. comment UGC + like + follow (must follow so i can DM)
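Step 5's caption overlay is the one fully generic piece of this pipeline: ffmpeg's `subtitles` filter burns an .srt file into the video stream. A minimal sketch of building that command (file names are placeholders; actually running it assumes ffmpeg is installed):

```python
import subprocess

def caption_cmd(video_in, srt_file, video_out):
    """Build an ffmpeg command that burns SRT captions into the
    video stream while copying the original audio untouched."""
    return [
        "ffmpeg", "-y",
        "-i", video_in,                  # source clip
        "-vf", f"subtitles={srt_file}",  # burn-in caption overlay
        "-c:a", "copy",                  # keep original audio as-is
        video_out,
    ]

cmd = caption_cmd("ugc_raw.mp4", "captions.srt", "ugc_final.mp4")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually render
```

Re-encoding only the video stream (`-c:a copy`) keeps the cloned voice track bit-identical, which matters if the voice has already passed a quality gate.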
Man of Tao retweeted
NVIDIA AI
NVIDIA AI@NVIDIAAI·
Here's your weekend project. Build a fully local, sandboxed AI assistant. Step-by-step tutorial to build your always-on agent: 🦞 on openclaw ✅ with NVIDIA NemoClaw ✨ using NVIDIA DGX Spark Get started: developer.nvidia.com/blog/build-a-s…
NVIDIA AI tweet media
Man of Tao retweeted
Hanako
Hanako@hanakoxbt·
an ex-Citadel quant told me a salary is a short position on your own time

we were at a dinner in new york. friend of a friend. he asked what i do. "software engineer. $4,200 a month"

he said one simple thing: "you're mass-selling the most valuable asset you have. your hours. at a fixed price. with no upside. that's a short position on your own life"

i didn't have a response. then he showed me his phone. a terminal. live trades. +$39,453 running for 6 weeks. "i built this with Claude Opus 4.7 and one open source dataset. took a weekend. haven't touched it since"

i asked how. "gave Opus one repo and one prompt. find who wins on prediction markets. find why. copy their edge" github.com/Jon-Becker/pre… 2,900 stars. 36GB. every trade on Polymarket and Kalshi. every wallet. every resolution.

Opus read the whole dataset friday night. by sunday it had a live terminal scanning 1,400 markets per hour. 8 detectors running in parallel. entering when void score exceeds 90. exiting when it drops below 75. no human override.

> Senate filibuster. locked at 42c. now 98c. +$1,133.
> Neuralink approval. entered 52c. now 98c. +$1,396.
> Fed 0% rate. entered 35c. now 97c. +$1,147.
> BTC 250K. entered 8c. now 98c. +$1,258.
> Megaquake. entered 6c. now 97c. +$1,039.

the part that broke something in me: latency arbitrage. Polymarket updates in 19ms. Kalshi in 63ms. the bot trades the 44ms gap between them. automatically. every time. "we had four people at Citadel doing what your bot does with latency alone"

copy mirror tracking 6 positions:
> Senate fili +$1,252. void signal. 10%.
> BTC 250K +$958. zero signal. 2%.
> Neuralink +$1,384. sign signal. 9%.
> Fed 0% +$863. fill signal. 8%.
> Megaquake +$1,070. bridge signal. 18%.
> META AI +$979. gas signal. 5%.

972 trades. 83% win rate. kelly f+ 0.101. avg profit $10.65. drawdown -1.4%. maker rebate tracker collecting $203 across four platforms while the bot trades. Polymarket +$114. Kalshi +$40. dYdX +$28. Betfair +$21. $1,800 seed. +$39,453.

i quit the next morning. one email. "i'm done. thank you for everything."

copytrade setup: kreo.app/@1743116

texted him a week later. "you ruined a perfectly stable career" he replied "no. i closed your short position"
Hanako@hanakoxbt

x.com/i/article/2042…

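The "kelly f+ 0.101" figure in the story refers to the Kelly criterion, the standard formula for sizing bets from win probability and payoff odds. A quick sketch of that formula (the example numbers below are illustrative, not the bot's actual parameters):

```python
def kelly_fraction(p_win, odds):
    """Kelly criterion: f* = p - (1 - p) / b, where p is the win
    probability and b is the net payoff per unit staked.
    A negative result means the bet has no edge: don't bet."""
    return p_win - (1 - p_win) / odds

# e.g. a 60% win rate at even odds suggests staking 20% of bankroll
print(round(kelly_fraction(0.60, 1.0), 3))
```

In practice traders stake a fraction of the Kelly number ("fractional Kelly"), since the formula assumes the win probability is known exactly, which it never is.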
Man of Tao retweeted
Mike
Mike@MikeLongTerm·
$AMD Inference Queen vs Training King $NVDA 🧵

@AMD : $453.87B market cap
P/S: 13x | Fwd P/S less than 6x
P/E: 106x | Fwd P/E 20x-25x

@nvidia : $4.9 Trillion market cap
P/S: 23x | Fwd P/S 18-20x
P/E: 41x | Fwd P/E 24-28x

It is not that NVDA can't grow, but when a company reaches this size, among the top 10 mega caps, it takes enormous buying inflow to move it (and it will move). $AMD's market cap, by contrast, is tiny for its true potential. Just look at how its Fwd P/S and P/E are collapsing, cheaper than anything else on the market at this size and this growth. And yes, AMD's stock price will also move slower once it gets to $4-$5 Trillion. That is the nature of boomers (money managers) making the call, and they like to overpay for $WMT and $COST (5-8% growth).

Followers and subscribers already know I have been talking about EPYC and Agentic AI since 2023/2024, and that demand for inference will shift the CPU:GPU balance from 1:4-8 to 10-20:1 by 2030, driven by autonomous agents: fleets of agents running 24/7 for enterprises. The GPU was the King from 2023-2025; 2026 and beyond are the time for the Inference Queen!

Over the past decade, FLOPs required for leading models have grown >4x every year. This trend continues as models get smarter (larger, more capable reasoning). Training remains GPU-heavy and NVIDIA-dominant, but it's "only" a 4x annual ramp.

Inference (tokens processed) has exploded 100x in the last two years alone. This is the "always-on" phase: billions of daily users, embedded AI everywhere, and now agentic systems running continuously. Inference has now reached (and in many cases surpassed) training as the dominant driver of new compute spend. Traditional single-shot LLM calls (one prompt → response) were GPU-bound and relatively contained. Agentic AI changes everything.

1. What Agentic AI Is, Why It Supercharges Inference Demand, and Why EPYC Is the Best CPU for It

Agents:
~Plan (break tasks into steps).
~Reason (chain-of-thought, critique, iterate).
~Act (call tools/APIs, query databases, interact with external systems).
~Orchestrate (spawn sub-agents, coordinate multi-agent teams, loop until success).
~Examples: autonomous research agents, multi-step enterprise workflows, self-improving code/debug agents, robotics control loops, or customer-support agents that handle full conversations plus backend actions.

The real explosive J-curve:
~A traditional LLM query might use 50-500 tokens.
~An agentic workflow can use more than 50,000 tokens, up to 1,000x more, because each step involves fresh inference calls, tool results, re-planning, verification...
=> This is persistent, latency-sensitive, and 24/7 (not batchable like training).

Real-world adoption data shows agentic traffic growing 7,851% YoY in 2025, with 79% of organizations already deploying agents and 96% planning expansion in 2026. Multi-agent orchestration grew 327% in just four months in some Fortune 500 deployments.

2. Why This Dramatically Shifts the Balance Toward CPUs

The CPU:GPU ratio is moving to 1:1, and within months to 3-5:1. It used to be 1:4-8 prior to the explosive agentic demand that began in Feb 2026. By 2030, the ratio will change dramatically, to 10:1 or even 20:1. This shift toward much higher CPU intensity (10:1 to 20:1 in CPU cycles, cores, or sockets relative to GPUs, in some interpretations) isn't about replacing GPUs; it's about the fundamental change in workload architecture that agentic AI introduces. Traditional LLM inference was mostly "one-shot" matrix math, ~90%+ GPU-bound parallel token generation. Agentic AI turns it into a complex, multi-step, operating-system-like workflow where the control plane (orchestration) dominates: heavy scheduling, data movement, tool calling, memory/database queries, control flow, and decision logic. GPUs excel at parallel math but are inefficient for irregular, low-latency, branchy, sequential control-plane work.
CPUs become the bottleneck (or enabler): they keep GPUs fed and utilized, route data between agents, enterprise apps, and data lakes, manage state, enforce policies, and handle the "results-focused" management layer.

Current 5th-gen EPYC Turin (Zen 5, up to 192 cores/384 threads) already leads in many orchestration and mixed AI workloads vs. Intel Xeon 6 and even NVIDIA Grace, with strong perf/watt and TCO advantages. But Venice takes it to the next level and is purpose-built for exactly this agentic shift. Venice will power AMD's Helios rack (shown at CES 2026): each tray/node pairs one Venice CPU with four MI455X GPUs plus Pensando/Vulcano networking, liquid-cooled for yotta-scale efficiency. This is explicitly designed for the training plus massive-inference demands Su described.

Conclusion: At CES 2026, Dr. Lisa Su stood on stage and delivered a clear, data-backed message: while training compute continues its impressive ~4x annual growth in FLOPs, inference has already exploded, with 100x more tokens processed in just the last two years, marking a decisive inflection point. To support AI everywhere, from billions of users to autonomous agents solving complex real-world problems, the world will need another 100x increase in total AI compute capacity over the next 4-5 years, pushing beyond 10 yottaflops. She was right because the rise of agentic AI systems (since 2023/2024) that don't just answer questions but plan, reason, use tools, iterate, coordinate with other agents, and act autonomously is fundamentally rewriting the economics and architecture of AI infrastructure. A single traditional query might consume hundreds of tokens on a GPU. A sophisticated agentic workflow can multiply that by 15x, 50x, or even 1,000x through repeated reasoning loops, tool calls, verifications, and multi-agent orchestration. The result is not just more inference volume, but a profound shift in workload character: bursty GPU math now sits inside a much larger, persistent, CPU-intensive control plane.

This drives the dramatic rebalancing of CPU:GPU ratios. Traditional GPU-heavy clusters (often 1:4 to 1:8 CPU sockets to GPUs) are giving way to far more CPU-centric designs. Orchestration, tool processing, state management, data movement, and latency-sensitive decision logic can consume 50-90% of end-to-end latency in agentic flows. To keep expensive GPUs saturated and responsive, hyperscalers and enterprises are deploying significantly more high-core CPU capacity, pushing effective ratios toward 5:1, 10:1, or even higher in CPU cycles and orchestration layers as fleets of autonomous agents run 24/7.

And here is where AMD's positioning shines. Dr. Su didn't just call the trend, she unveiled the hardware blueprint to ride it: the Helios rack-scale platform, powered by Instinct MI455X GPUs for raw acceleration and next-generation EPYC Venice (Zen 6) CPUs as the orchestration engine. With up to 256 cores on an advanced 2nm process, explosive memory bandwidth (1.6 TB/s raw, or 2-3x the memory bandwidth in optimized workload setups), revolutionary dual I/O dies, and massive PCIe scaling, Venice is purpose-built to feed GPUs efficiently while dominating the branchy, memory-intensive, high-concurrency work that agentic AI demands. Paired with Pensando networking and the open ROCm ecosystem, Helios delivers a balanced, energy-efficient, yotta-scale solution that hyperscalers and enterprises can actually deploy at volume.

Dr. Su was right because she saw beyond the GPU headlines to the full-stack reality: the inference explosion is here, agentic AI is the multiplier, and balanced systems, where powerful CPUs like EPYC Venice keep everything humming, will determine who wins at scale. Training will keep growing steadily, but the real infrastructure buildout, the real spend, and the real opportunity in the coming years will be shaped by this new, CPU-augmented inference era. Not Financial Advice!
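The thread's token math (50-500 tokens per one-shot query vs. 50,000+ per agentic workflow) can be checked directly. Under those assumed figures, the multiplier spans roughly two to three orders of magnitude:

```python
def agentic_multiplier(one_shot_tokens, agentic_tokens=50_000):
    """How many times more inference tokens an agentic workflow burns
    compared to a single one-shot LLM query, per the thread's figures."""
    return agentic_tokens / one_shot_tokens

print(agentic_multiplier(500))  # long one-shot query: 100x
print(agentic_multiplier(50))   # short one-shot query: 1,000x
```

This is the arithmetic behind the "up to 1,000x" claim: the multiplier depends entirely on how short the baseline query is assumed to be.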
Mike@MikeLongTerm

$AMD NEEDS $TSM to hit $1,000 a share 🧵

Understanding 2nm Wafer Capacity ✍️

Context: Current EPYC CPU demand is 15-20m units and is projected to double from H2 2026 to H1 2027. Supply is far behind demand, as companies are burning through an entire year's token budget in less than 30 days. In other words, Agentic AI is consuming heavy CPU resources because it operates through continuous, multi-step Reasoning-Action-Observation loops rather than single-pass inference.

TSMC's N2 (2nm) early-phase production, launched in late 2025, has already positioned the company for an accelerated ramp far beyond typical new-node timelines, directly supporting the bullish outlook on scaling from ~25-35k to 60k+ wafers per month and unlocking millions of extra AMD EPYC Venice server CPUs. N2 is TSMC's first GAA/nanosheet node, with volume production kicking off on schedule (and with strong initial yields) at dedicated fabs in Taiwan. Combined output from the first lines started at ~35k-40k WPM (some early 2026 reads put it at 40k-60k aggregate). Yields opened strong at ~70% (with Taiwan sources reporting 75-80% overall, and higher on certain layers), well above historical new-node averages. This stable yield is the key technical enabler for the faster-than-standard ramp, as it allows parallel equipment qualification and minimal downtime.

1. How Many 2nm Fabs Are Online and Planned

TSMC's N2 production is centered in Taiwan (with U.S. contributions later). Fabs here are massive "Gigafab" sites with multiple phases (P1, P2...), each adding tens of thousands of WPM.

Current (April 2026): two primary fabs for 2nm.
~Fab 20 (Hsinchu Baoshan): Phase 1 online since late 2025. Initial ~20k-25k WPM contribution.
~Fab 22 (Kaohsiung): Phase 1 online (first to start volume production in Dec 2025); Phase 2 now in trial production / early ramp.

As of now, 35k-60k WPM of output is expected, and the fabs are fully booked for 2026.

2. TSMC's Aggressive Expansion, Which Will Help $AMD Service Explosive Agentic AI Demand

Fab 20: P3 and P4 under construction (for N2 and below-2nm).
Fab 22: P3 mostly complete, P4/P5 already breaking ground; all five phases targeted fully operational by Q4 2027.
Additional new sites: three more in Tainan plus expansions in central/northern parks (Taichung, Chiayi).

TSMC target: aggregate N2 capacity of 120k-150k WPM (2026). This is a ~3-4x jump from early 2026 levels in roughly 9-12 months, beating my own conservative "1+ year per major phase" typical timeline, thanks to 70-80% yields and capex ($60B+ baseline for 2026, potentially higher). Arizona Fab 21 Phase 3/4 (N2-capable) is targeted for equipment install in 2026 and a production ramp in 2027-2028 (initial ~20k WPM for N2 in some phases). TSMC has committed ~30% of future sub-2nm output to U.S. fabs long-term.

=> Two fabs online, four more fabs/phases scaling toward end of 2026, and roughly ten fabs actively planned over the next 12-24 months. The timeline could compress further with higher demand from AMD.

3. The Question: How Much Can TSMC Ramp Supply to Support AMD's Demand for EPYC Venice in H2 2026?

Venice is AMD's first HPC product on N2 (taped out April 2025, silicon validated with excellent perf/efficiency), using a chiplet design with N2 CCDs + 3nm I/O dies and advanced 3D SoIC packaging. Helios racks (AMD's new rack-scale AI platform) represent committed baseline volume (@OpenAI, $META, $MSFT, $ORCL, LumaAI...), but TSMC's accelerated multi-fab expansion provides substantial headroom for standalone EPYC sales, hyperscaler direct purchases, and further Helios upside. TSMC CEO C.C. Wei expects a "faster ramp in 2026" due to higher yield and parallel fab buildouts on 2nm.

1m additional EPYC Venice units would require an estimated 20-25k wafers. It is possible to ramp up 7-10m additional EPYC Venice units to service Agentic AI demand, or roughly 13-15k additional wafers per month from H2 2026 to H1 2027.
This will enable AMD to:
~Fulfill hyperscaler Venice dense-rack orders faster.
~Capture standalone server CPU share (vs. Intel).
~Support "Agentic AI" pricing power and multi-year data-center growth.

Conclusion: TSMC's accelerated N2 (2nm) ramp, already delivering strong ~70-80% yields from the two primary Gigafabs (Fab 20 in Hsinchu and Fab 22 in Kaohsiung) that came online in late Q4 2025, makes the addition of 7-10 million extra EPYC Venice units for AMD from H2 2026 through H1 2027 not only technically feasible but a highly probable outcome of disciplined execution under explosive AI/HPC demand.

Aggregate N2 output is scaling from the current ~35k-50k WPM baseline to 120k-150k WPM by year-end 2026, with four-plus effective plants/phases contributing and further phases (including Fab 20/22 expansions plus new Tainan sites) coming online in parallel. This delivers tens of thousands of incremental WPM precisely during the 12-month window in question, compressing the "typical" 1+ year per major phase timeline thanks to stable yields and $52-56 billion in 2026 capex.

Each ~76 mm² Zen 6 CCD (12-core/48 MB L3 standard; up to a 32-core Zen 6c dense variant) yields ~450-640 good dies per 300 mm wafer at current-to-mature N2 yields. With Venice's chiplet design (typically 8 CCDs + I/O die + advanced 3D SoIC packaging), blended output across the full SKU mix lands in a realistic 40-60 CPUs per wafer during the ramp. Even conservatively allocating AMD 10-15%+ of incremental N2 capacity, the math easily supports millions of extra units on top of the committed Helios rack baseline.

TSMC isn't just hitting its J-curve; it is breaking out of the historical new-node J-curve ramp, because yields are strong, fabs are being stood up in parallel, and AI urgency is driving prioritization and capex flexibility.
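The dies-per-wafer figure can be cross-checked with the standard die-per-wafer approximation (wafer area over die area, minus an edge-loss term). The 55-75% yield band below is my assumption, chosen to bracket the thread's ~450-640 good-die range; it is not a TSMC-reported number:

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Classic die-per-wafer approximation: usable wafer area minus edge loss."""
    r = wafer_diameter_mm / 2
    return math.floor(
        math.pi * r**2 / die_area_mm2
        - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    )

CCD_AREA = 76.0  # thread's ~76 mm^2 Zen 6 CCD
gross = gross_dies_per_wafer(CCD_AREA)  # ~853 candidate dies per 300 mm wafer
for y in (0.55, 0.75):  # assumed ramp-to-mature yield band
    print(f"yield {y:.0%}: ~{int(gross * y)} good CCDs/wafer")
```

At ~470-640 good CCDs and 8 CCDs per Venice package, the raw ceiling is higher than 40-60 CPUs per wafer; the gap is consistent with the thread's blended SKU mix plus I/O-die and SoIC packaging losses.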
This supply tailwind directly validates and amplifies the multi-year bullish case for AMD, turning 2026-2027 into a revenue and margin inflection point driven by Agentic AI server demand, pricing power, and share gains. Not financial advice!

Man of Tao retweeted
Shay Boloor
Shay Boloor@StockSavvyShay·
KEY LEVELS TO WATCH FOR POPULAR STOCKS • Bullish | $NVDA, $TSLA, $AMD, $MU, $AVGO, $NBIS, $ONDS, $AEHR, $CIFR, $RKLB, $IREN, $AAOI • Hold | $HOOD, $OKLO, $IONQ, $HIMS, $OPEN, $SOFI • Bearish | $PLTR, $ASTS
Man of Tao retweeted
Cardio Arena
Cardio Arena@CadioArena·
5 exercises - full body in one workout to level up your physique.
Man of Tao retweeted
Robert Scoble
Robert Scoble@Scobleizer·
Holy shit. Now everyone will be able to use their @OpenClaws and all the other agentic platforms to build apps on top of X. Here's the secret: build lists. Lists are how you build apps. The pattern: Build a list of your favorite football team. Or whatever you are into. Then ask your AI agents "build an app showing me all the important news about my favorite football team." In minutes you'll have an app. And that's just the beginning. Your agent can build a script about your favorite football team that you can take to places like Google's Notebook LM. Now you have a video, a podcast, a slide deck, a game, a mind map. All about your favorite football team based on real-time news. You can do the same with something like @HeyGen: create an avatar of your favorite football player. Now you will have your favorite football player telling you everything that's happening on the football team. And I could go on for hours about how many things you can build and not even cover a fraction of them. This is huge. Thank you @elonmusk for making it possible to make millions of agentic apps affordably on top of X. Start building!
Xclusiv@Xclusiv

@Scobleizer what's UP!! Did you have anything to do with the change to X API cost for get calls?? I got the following email on Thursday...

Hello X API developers,

We're excited to announce an update to our X API pricing that makes accessing your own data more affordable than ever. Owned Reads are requests made by your own developer app for your own posts, bookmarks, followers, likes, lists & more. Starting Monday, April 20, 2026, these endpoints will be priced at $0.001 per request (equivalently, 1,000 resources for $1):

GET /2/users/{id}/bookmarks
GET /2/users/{id}/blocking
GET /2/users/{id}/muting
GET /2/users/{id}/pinned_lists
GET /2/users/{id}/tweets
GET /2/users/{id}/mentions
GET /2/users/{id}/liked_tweets
GET /2/users/{id}/followers
GET /2/users/{id}/following
GET /2/users/{id}/owned_lists
GET /2/users/{id}/followed_lists
GET /2/users/{id}/list_memberships

This change significantly lowers the cost of common operations such as fetching your own posts, followers, likes, bookmarks, lists, and more.

Additional updates effective Monday, April 20, 2026:

Writes via X API will increase to $0.015 per post (from $0.01). This applies to the main posting endpoint: POST /2/tweets.

Posting a URL via X API will be priced at $0.20 per post, except for summoned replies (which will remain at $0.01).

Following, Likes, and Quote-Posts via API Writes will be removed for all self-serve tiers. This affects the following actions:
POST /2/users/{id}/following (and DELETE for unfollow)
POST /2/users/{id}/likes (and DELETE for unlike)
Quote-posting via POST /2/tweets (when using the quote_tweet_id parameter)

These adjustments reflect our ongoing commitment to supporting the developer community while ensuring sustainable platform operations and helping you build even better experiences on X. For full pricing details, including the complete rate card and updated documentation, visit the X API Pricing page.
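Taking the email's quoted rate card at face value, here is a quick sketch of what a small app's monthly bill would look like under the new pricing. The usage mix (hourly polling of the app's own timeline, a few posts per day) is a hypothetical example, not anything from the email:

```python
# Prices as quoted in the email above (taken at face value).
OWNED_READ = 0.001   # per GET on the listed "Owned Reads" endpoints
POST_WRITE = 0.015   # per POST /2/tweets
URL_POST   = 0.20    # per post containing a URL (non-summoned-reply)

def monthly_cost(reads: int, posts: int, url_posts: int) -> float:
    """Estimated monthly bill for a mix of owned reads and writes."""
    return reads * OWNED_READ + posts * POST_WRITE + url_posts * URL_POST

# e.g. polling your own timeline hourly (~720 reads/month),
# 5 posts/day (~150), one of which per day contains a URL (~30)
print(f"${monthly_cost(reads=720, posts=150, url_posts=30):.2f}")
```

Under these assumptions the URL-post surcharge dominates the bill, which matches the email's framing that reads get cheaper while writes get more expensive.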
