
phil beisel
@pbeisel
x-Apple / x-Rivian (tech team founder) Disruption happens. Optimism ahead. 🚀🇺🇸 My Articles https://t.co/wJtooyvaoc


Digital Optimus: I dig into this with @pbeisel, a 2-hour discussion...


@DBurkland @pbeisel It’s in testing right now. Wide release in a few weeks.




Terafab may be the most essential vertical integration Tesla has ever undertaken, and it is truly non-optional. It will take years to build and will test even Elon's speedrunning abilities to the limit, but that won't stop him from trying.

The breakthrough likely lies in overhauling the facility's cleanroom model. By moving wafers in sealed pods with localized micro-environments, the fab no longer needs a monolithic ultra-clean space. Elon's line about "eating cheeseburgers and smoking cigars" on the fab floor isn't silly; it's the practical reality of a radically simpler, cheaper, faster approach that could finally change the economics of chipmaking.

This is all forced by the brutal "pinch" in chip supply. Tesla must produce on the order of 100–200 million AI chips per year just to saturate its roadmap. That volume powers:

- FSD cars & Robotaxis: tens of millions of vehicles needing AI5 inference for near-perfect autonomy
- Physical Optimus: scaling from thousands today to millions per year, each requiring AI5/AI6-level compute
- Digital Optimus: the new xAI-Tesla software agents for digital/office automation, running massive inference clusters
- Space-based data centers: AI7/Dojo3 orbital compute for GW-scale training and inference beyond Earth limits

AI5 delivers the ~10× leap for vehicles and early robots; AI6 shifts focus to Optimus plus terrestrial data centers; AI7 goes orbital. No external foundry (TSMC, Samsung, etc.) can deliver that scale or timeline, hence the Terafab launch. Without it, the entire robotics + autonomy future hits a brick wall. Terafab isn't optional; it's the only way forward.



Digital Optimus, Optimus, and FSD

What's going on here? A lot. xAI and Tesla's AI team have been working on complementary, partially overlapping AI systems. Tesla AI has focused on vision-based intelligence for both FSD and Optimus, while xAI has focused on building a frontier model (an LLM) aimed at general intelligence.

More recently, xAI has pushed deeper into what has been called Macrohard (aka Digital Optimus). Macrohard applies xAI's intelligence layer, Grok, to human activity in the digital world, essentially operating computers the way a human would. The idea is that the AI can move through existing software environments and perform tasks that previously required direct human interaction. But Macrohard goes beyond simply navigating pixels. It is also about generating outcomes within those environments, producing results (pixels), not just interacting with interfaces. In that sense, Macrohard becomes a quasi vision-based AI system as well.

Elon, effectively the technology head of both companies, sees the convergence. The decision now appears to be to combine efforts and focus each team on its relative strengths. Tesla's vision-based AI team becomes central in integrating this perception stack with xAI's "pure" intelligence model.

The benefits are substantial. First, xAI advances its Digital Optimus concept: an AI capable of driving productivity directly in the digital world. At the same time, Tesla gains a powerful intelligence backbone: a reasoning engine layered on top of its vision systems.

For Optimus, the implications are colossal. The robot is no longer just a physical-world machine driven by perception and action loops; it becomes a reasoning system as well. In other words, Optimus gains both embodiment and intelligence. That combination directly addresses the data patterns I discussed in my article this week. And that is a big deal.

Of course, the impact extends to FSD as well. Many of the "last mile" problems in vehicle autonomy involve nuanced human intent and interaction. A reasoning layer makes those scenarios far more tractable. Talking to your car and having it genuinely understand what you want it to do becomes realistic. Further, Elon has suggested that this combined approach fits within an AI4 inference framework, delivering more intelligence per watt and reducing the need to wait for AI5-scale hardware to solve these larger problems.

All in all, this is significant news. It may shift some timelines, and I suspect it may also explain why version 14.3 (the rumored "reasoning edition") has not yet appeared. It may now be part of this broader combined effort.






@pbeisel Probably more like 160k wafers/month, factoring in yield

The Terafab "Yield Buffer": Why 160k Is the Real Number

Elon just clarified the math on Terafab, and the 60% jump in wafer starts (from 100k to 160k per month) tells a massive story about the reality of 2nm manufacturing.

In my original breakdown, I estimated 100k wafers/month to hit 100 million AI5 chips/year. That assumes a relatively mature yield (60%+). Elon's response, "Probably more like 160k wafers/month, factoring in yield", is a reality check.

The "Bleeding Edge" Tax: Launching a 2nm fab from scratch is historically difficult. By aiming for 160k wafers, Tesla is building in a massive safety margin. If initial yields are lower (closer to 35-40%), they still hit the 100 million chip target.

- Monthly starts: 160,000 wafers
- Annual capacity: 1.92 million wafers
- The goal: 100 million "good" chips
- Required net yield: ~35% (the "launch" yield)
- The upside: if yields hit 65%, output jumps to ~190 million chips/year

The TSMC Benchmark: Matching the Giant. To put 160k wafers/month in perspective, look at TSMC. As of early 2026, TSMC's entire global 2nm capacity (spread across multiple "Gigafabs" in Hsinchu and Kaohsiung) is targeting roughly 100k to 140k wafers per month. By pushing for 160k, Elon is essentially saying that a single Tesla Terafab cluster aims to outproduce the entire world's initial 2nm supply.
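The yield arithmetic above can be sketched in a few lines. Note the gross dies-per-wafer figure is not stated in the post; ~149 is an assumption back-solved from its numbers (1.92M wafers/year at ~35% yield giving ~100M good chips):

```python
def annual_good_chips(wafers_per_month: int,
                      gross_dies_per_wafer: int,
                      yield_rate: float) -> float:
    """Good chips per year = wafer starts x dies per wafer x net yield."""
    return wafers_per_month * 12 * gross_dies_per_wafer * yield_rate

GROSS_DIES = 149  # assumption: back-solved from the post's figures, not stated

launch = annual_good_chips(160_000, GROSS_DIES, 0.35)  # "launch" yield
mature = annual_good_chips(160_000, GROSS_DIES, 0.65)  # mature yield

print(f"launch yield (35%): {launch / 1e6:.0f}M chips/yr")  # ~100M
print(f"mature yield (65%): {mature / 1e6:.0f}M chips/yr")  # ~186M
```

At 35% yield the 160k-starts plan just clears the 100M-chip target; at 65% it lands in the ~190M range the post cites, which is the whole point of the buffer.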





This is so on 🎯 . Jensen Huang, $NVDA






