
phil beisel @pbeisel
Tesla’s forthcoming AI5 uses a half-reticle design, which is crucial for yield. A reticle defines the imaging area of a lithography machine; fitting two chips per exposure effectively doubles the die output per wafer, and smaller dies are hit by fewer defects. This meant the Tesla chip design team had to manage die features carefully, for instance dropping the older ISP (and classic GPU) to make room for more AI cores. By contrast, NVIDIA’s Blackwell fills nearly a full reticle, making it a single-reticle design. If Tesla hits its compute and efficiency targets with AI5 in this half-reticle format, it’s almost like cutting fab requirements in half. And that has a big impact on Terafab, especially if it carries forward to AI6, AI7, and beyond.
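The yield argument can be sketched with the standard dies-per-wafer approximation plus a Poisson defect model. The die areas and defect density below are illustrative assumptions (not Tesla or NVIDIA figures), chosen only to show why halving die area more than doubles good dies per wafer:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Standard approximation: gross dies minus an edge-loss term.
    r = wafer_diameter_mm / 2
    return math.floor(math.pi * r**2 / die_area_mm2
                      - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def die_yield(die_area_mm2, defect_density_per_cm2=0.1):
    # Poisson yield model: Y = exp(-A * D0), with A in cm^2.
    return math.exp(-(die_area_mm2 / 100) * defect_density_per_cm2)

full_reticle = 800   # assumed near-full-reticle die, mm^2 (reticle limit is ~858 mm^2)
half_reticle = 400   # assumed half-reticle die, mm^2

for area in (full_reticle, half_reticle):
    n, y = dies_per_wafer(area), die_yield(area)
    print(f"{area} mm^2: {n} dies/wafer, yield {y:.0%}, good dies ~ {n * y:.0f}")
```

Because the smaller die both packs more copies onto the wafer and survives defects more often, the half-reticle case yields more than twice the good dies in this sketch, which is the "effectively doubles yield" point above.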
phil beisel @pbeisel

Terafab may be the most essential vertical integration Tesla has ever undertaken, and it is truly non-optional. It will take years to build and will test even Elon’s speedrunning abilities to the limit, but that won’t stop him from trying. The breakthrough likely lies in overhauling the facility’s overall cleanroom model: by moving wafers in sealed pods with localized micro-environments, the fab no longer needs a monolithic ultra-clean space. Elon’s line about “eating cheeseburgers and smoking cigars” on the fab floor isn’t silly; it’s the practical reality of a radically simpler, cheaper, faster approach that could finally change the economics of chipmaking.

This is all forced by the brutal “pinch” in chip supply. Tesla must produce on the order of 100–200 billion AI chips per year just to saturate its roadmap. That volume powers:

- FSD cars & Robotaxis: tens of millions of vehicles needing AI5 inference for near-perfect autonomy
- Physical Optimus: scaling from thousands today to millions per year, each requiring AI5/AI6-level compute
- Digital Optimus: the new xAI-Tesla software agents for digital/office automation, running massive inference clusters
- Space-based data centers: AI7/Dojo3 orbital compute for GW-scale training and inference beyond Earth’s limits

AI5 delivers the ~10× leap for vehicles and early robots; AI6 shifts focus to Optimus plus terrestrial data centers; AI7 goes orbital. No external foundry (TSMC, Samsung, etc.) can deliver that scale or timeline, hence the Terafab launch. Without it, the entire robotics and autonomy future hits a brick wall. Terafab isn’t optional; it’s the only way forward.
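The scale claim can be turned into rough wafer-start arithmetic, which is the core of the "no external foundry can deliver this" argument. The good-dies-per-wafer figure below is an illustrative assumption, and the chip count is simply the midpoint of the post's 100–200 billion range:

```python
# Rough fab-capacity sketch; all inputs are illustrative assumptions, not Tesla figures.
good_dies_per_wafer = 96       # assumed good half-reticle dies on a 300 mm wafer
chips_per_year = 150e9         # midpoint of the post's 100-200 billion range

wafers_per_year = chips_per_year / good_dies_per_wafer
wafers_per_month = wafers_per_year / 12

# A large leading-edge fab runs on the order of 100k wafer starts per month,
# so this prints the implied multiple of such fabs.
print(f"~{wafers_per_year:.2e} wafer starts/year (~{wafers_per_month:.2e}/month)")
print(f"~{wafers_per_month / 100_000:.0f}x a 100k-wafer/month fab")
```

Under these assumptions the implied wafer volume dwarfs any single existing fab, which is why the post treats a dedicated Terafab, and a die as small as possible, as non-optional.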

Elon Musk @elonmusk
AI5 will punch far above its weight, because the entire Tesla AI software stack is designed to make maximally effective use of every circuit. We co-designed our AI software and hardware. Bear in mind that AI5, while it can be used for training in data centers, is primarily optimized for AI edge compute in Optimus and Robotaxi. There is still significant room for improvement. In the same half reticle and same process node, we think a single AI6 chip has the potential to match a dual-SoC AI5.
Traube Nuss @rumnusstraube
@WorldlyReviewer @elonmusk @farzyness @pbeisel With a driver present (even if not paying attention), yes. Otherwise, maybe. There are just too many edge cases across millions of vehicles everywhere to let a car drive without someone around to handle them.
Traube Nuss @rumnusstraube
@WorldlyReviewer @elonmusk @farzyness @pbeisel Fleet, yes. Geofenced, and kept in the depot if needed. Operators online and in the field for exceptions. City first responders trained. Clear liability. Also, more auxiliary hardware than consumer cars: washers on cameras, extra communications, etc.
Traube Nuss @rumnusstraube
@WorldlyReviewer @elonmusk @farzyness @pbeisel Driver present greatly simplifies all the issues above. Drivers can handle those exceptions. Large cities won't let cars drive on their own with nobody to call when there is an issue.