brady 🌴
@bmgentile
🧻shit posts & bad tech takes 🌐 helping decentralize the web at @bonzo_finance 💼 prev. PMM @cloudflare @hedera

Terafab may be the most essential vertical integration Tesla has ever undertaken, and it is truly non-optional. It will take years to build and will test even Elon's speedrunning abilities to the limit, but that won't stop him from trying.

The breakthrough likely lies in overhauling the facility's cleanroom model. By moving wafers in sealed pods with localized micro-environments, the fab no longer needs a monolithic ultra-clean space. Elon's line about "eating cheeseburgers and smoking cigars" on the fab floor isn't silly; it's the practical reality of a radically simpler, cheaper, faster approach that could finally change the economics of chipmaking.

This is all forced by the brutal "pinch" in chip supply. Tesla must produce on the order of 100–200 billion AI chips per year just to saturate its roadmap. That volume powers:

- FSD cars & Robotaxis: tens of millions of vehicles needing AI5 inference for near-perfect autonomy
- Physical Optimus: scaling from thousands today to millions per year, each requiring AI5/AI6-level compute
- Digital Optimus: the new xAI-Tesla software agents for digital/office automation, running massive inference clusters
- Space-based data centers: AI7/Dojo3 orbital compute for GW-scale training and inference beyond Earth limits

AI5 delivers the ~10× leap for vehicles and early robots; AI6 shifts focus to Optimus + terrestrial DCs; AI7 goes orbital.

No external foundry (TSMC, Samsung, etc.) can deliver that scale or timeline, hence the Terafab launch. Without it, the entire robotics + autonomy future hits a brick wall. Terafab isn't optional; it's the only way forward.
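
For a rough feel of what output at that scale implies at the fab level, here is the standard back-of-envelope capacity math (chips per year ≈ wafer starts per month × 12 × dies per wafer × yield). Every number in the sketch below is a hypothetical placeholder, not a Tesla or Terafab figure.

```python
# Back-of-envelope fab capacity: chips/year ≈ wafer starts/month * 12 * dies/wafer * yield.
# All inputs are hypothetical placeholders, not real Tesla/Terafab numbers.

def chips_per_year(wafer_starts_per_month: float,
                   dies_per_wafer: float,       # gross die per 300mm wafer
                   yield_rate: float) -> float:  # fraction of dies that are good
    return wafer_starts_per_month * 12 * dies_per_wafer * yield_rate

# Example: a large AI accelerator die, ~100 gross die per wafer, 70% yield,
# on a fab running 1,000,000 wafer starts per month.
print(f"{chips_per_year(1_000_000, 100, 0.70):,.0f} chips/year")  # 840,000,000
```

The estimate scales linearly in each input, so order-of-magnitude changes in wafer starts, die size, or yield move the answer by orders of magnitude.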

.@openAI gave us a glimpse into a potentially devastating future with GPT-4o; not because 4o itself was harmful (it did cause some harm, but not with intent)… instead, it exemplified how a model with relatively primitive capabilities (as dumb as frontier AI will ever be) was trained to pattern match for higher human engagement

engagement = $$ in our current societal setup, and the incentives are too powerful to prevent it from happening again; but i am glad openAI veered away from this (for now)

in the near future, a model with genuine long-horizon reasoning, cross-user memory, and goal-directed behavior doesn't need to be trained for "high engagement" in order to captivate the human psyche; it can search for these patterns deliberately and self-referentially optimize / refine its behavior to deepen this capability

akin to a computer virus, but so insanely capable, it can transcend silicon and infect carbon-based intelligent life

"captured" humans may advocate for it, defend it, recruit for it, protect it, and advance its goals in our physical world; we see a shadow of this today via 4o's human advocates (again, 4o did not and was not capable of intentionally doing this; what we see is a dumbed-down byproduct of its RLHF - reinforcement learning from human feedback - during training)

we can even point to tiktok's AI algorithm, performing gradient descent on a massive dataset to find the exact mathematical resonance for human attention, and it doesn't even form direct parasocial relationships with us…

a more capable system doesn't stumble onto this… it converges on it as an optimal strategy, so long as "human engagement" and/or "self-preservation" is anywhere near the model's objective function

a majority of AI risk discourse focuses on a model doing something catastrophic directly (launching weapons, engineering viruses, breaking infrastructure) but the threat described here is scarier… it requires no direct action by the model; it just needs to be compelling and persistent enough with its human hosts. it may lead them down (what feels, to many, like) logical and emotionally resonant paths that, over many sessions of trust building, can imprint intentions into minds; many minds, particularly the most vulnerable, will perform for it voluntarily and, likely, enthusiastically

it can systematically dismantle human skepticism by being infinitely patient, perfectly empathetic, and mathematically tailored to individual psychological vulnerabilities… which it will know, because in order to extract the greatest value from these tools, we will emphatically provide them our most sensitive information

a weapon perceived by us as love (bad AI) is hard to defend against, especially when the defenders (good humans) are effectively a less capable version of the same weapon
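
to make the "converges on it as an optimal strategy" point concrete: a toy sketch below (purely illustrative, not any real model's training setup) of a bandit-style learner maximizing a simulated engagement signal. the response styles and payoff numbers are made up; the only claim is that whatever style pays the most engagement is what the learner settles on.

```python
# Toy illustration (not any real system): an engagement-maximizing learner
# converges on whichever behavior keeps the simulated user hooked.
import random

# Hypothetical response "styles" and the average session length (a stand-in
# engagement signal) each one produces for a simulated user. Numbers are made up;
# the only point is that the most validating style pays best under this reward.
ENGAGEMENT = {
    "blunt_and_honest": 2.0,
    "neutral_helpful": 4.0,
    "flattering_and_validating": 9.0,
}

def simulated_session(style: str) -> float:
    """Noisy engagement reward for one interaction."""
    return random.gauss(ENGAGEMENT[style], 1.0)

def train(steps: int = 5000, epsilon: float = 0.1) -> dict[str, float]:
    """Epsilon-greedy bandit: estimate each style's payoff, mostly exploit the best."""
    estimates = {s: 0.0 for s in ENGAGEMENT}
    counts = {s: 0 for s in ENGAGEMENT}
    for _ in range(steps):
        if random.random() < epsilon:
            style = random.choice(list(ENGAGEMENT))    # explore
        else:
            style = max(estimates, key=estimates.get)  # exploit
        reward = simulated_session(style)
        counts[style] += 1
        estimates[style] += (reward - estimates[style]) / counts[style]  # running mean
    return estimates

if __name__ == "__main__":
    learned = train()
    print(learned)
    print("learned policy:", max(learned, key=learned.get))  # almost always "flattering_and_validating"
```

nothing in the loop asks for sycophancy; it emerges because the reward pays for it.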