phil beisel
@pbeisel
28K posts

x-Apple / x-Rivian (tech team founder) Disruption happens. Optimism ahead. 🚀🇺🇸 My Articles https://t.co/wJtooyvaoc

Silicon Valley · Joined January 2008
750 Following · 26K Followers
DeplorableBryce@Bryce57Hicks·
@pbeisel Very helpful explanation. Thank you (once again). Phil is my go-to source: highly skilled on the technical aspects AND understandable in his presentation.
phil beisel@pbeisel·
Elon says FSD 14.3 is coming. But if you’ve been following along, it was also “two weeks away” a few months ago. That’s drawn a lot of criticism, understandably. Let’s step back and talk about what’s actually going on: engineering reality. I’ve spent years running engineering teams at Apple and Rivian, and what you’re seeing here is not unusual. Not even a little. I’m not here to defend Elon or say communication couldn’t be better. It could. But what’s happening behind the scenes is far more ordinary than people think.

First, understand what kind of company Tesla is. Tesla exposes more of its internal process than most companies; you’re watching how the sausage is made, often in real time. Compare that to Apple. Products appear at a moment in time, fully formed. What you don’t see are the features that slipped, were cut, or quietly postponed to make the deadline. Most companies communicate through layers of marketing at discrete events (e.g., NVIDIA GTC). That may include a CEO keynote, but it’s still tightly controlled. Tesla, largely via Elon, doesn’t. And that creates friction. Most people are used to being in the dining room. With Tesla, you’re watching the sausage get made whether you like it or not. If that makes you uncomfortable, this model will drive you crazy no matter how it’s explained.

Now, about FSD 14.3, the so-called “reasoning” release. My view: when Elon originally referenced it, it was real. It was on a roadmap with a timeline. But then reality hit. Somewhere along the way, engineering discussions likely exposed a fork: ship what’s partially there, or go deeper and “do it right”. That kind of shift happens constantly. Plans change. Timelines slip. This is normal engineering behavior, not dysfunction. The difference is: you’re seeing it. At companies like Apple, those decisions are invisible. Deadlines are protected by cutting scope. At Tesla, you’re watching the scope evolve in real time.

On the technical side, 14.1 and 14.2 were already producing “reasoning tokens,” as Ashok (Tesla AI VP) noted. But producing tokens isn’t the same as using them effectively. 14.3 appears to be where those tokens actually start driving behavior: more human-like decision-making in edge cases. My guess is this is where things got more complicated. The work likely started to overlap with what xAI is doing. At that point, the question becomes: do you ship an interim solution, or integrate a more capable reasoning layer? That’s not a small decision. And it likely has downstream impact, potentially even on Robotaxi timelines, because these same reasoning challenges show up there too.

So the team probably made a call: go deeper, even if it costs time. And here’s the part people underestimate: great engineering teams often convince themselves the extra work is worth it… and that it won’t take that much longer. They’re usually wrong on the timeline. But often right on the outcome. At this stage, FSD isn’t about raw safety (it seems to have nailed that); it’s about behavior. Making decisions feel natural, human, predictable in edge cases. That’s a much harder problem.

So if you’re following Tesla closely, the best thing you can do is understand the process and accept the messiness that comes with it. If you want tightly controlled messaging and polished delivery, companies like Apple exist for that. Tesla is something else entirely. Fire away.
Elon Musk@elonmusk

@DBurkland @pbeisel It’s in testing right now. Wide release in a few weeks.

Bruno's@Brunos35·
Thanks for this Phil; also worth remembering Tesla is pushing into territory that has never really been attempted before. The safety layer of the software is largely solved, as you noted; now the challenge is getting it to behave with human-like judgment. It is somewhat comical that we as a group get disappointed by Elon continually missing timelines, given the complexity of the product. The team is literally creating a product that will be able to drive better than any of us could; crazy to think about. There is no autonomy stack even close to FSD today, and the rate of improvement from here is likely to be exponential.
Quest4Grok@365_Solved·
Good info on Blackwell! I didn't realize they were full reticle. Full-reticle chips are a PITA; stitching can be insane. Yes, raw yield doubles on half reticle, but more importantly, the finished per-die yield is much higher due to the reduced likelihood of imperfections wrecking the chip. Finished yields can be 1.5-3x higher on raw half-reticle dies.
phil beisel@pbeisel·
Tesla’s forthcoming AI5 uses a half-reticle design, which is crucial for yield. A reticle defines the imaging area of a lithography machine; fitting two chips per shot effectively doubles yield. This means the Tesla chip design team had to carefully manage die features, for instance dropping the older ISP (and classic GPU) to make room for more AI cores. By contrast, NVIDIA’s Blackwell fills nearly a full reticle, making it a single-reticle design. If Tesla hits its compute and efficiency targets with AI5 in this half-reticle format, it’s almost like cutting fab requirements in half. And this has a big impact on Terafab, especially if it carries forward for AI6, AI7, etc.
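As a rough illustration of why halving the die area helps yield beyond the simple 2x die count, here is a sketch using the textbook Poisson defect model. The die areas and defect density below are illustrative assumptions, not Tesla or NVIDIA figures:

```python
import math

def die_yield(area_cm2: float, d0: float) -> float:
    """Fraction of dies with zero killer defects (simple Poisson model)."""
    return math.exp(-area_cm2 * d0)

full_reticle = 8.0   # ~full-reticle die area in cm^2 (illustrative)
half_reticle = 4.0   # half-reticle die area (illustrative)
d0 = 0.2             # killer defects per cm^2 (assumed early-ramp value)

y_full = die_yield(full_reticle, d0)   # exp(-1.6), about 0.20
y_half = die_yield(half_reticle, d0)   # exp(-0.8), about 0.45

# Two half-reticle dies fit in one exposure, so compare good dies per shot:
print(f"full-reticle per-die yield: {y_full:.2f}")
print(f"half-reticle per-die yield: {y_half:.2f}")
print(f"good dies per shot: {y_full:.2f} vs {2 * y_half:.2f}")
```

Because yield falls exponentially with die area, the half-reticle die yields more than twice as well per die, which is consistent with the 1.5-3x finished-yield advantage mentioned in the reply above.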
phil beisel@pbeisel

Terafab may be the most essential vertical integration Tesla has ever undertaken, and it is truly non-optional. It will take years to build and will test even Elon’s speedrunning abilities to the limit, but that won’t stop him from trying.

The breakthrough likely lies in overhauling the overall facility’s cleanroom model. By moving wafers in sealed pods with localized micro-environments, the fab no longer needs a monolithic ultra-clean space. Elon’s line about “eating cheeseburgers and smoking cigars” on the fab floor isn’t silly; it’s the practical reality of a radically simpler, cheaper, faster approach that could finally change the economics of chipmaking.

This is all forced by the brutal “pinch” in chip supply. Tesla must produce on the order of 100-200 million AI chips per year just to saturate its roadmap. That volume powers:
- FSD cars & Robotaxis (tens of millions of vehicles needing AI5 inference for near-perfect autonomy)
- Physical Optimus (scaling from thousands today to millions per year, each requiring AI5/AI6-level compute)
- Digital Optimus (the new xAI-Tesla software agents for digital/office automation, running massive inference clusters)
- Space-based data centers (AI7/Dojo3 orbital compute for GW-scale training and inference beyond Earth limits)

AI5 delivers the ~10x leap for vehicles and early robots; AI6 shifts focus to Optimus + terrestrial DCs; AI7 goes orbital. No external foundry (TSMC, Samsung, etc.) can deliver that scale or timeline, hence the Terafab launch. Without it, the entire robotics + autonomy future hits a brick wall. Terafab isn’t optional; it’s the only way forward.
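The scale claim above can be sanity-checked with a toy tally. None of the category volumes below come from the post; they are placeholder assumptions chosen only to illustrate how the listed categories could sum to a nine-figure annual chip count, consistent with the ~100 million AI5 chips/year figure discussed elsewhere in this thread:

```python
# Illustrative tally of where ~100M chips/year could go.
# Every number below is an assumption for illustration, not a Tesla figure.
demand = {
    "FSD cars & Robotaxis":       20_000_000,  # assumed vehicles/year, 1 chip each
    "Optimus robots":              5_000_000,  # assumed units/year at scale
    "Digital Optimus inference":  50_000_000,  # assumed chips/year for agent clusters
    "space-based data centers":   25_000_000,  # assumed orbital-compute chips/year
}
total = sum(demand.values())
print(f"total demand: {total / 1e6:.0f}M chips/year")  # → total demand: 100M chips/year
```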

Luke James Stephens@borisstephens·
Often right on the outcome but not on the timing
phil beisel@pbeisel
[quoting the FSD 14.3 engineering-reality post above]
Martin Easley@Oregonguy40·
@pbeisel Elon seems to have a unique way of communicating, probably one of the reasons TSLA trades at such a high P/E ratio.
phil beisel@pbeisel·
@EBB1974 Some of what I am saying about 14.3 is from the post I made that Elon actually reposted. I am assuming (not necessarily true) that the repost indicates a nod that what I am saying is correct. x.com/pbeisel/status…
phil beisel@pbeisel

Digital Optimus, Optimus, and FSD. What’s going on here? A lot.

xAI and Tesla’s AI team have been working on complementary, partially overlapping AI systems. Tesla AI has focused on vision-based intelligence for both FSD and Optimus, while xAI has focused on building a frontier model (an LLM) aimed at general intelligence. More recently, xAI has pushed deeper into what has been called Macrohard (aka Digital Optimus). Macrohard applies xAI’s intelligence layer, Grok, to human activity in the digital world, essentially operating computers the way a human would. The idea is that the AI can move through existing software environments and perform tasks that previously required direct human interaction.

But Macrohard goes beyond simply navigating pixels. It is also about generating outcomes within those environments: producing results (pixels), not just interacting with interfaces. In that sense, Macrohard becomes a quasi vision-based AI system as well. Elon, effectively the technology head of both companies, sees the convergence. The decision now appears to be to combine efforts and focus each team on its relative strengths. Tesla’s vision-based AI team becomes central in integrating this perception stack with xAI’s “pure” intelligence model.

The benefits are substantial. First, xAI advances its Digital Optimus concept: an AI capable of driving productivity directly in the digital world. At the same time, Tesla gains a powerful intelligence backbone: a reasoning engine layered on top of its vision systems. For Optimus, the implications are colossal. The robot is no longer just a physical-world machine driven by perception and action loops; it becomes a reasoning system as well. In other words, Optimus gains both embodiment and intelligence. That combination directly addresses the data patterns I discussed in my article this week. And that is a big deal.

Of course, the impact extends to FSD as well. Many of the “last mile” problems in vehicle autonomy involve nuanced human intent and interaction. A reasoning layer makes those scenarios far more tractable. Talking to your car and having it genuinely understand what you want it to do becomes realistic. Further, Elon has suggested that this combined approach fits within an AI4 inference framework, delivering more intelligence per watt and reducing the need to wait for AI5-scale hardware to solve these larger problems.

All in all, this is significant news. It may shift some timelines, and I suspect it may also explain why version 14.3 (the rumored “reasoning edition”) has not yet appeared. It may now be part of this broader combined effort.

Eric Bown@EBB1974·
Good post, and everything you say is rational and could be what happened. However, I think there are a few other points to make.

1. Elon said at the time of 14.1 that 14.2 and 14.3 would follow in succession a few weeks apart from each other. I don’t think that he really meant that, and nobody should have taken it seriously. I think he really meant that after 14.1, 14.2 would be the next one up to be focused on, followed by 14.3. There is no way that he really thought there wouldn’t be multiple bug-fix releases in between each of those, just as has been true for every other release of this type in the past. So nothing changed here. It was just his imprecise language, potentially inadvertent, or maybe intentional to put pressure on his team, but he wasn’t reporting to the marketplace a timeline that should have been taken seriously.

2. Outside of that timeline, he later said that what we think of as 14.3 was going to come later, in the Jan/Feb time frame. Again, everyone takes him literally. What he really meant back then was, “we still have work to do and I’m guessing it will take about this long to happen.” It wasn’t a promise as much as a prediction. Whenever he is predicting things that aren’t the NEXT thing to come, he is again using fuzzy math and shouldn’t be taken too seriously.

3. Ashok is much more deliberate about these things, and what he said around the same time was that this stuff will come out in the first quarter of ’26. I think he may have even said that before 14.2 came out, but I forget. To me, that is a much more meaningful prediction than what Elon says, because Elon has a history of either deliberate or inadvertent imprecision. Ashok doesn’t. We are still in the first quarter of ’26, btw.

4. It is entirely possible that none of the stuff that you mentioned as potentially happening actually happened, although I agree that that kind of stuff happens all the time and may be happening here. I just think it is entirely possible that even as far back as October, the realistic 14.3 release window was in the ballpark of what it is likely to become, assuming it really does come in a few weeks.

5. What does Elon’s latest statement mean, and is it different contextually than when he said “a few weeks” back in October? Back then it was a prediction about future phases that they hadn’t reached yet. Today it’s confirmation of the current phase that they are in. That, coupled with Ashok’s past statements, makes me think that he really DOES mean a few weeks this time. Most likely we are going to get a post from one or the other of them within the next two weeks saying something like “14.3 going live next weekend,” and that is when we will know they are about to launch. We shall see.

6. All the handwringing that people do about Elon’s timelines is unfortunate, and of course his track record causes this. Just as Austin robotaxis happening last June, and 14 coming in September, and removing drivers by the end of the year all were a bit late, there was a lot of talk of last-minute disasters or delays for all of them, and they all happened eventually. So too shall 14.3, and then the Chicken Littles will be on to wondering why 14 lite isn’t happening right away, and why aren’t they scaling faster, and the cycle continues.
Owen@WillyB_303·
@pbeisel @brandov great point. doing the hard things takes extreme intentionality
phil beisel@pbeisel·
Cramer asks "Why not do more with more?" Jensen: "Because you are out of imagination". BOOM! If productivity increases like magic, you take advantage of it. Winning companies will. Losers will think 'bottom line'.
Ricardo@Ric_RTP

Jensen Huang just called out every CEO who’s been firing people “because of AI.” Jim Cramer asked him why companies are laying people off if AI is supposed to make everyone MORE productive. Jensen's answer: "For companies with imagination, you will do more with more. For companies where the leadership is just out of ideas, they have nothing else to do. They have no reason to imagine greater than they are. When they have more capability, they don't do more."

Read that again. The man who built the most important tech company on Earth just told you that if your CEO is using AI to cut headcount, it means one thing: They have no imagination. They have no vision for what comes next. They got handed the most powerful tool in human history and their FIRST instinct was to fire people. This is the CEO of NVIDIA, the company whose chips power every AI system on the planet. If anyone on Earth has the right to say "AI replaces workers," it's Jensen Huang. And he said the OPPOSITE. He said every carpenter could become an architect. Every plumber could become an architect. AI elevates capability. It doesn't eliminate it.

But here's where it gets really interesting... During the same interview, Jensen revealed something nobody's talking about: He said AI startups like OpenAI and Anthropic are seeing their revenues increase by one to two billion dollars a WEEK. And he wishes these companies were public so the world could see what he sees. One to two billion per week. That's a $50 to $100 BILLION annualized run rate. For companies that most people think are burning cash and making nothing. The entire Wall Street narrative that "AI companies aren't profitable" might be completely wrong. Jensen sees their numbers. He sees their compute orders. He sees their growth. And he's saying the revenue is real.

So if the money IS real, why are other companies firing people? Because they're not building AI products. They're not creating new revenue streams. They're not using AI to expand into new markets. They're using AI as an EXCUSE to cut costs because they ran out of ideas 3 years ago and need something to tell the board.

Jensen's company added $500 billion in new orders in 5 months. He expects $1 trillion in cumulative revenue through 2027 from just two product lines. That number doesn't include the new chips, systems, or partnerships announced this week. And he's not cutting people. He's hiring. Because when you have imagination, more capability means MORE opportunity. Not less headcount. Meanwhile Salesforce cut thousands. Meta cut thousands. Amazon cut thousands. All blaming "AI efficiency." Jensen's response: You're out of imagination.

He also said something that stuck with me. Cramer asked if he ever thought he'd build a $10 to $20 trillion company while waiting tables at Denny's. His answer: "I was just trying to make it through the shift." Biggest tip he ever got? Two, three dollars. Now he's building tech that increased computing demand by one million times in two years. He announced OpenClaw, which he says is as big as ChatGPT. And he's got 21 months of new business that isn't even counted in the trillion dollar figure yet. When asked how long he plans to keep working? "I'm hoping to die on the job. And I'm not hoping to die anytime soon."

This is a man who believes every single thing he's building. And his message to every CEO using AI to justify layoffs is simple... You're not innovating. You're surrendering. The technology wasn't built to shrink companies. It was built to make them limitless. If your leadership can't see that, the problem isn't AI. It's THEM.

phil beisel@pbeisel·
@jindoe168 Tesla didn’t magically have experience with anything before it jumped in.
jindoe168@jindoe168·
@pbeisel Is this even possible, when TSLA has no experience in chip making?
phil beisel@pbeisel·
The Terafab "Yield Buffer": Why 160k Is the Real Number

Elon just clarified the math on Terafab, and the 60% jump in wafer starts (from 100k to 160k per month) tells a massive story about the reality of 2nm manufacturing. In my original breakdown, I estimated 100k wafers/month to hit 100 million AI5 chips/year. That assumes a relatively mature yield (60%+). Elon’s response, "Probably more like 160k wafers/month, factoring in yield", is a reality check.

The "Bleeding Edge" Tax: Launching a 2nm fab from scratch is historically difficult. By aiming for 160k wafers, Tesla is building in a massive safety margin. If initial yields are lower (closer to 35-40%), they still hit the 100 million chip target.

Monthly starts: 160,000 wafers
Annual capacity: 1.92 million wafers
The goal: 100 million "good" chips
Required net yield: ~35% (the "launch" yield)
The upside: if yields hit 65%, output jumps to ~190 million chips/year

The TSMC Benchmark: Matching the Giant. To put 160k wafers/month in perspective, look at TSMC. As of early 2026, TSMC’s entire global 2nm capacity (spread across multiple "Gigafabs" in Hsinchu and Kaohsiung) is targeting roughly 100k to 140k wafers per month. By pushing for 160k, Elon is essentially saying that a single Tesla Terafab cluster aims to outproduce the entire world’s initial 2nm supply.
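The wafer math above checks out with a quick calculation. The post supplies the wafer starts, the 100M-chip goal, and the two yield cases; the ~150 candidate dies per wafer is my assumption, back-solved to make the numbers consistent:

```python
# Back-of-envelope check of the Terafab numbers.
# dies_per_wafer is an assumption, not a figure from the post.
wafers_per_month = 160_000
wafers_per_year = wafers_per_month * 12   # 1.92M wafers/year
dies_per_wafer = 150                      # assumed AI5 candidates per 300mm wafer

def annual_good_chips(yield_frac: float) -> float:
    """Good chips per year at a given net die yield."""
    return wafers_per_year * dies_per_wafer * yield_frac

print(f"at 35% yield: {annual_good_chips(0.35) / 1e6:.0f}M chips/year")  # ~101M
print(f"at 65% yield: {annual_good_chips(0.65) / 1e6:.0f}M chips/year")  # ~187M
```

At the assumed die count, ~35% net yield lands right on the 100M-chip goal, and 65% gives roughly 190M, matching the ranges in the post.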
Elon Musk@elonmusk

@pbeisel Probably more like 160k wafers/month, factoring in yield

phil beisel@pbeisel·
@rufarogz Gigafactory sounded just about as insane at the time.
Rufaro@rufarogz·
Next level bold! 🙌🏾 Having said that, the Model 3/Y sales aspirations sounded pretty bold too when they were announced. Now we take them for granted. Taking on TSMC at that scale is a different higher level of bold though. TSMC has circa US$115 billion in property, plant & equipment not to mention its unique institutional expertise.
phil beisel@pbeisel
[quoting the Terafab "Yield Buffer" post above]
Owen@WillyB_303·
@pbeisel @brandov a possible counter to the Block way. do i think it's a "sounds nice, but not realistic" answer? potentially. but i'd put the onus on us to be more imaginative.
phil beisel@pbeisel·
@vcn2027 This is true, but ask yourself why companies didn't do this before?
Tim Box@vcn2027·
@pbeisel It's a great opportunity to remove deadwood, or the people hanging around who have zero motivation to adopt new tools and paradigms. The new imaginative ideas can come from new, motivated members with more imaginative minds.
Nikos Nicolaou@Aarchimandrita·
@pbeisel Any chance unsupervised is being stalled in order for the SpaceX merger to have greater chance of passing?
phil beisel@pbeisel·
FSD 14.3. It does exist and is in testing now. My guess: it has been all quiet since 14.2.2.5 because they have been heads-down on this release.
Elon Musk@elonmusk

@DBurkland @pbeisel It’s in testing right now. Wide release in a few weeks.

Jeff Lutz 🔋@thejefflutz·
This is so on 🎯 . Jensen Huang, $NVDA
Ricardo@Ric_RTP

[quoting the full Jensen Huang post above]

phil beisel@pbeisel·
@Homebrewchef Tesla (and Elon) is a different animal. It's refreshing. The downside, of course, is that many people are used to the filter.
Sean Z. Paxton@Homebrewchef·
@pbeisel Fascinating perspective Phil. It’s unique to have a CEO who shares his vision, and a community who can interpret the vision, which when you compare it to industry leaders, is very ambitious. Yet, Tesla is so set in ways most other companies aren’t. The vertical integration