Built Different Certified Elite

775 posts


@KoncreteOshare

Love/$ is a Musk $TSLA

Joined November 2025
31 Following · 32 Followers
Josh “Pappy” Hazel@JEHazel75·
Had to take a break from @ChatCharge prep and go pick up this beauty! Got it home and immediately packed it to the gills with event materials ready to head out first thing in the morning! Fingers crossed no rock chips before we get it back and have PPF applied! This baby is going to look good in black and white photos! @chattybird0306
Amanda Hazel@chattybird0306·
We pick up the new Model S Plaid tomorrow!! I’m so excited and can’t wait to take it on its first road trip to Chattanooga! @JEHazel75 @ChatCharge
Built Different Certified Elite retweeted
phil beisel@pbeisel·
Digital Optimus

Digital Optimus could take several forms. At the very least, it will exist as software installed on a Mac or Windows machine. As such, it will have access to screen pixels (output) and be able to drive the keyboard and mouse (input).

It could run in a stand-alone mode using your computer's local inference compute. Alternatively, that compute could be offloaded to a Tesla AI4 processor (and later AI5). This offloading could occur in several ways, but most likely through a network connection to available AI4 compute. In a simple case, if your Tesla vehicle is parked in your garage, your home computer could connect over Wi-Fi to the vehicle's compute, specifically its two AI4 chips.

Because the model is network-based, Digital Optimus could just as easily connect to any available compute node. That might be another user's vehicle parked in their garage, any parked Tesla added to the network, or a cluster of AI4 compute blocks located at a Supercharger site. This form of remote inference would allow Digital Optimus to seamlessly select the best available compute node.

Tesla/xAI could also produce a dedicated compute device, similar in concept to the NVIDIA DGX Spark. If such a device were built, it would simply join the network as another inference node. When located locally, it could even connect directly to a PC to minimize latency.

In effect, Elon's idea implies that AI4 inference compute could function as a distributed network of compute devices. Users who own AI4 hardware (such as a Tesla vehicle) might even be compensated for contributing their compute capacity.

Finally, this distributed inference model could be the most energy-efficient approach. By spreading workloads across many nodes, it distributes power consumption and delivers intelligence at the lowest cost per watt. In doing so, it breaks the centralized data-center power logjam.
Elon Musk@elonmusk

Oh and it works in all AI4-equipped cars, so your car can do office work for you when not driving. We’re also deploying millions of dedicated Digital Optimus units in the field at Superchargers where we have ~7 gigawatts of available power.

Built Different Certified Elite retweeted
Herbert Ong@herbertong·
Woah! Take a look at what Tesla’s Principal Engineer for the CyberCab just said
Eric@EricETesla

@SecDuffy, thank you and your team for meeting with us this week at the AV Safety Forum. Innovative, American-made AVs, like Cybercab, will vastly improve roadway safety, and we look forward to working with you on launching them at scale this year.

Built Different Certified Elite retweeted
AleXandra Merz 🇺🇲@TeslaBoomerMama·
Holy smokes, another sign 🔥 ARKX (ARK Space & Defense Innovation ETF) bought $TSLA for the first time!!!!!!!!! I posted their portfolio in the comments this morning, before they purchased close to $15M of $TSLA shares, instantly making it the #19 position out of 35.