@elonmusk @wholemars @Scobleizer Put that power in a laptop and see what happens. We could literally stream anywhere. Love the advent of live on @x, but my mobile version is a bit wonky (but getting better).
Here is the real reason that Tesla is unreachable for the average auto manufacturer!
That the CEO/CTO & chief architect has a deep understanding of the problem and the solution, from power consumption to compute power to depth of inference, is beyond comprehension.
Thanks @elonmusk for the livestream!
This is true of all legacy automakers. That's part of what makes Tesla different. I'm long on @elonmusk and @Tesla. We own two Teslas and I've owned Tesla stock for years. Sharing customer/shareholder feedback. They can do with it what they want.
It seems obvious:
Enabling existing customers to retrofit their cars will lead to happier, more loyal customers. That should be enough, but there's potentially more justification.
The consumer who buys a new car for $100k+ doesn't seem to be a buy-and-hold consumer.
So what Tesla would be doing is giving buyers who can afford a $40k–$60k used car a chance to get into a Model S and become a champion for life.
Additionally, Tesla's growth trajectory seems to ride on the growing migration from ICE to EV, which means plenty of net-new buyers coming. There doesn't seem to be much, if any, risk of cannibalizing new-car sales.
*btw, that train of thinking (reduce customer happiness to sell more cars) is how legacy automakers became legacy automakers*
That is incredible. What's unfortunate is that @Tesla is not retrofitting certain model years with upgrades like mono-cam to tri-cam, or B-pillar cams for 2016-and-earlier Model S. A $130k car can't be upgraded. The core value proposition of Tesla is over-the-air updates so the car gets better over time, but if the hardware can't be upgraded it doesn't seem much better than legacy automakers.
An accurate assessment.
What is also mindblowing is that the inference compute power needed for 8 cameras running at 36FPS is only about 100W on the Tesla-designed AI computer. This puny amount of power is enough to achieve superhuman driving!
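A quick back-of-the-envelope on those figures (camera count, frame rate, and power draw are the numbers quoted above; the per-frame energy is my own derivation, not an official Tesla spec):

```python
# Energy budget for the quoted setup: 8 cameras at 36 fps on a ~100 W computer.
cameras = 8
fps = 36
power_w = 100.0

frames_per_second = cameras * fps           # total frames processed each second
joules_per_frame = power_w / frames_per_second

print(f"{frames_per_second} frames/s")      # → 288 frames/s
print(f"{joules_per_frame:.2f} J/frame")    # → 0.35 J/frame
```

Roughly a third of a joule of inference energy per camera frame, which is what makes a sub-phone-charger power budget plausible for this workload.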
It makes a big difference that we run inference at int8, which is far more power-efficient than fp16. This requires us to do very difficult quantization-aware training at fp16 in order to infer at the lower resolution of int8.
But think about that for a minute: int8 only gives you 256 discrete values (−128 to 127), and yet the car can still understand the immense complexity of reality well enough to drive!
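The "train at fp16, infer at int8" idea above is usually done with fake quantization: during training, the forward pass rounds values onto the int8 grid so the network learns weights that survive the precision loss. Here is a minimal NumPy sketch of a symmetric, per-tensor scheme (my own illustration of the general technique, not Tesla's implementation):

```python
import numpy as np

def fake_quantize(x: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Simulate int8 quantization inside a float training pass.

    Symmetric, per-tensor: one scale maps floats onto the signed
    integer grid; values are rounded, clipped, and mapped back so the
    rest of the network still computes in floating point.
    """
    qmax = 2 ** (num_bits - 1) - 1                      # 127 for int8
    scale = np.abs(x).max() / qmax                      # largest value lands on the grid edge
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)   # integers in [-128, 127]
    return q * scale                                    # dequantize back to float

# Round trip introduces only a small, bounded error (at most scale/2 per value)
w = np.array([0.02, -1.5, 0.73, 1.27], dtype=np.float32)
w_q = fake_quantize(w)
print(np.max(np.abs(w - w_q)))
```

At deploy time the same scale is kept, the rounding result is stored as true int8, and the multiply-accumulates run on integer hardware, which is where the power savings over fp16 come from.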
Same caveats here: reaching superhuman driving with AI requires billions of dollars per year of training compute and data storage, as well as a vast number of miles driven.
Tesla also has over 4 million cars on the road capable of training the AI. In a few years, we will have roughly 10 million.
Our world changed tonight.
In 10 years we will look back at the first public demo of a robot that learned to move around the world by watching only videos.
This is a paradigm shift in how software is built.
At one point @elonmusk took over because the AI made a mistake.
He said the fix is to feed it more videos.
Multimodal AIs are here. At full scale.
This speeds up the humanoid robot timeline for me. Imagine showing your robot how to make grandma's recipe. From then on, it can make it every night if you want.
Cameras just had a paradigm shift.
Every recipient of a resting #beagle chin will be filled with a sensation of warm contentment within 11.4 minutes. #beaglefacts
📷 @CoachRobynR / Twitter
In the life of a bee, mistakes can be costly. So, with a brain the size of a sesame seed, how do they make such quick and accurate decisions? elifesciences.org/digests/86176/…
A kinase that controls systemic growth in Drosophila can be activated by ribosomal and transfer RNAs released by gut microbiota. elifesciences.org/articles/76584…
Thanks to the HDBI for organising another great consortium meeting at Cambridge this week. Great presentations, great conversations and a great location