Ryan Wang 🇹🇼
@ryanwang

6K posts

An individual investor from Taiwan focused on tech, AI, autonomy, and robotics.

Taipei, Taiwan · Joined May 2018
1.6K Following · 1.2K Followers
Ryan Wang 🇹🇼
Ryan Wang 🇹🇼@ryanwang·
Take your eyes off Tesla's stock price for a moment and let's look at the so-called "Tesla killers." Xiaomi's share price has been cut in half since its peak last summer. Since the second half of last year, I've repeatedly pointed out that Xiaomi's car business in China has already begun facing significant public trust issues. While its traditional smartphone business is under pressure, the auto business was originally expected to be a bright spot for Xiaomi. However, its development has recently started to encounter real challenges as well.
[image]
Ryan Wang 🇹🇼@ryanwang

Robert is right! Given the frequent road accidents involving the Xiaomi SU7 model (mockingly referred to in China as the "road greening belt killer") and issues with false advertising, Xiaomi is currently facing an increasingly severe trust crisis on the Chinese internet.

0 replies · 0 reposts · 1 like · 64 views
Ryan Wang 🇹🇼 reposted
Tesla Europe, Middle East & Africa
The future of mobility has arrived. FSD Supervised has been approved in the Netherlands 🇳🇱 and will begin rolling out in the country shortly! Trained on billions of kilometers of real-world driving data, it can drive you almost anywhere under your supervision – from residential roads to city streets and highways. No other vehicle can do this. We're excited to bring FSD Supervised to more European countries soon.
759 replies · 1.9K reposts · 8.3K likes · 6.8M views
Ryan Wang 🇹🇼 reposted
tony
tony@tonyhua64243679·
[image]
Uraidla - Summertown, South Australia 🇦🇺
3 replies · 0 reposts · 2 likes · 766 views
Pugla | 🇯🇵 FSDはよ!
To all EU Tesla owners refreshing your feeds for the RDW approval today 🇪🇺🇳🇱 You are not the only ones holding your breath! 🇯🇵 Did you know? Japan's FSD release is completely dependent on your RDW approval. Since Japan and the EU share the UN-R171 (WP.29) framework, the moment you get the green light, the final regulatory domino falls for Japan too! 🤝 Actually, our Japanese RHD HW4 vehicles just secretly downloaded FSD v14.2.2.5 in the background yesterday. We are literally just waiting for YOUR unlock key! 🔑 Let's make history today. Sending massive support from Japan! ⚡️ #Tesla #FSD #RDW
41 replies · 52 reposts · 704 likes · 42.9K views
Ryan Wang 🇹🇼 reposted
jimmah
jimmah@jamesdouma·
@PeymanAbedirad @OG_Yogi @ICannot_Enough @TheCTStud @DavidMoss Yes. It's the first FSD version that adds reasoning. Reasoning can address issues that are otherwise very hard to solve. FSD is already superhuman in its perception and reflexes. Reasoning is a way to give it superhuman judgement as well.
17 replies · 46 reposts · 647 likes · 53.9K views
New York Doc
New York Doc@YorkDoc·
@tavi_chocochip @ryanwang Things evolve on the AI front so fast… you can’t blame him. 6 months ago things were different. Look at how Claude models are improving
1 reply · 0 reposts · 0 likes · 54 views
Ryan Wang 🇹🇼
Ryan Wang 🇹🇼@ryanwang·
The release of FSD v14.3 and discussions from insiders have given me a clearer picture of why Tesla is not rushing to deploy a 10x larger parameter model with version 14. Under the hardware constraints of existing HW4, Tesla’s engineering team must make a trade-off between MPI (Miles Per Incident, a safety metric) and inference latency. By thoroughly overhauling the underlying technical architecture—including a full MLIR rewrite of the compiler and runtime, along with upgrades to RL training—they are attempting to shift the entire “autonomous driving efficiency curve” outward, rather than simply moving along the existing curve.

Increasing model size typically improves MPI (theoretically making the system safer, with fewer incidents). However, on fixed hardware, a larger model often increases inference latency (slower response). If latency rises too much, even if MPI gets better, overall real-world safety may actually decline—because delayed reactions can allow small errors to compound into serious problems. Simply forcing a large model would likely land the system at “higher MPI but significantly higher latency,” which in practice would be a poor trade-off.

The essence of v14.3 is to achieve an “outward shift of the curve.” It is not just about making the model bigger or smaller. Instead, through the MLIR reconstruction of the entire compiler and runtime, Tesla enables the same HW4 hardware to deliver higher performance at lower latency. At the same time, they use RL to optimize for hard examples, improving MPI without a noticeable increase in latency. This effectively pushes the entire “MPI–latency efficiency curve” up and to the right—achieving a better trade-off.
Elon Musk@elonmusk

@Chansoo Our rate of advancement with the small model has been so fast that the large model has not yet caught up. V15 will be the large model.

43 replies · 54 reposts · 615 likes · 75.3K views
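The MPI-versus-latency trade-off described above can be sketched as a toy model. Every constant and functional form below is a made-up illustration of the argument, not Tesla data:

```python
# Toy model of the MPI-latency trade-off. All numbers are
# hypothetical illustrations, not measurements from any real system.

def mpi(params_b: float) -> float:
    """Miles Per Incident: assume it grows with model size (in billions of params)."""
    return 10_000 * params_b ** 0.5

def latency_ms(params_b: float, runtime_speedup: float = 1.0) -> float:
    """Inference latency: grows with model size, shrinks with a faster runtime."""
    return 20 * params_b / runtime_speedup

def effective_safety(params_b: float, runtime_speedup: float = 1.0) -> float:
    """Net safety: MPI discounted by a latency penalty.
    Past an (arbitrary) 50 ms budget, slower reactions erode the MPI gains."""
    penalty = max(0.0, latency_ms(params_b, runtime_speedup) - 50) * 0.02
    return mpi(params_b) / (1 + penalty)

# Moving along the curve: a 10x larger model on the same runtime.
small = effective_safety(1.0)        # ~10,000: no latency penalty
large = effective_safety(10.0)       # ~7,900: higher MPI eaten by latency

# Shifting the curve outward: the same large model on a 2x faster runtime
# (standing in for the MLIR compiler/runtime rewrite in the thread above).
large_fast = effective_safety(10.0, runtime_speedup=2.0)   # ~15,800
```

In this sketch, the 10x model on the old runtime comes out worse than the small one once the latency penalty bites, while doubling runtime efficiency lets the same large model pull ahead, which is the "outward shift of the curve" being described.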
Ryan Wang 🇹🇼
Ryan Wang 🇹🇼@ryanwang·
@DaySamual39307 No, they're optimizing FSD with a much better underlying architecture. It's like replacing your kitchen's old wood-burning stove with a microwave.
1 reply · 0 reposts · 1 like · 121 views
Reality is REAL
Reality is REAL@DaySamual39307·
@ryanwang Is it fair to say they are trying to slim down the software so it can operate within a brain that’s too small? Metaphorically speaking
1 reply · 0 reposts · 0 likes · 133 views
Mike P
Mike P@mikepat711·
When Trump’s term is over someone should make a coffee table book of all of his Truth Social bangers
[image]
10 replies · 2 reposts · 55 likes · 1.4K views
Jan Woldstad
Jan Woldstad@JanWoldstad·
@ryanwang "insiders" - so the rest of us is relegated to read posts on X.
1 reply · 0 reposts · 1 like · 509 views
Ryan Wang 🇹🇼 reposted
Cern Basher
Cern Basher@CernBasher·
Tesla FSD v14.3: The Removal of a Bottleneck

Most people looking at FSD v14.3 see a familiar story: incremental improvement. A bit faster, a bit smoother, a bit more refined. The headline number - roughly 20% faster reaction time - sounds like a solid upgrade, but nothing revolutionary. That interpretation misses the point entirely. v14.3 is not about improving the model. It’s about replacing the system underneath the model.

To understand why this matters, you have to separate two parts of Tesla’s AI stack. First, there is the training environment. This is where Tesla uses massive compute clusters to build increasingly powerful neural networks. In this environment, the models can be as large and as sophisticated as Tesla wants. Second, there is the runtime environment inside the car. This is where those models actually have to operate - in real time, under strict constraints of compute, memory, and latency.

Historically, the gap between these two worlds has been a major constraint. Tesla could train a highly capable model on the server side, but when it came time to deploy that model into the vehicle, compromises were unavoidable. The model had to be compressed, simplified, and optimized to fit within the limitations of the vehicle hardware. In the process, some of its capability was inevitably lost. The result was not a lack of intelligence, but a bottleneck in how that intelligence was delivered.

With v14.3, Tesla rebuilt both the compiler and the runtime from the ground up using MLIR (Multi-Level Intermediate Representation). The compiler is responsible for taking a trained model and translating it into a form that can run efficiently on the vehicle. The runtime is responsible for executing that model in real time inside the car. By rewriting both layers, Tesla has fundamentally improved how models are converted and how they are executed. This is why the improvements show up not just in raw speed, but in qualitative behavior.

Early testers are reporting smoother responses, more natural decisions, and a noticeable increase in responsiveness. These are not just signs of a better model - they are signs of a better system delivering that model.

For the past several versions - v12 through v14 - progress was largely driven by improving the model itself. But the underlying inference framework remained largely the same. That meant progress was increasingly constrained. Even as the model improved, the system responsible for running it became the limiting factor. So, v14.3 marks a shift in approach. Instead of continuing to push only on model performance, Tesla upgraded the entire stack. The focus is no longer just on how smart the model is, but on how efficiently that intelligence can be translated and executed in the real world.

Elon Musk has referred to this kind of change as a “final piece of the puzzle.” That phrasing can be misleading if interpreted as an endpoint. In reality, this is a reset. By replacing the underlying system, Tesla has removed a key constraint that was limiting future progress. The implication is not that FSD is complete, but that future versions - v15, v16, and beyond - can advance much more rapidly and with fewer compromises. In practical terms, this means larger, more capable models can be deployed more effectively. It means improvements made in training are more likely to carry through to real-world performance in the vehicle. And it means iteration cycles can accelerate.

One of the more underappreciated aspects of this change is its potential impact on existing vehicles, particularly those running HW3. The new MLIR-based system is designed to take better advantage of available hardware through techniques like quantization, operator fusion, and heterogeneous optimization. In simple terms, it allows Tesla to extract more performance from the same physical chips. A potential “v14 Lite” for HW3 vehicles: with a more efficient runtime, older hardware may be able to run more advanced capabilities than previously thought possible.

So, the real story here is that Tesla has addressed a structural limitation in its AI system. It has improved the way intelligence is packaged, delivered, and executed. This is not just an upgrade. It is the removal of a bottleneck. v14.3 should not be viewed as the culmination of Tesla’s FSD efforts. The visible changes today may seem incremental. The invisible changes beneath them are anything but. Tesla did not just make the system faster. It made it ready for what comes next.
Elon Musk@elonmusk

Tesla V14.3 self-driving review. The point releases will bring polish. V15 will far exceed human levels of safety, even in completely unsupervised and complex situations.

139 replies · 227 reposts · 1.5K likes · 149.1K views
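Of the runtime techniques named in the post above, quantization is the easiest to sketch. What follows is a generic symmetric int8 scheme, a textbook illustration and not Tesla's actual implementation:

```python
# Minimal symmetric int8 quantization sketch: store weights in 8 bits
# (4x smaller than float32) and dequantize on the fly. Illustrative only.

def quantize(weights: list[float]) -> tuple[list[int], float]:
    """Map floats onto int8 range [-127, 127] with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

w = [0.8, -1.27, 0.003, 0.51]
q, scale = quantize(w)
w_approx = dequantize(q, scale)

# Each recovered weight is within half a quantization step of the original.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(w, w_approx))
```

The trade the post alludes to is visible even here: memory and bandwidth drop by 4x at the cost of a bounded rounding error per weight, which is why a rebuilt compiler/runtime can squeeze more model out of the same chip.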
Sid San
Sid San@drsidsan·
@tavi_chocochip @ryanwang Average users may not appreciate the difference of the RL rewrite yet, but internally it must have been a huge task and a new foundation to build on. The full weight of inference is yet to show up in this stack
1 reply · 0 reposts · 1 like · 65 views
Ryan Wang 🇹🇼
Ryan Wang 🇹🇼@ryanwang·
@tavi_chocochip It’s like you’re putting together a massive jigsaw puzzle. The difference in one key piece determines whether you can smoothly complete the rest of the puzzle afterward.
4 replies · 1 repost · 15 likes · 1.3K views
𝕋𝕒𝕧𝕚
𝕋𝕒𝕧𝕚@tavi_chocochip·
@ryanwang If the situation is as described, obv. the Tesla AI engineers were fully aware of it all this time; the alternative is they're incompetent. So why did Elon keep pushing the narrative of the 10x parameter model (incl. reasoning, "last piece of the puzzle") in 14.3 for months?
4 replies · 0 reposts · 11 likes · 1.6K views
Jay Kess
Jay Kess@JayBarlowBot·
@ryanwang Did you speak with Tesla engineers or this is a discussion you found online?
1 reply · 0 reposts · 0 likes · 207 views
ck
ck@OptimusUpRyan69·
@ryanwang So their MO for a while has been safety first and then tuning for comfort in point releases. That has been a balancing act and I wonder if this rewrite changes that rhythm.
1 reply · 0 reposts · 5 likes · 1.1K views