Capybara
1.9K posts

Capybara
@CapybaraTrap
Cheering for Tesla and Joby, dreaming of buying my own home, soon to be forty, and a diehard Golden State Warriors fan

"They say a daughter will put you on a plane, and it's true." Every time my parents traveled abroad, it bothered me that they only ever went on package tours, so I decided we should go somewhere far while they're still relatively young and took them to Italy. My dad, who loves history, marveled at the Colosseum, and my mom, who loves nature, fell for Capri and swam to her heart's content in Positano. My parents get tired when they're out and about too long, so we spent the evenings at our lodging, and over a casual glass of wine they said: "Thank you so much. If not for our daughter, when would we ever get to visit a place like this? They say a daughter will put you on a plane, and that's exactly right!" Back in Korea, they gave me a generous allowance 😎

Leave plenty of comments.. comments alone can get you promoted to full-time employee in no time

NEWS: Tesla is asking a Colorado federal court to throw out a suit alleging that a defect in its Autopilot technology led to a fatal car crash, saying the evidence now shows that Autopilot was not on, and that the driver was drunk well beyond the legal limit. law360.com/productliabili…

🚨BREAKING: $TSLA ROBOTAXI FLEET SURPASSES 475 VEHICLES
• Robotaxi tracker data shows a total of 475 units publicly captured
• Actual vehicles in active operation + testing already exceeds 500 (including non-public / internal fleet)
✅ Breakdown by Location
• Bay Area: 383 vehicles
• Austin: 92 vehicles
• Unsupervised (Austin): 8 vehicles

Ashok Elluswamy, Tesla's AI lead, during a GTC discussion, highlighting the fundamental similarity in AI approaches for self-driving cars and humanoid robots:
- Hierarchical decision making is useful, but it has to be done as part of the same decision-making process as lower-level controls.
- We haven't seen the long tail of humanoid robotics, but Tesla has seen the long tail of self-driving, where high- and low-level decisions have to be jointly made at a pretty high framerate.
- Optimus's architecture is designed in a similar way: there's a hierarchy, but it's all running as part of the same model, and the latencies involved in decision making are well modeled.
- This architecture will scale quite well with humanoid robots.
- The distinction between the decision-making levels is only in the developer's mind. For the model, it's a continuous space of decision making, with dials available to make decisions more fine or coarse.
- Humanoids have more sensor modalities and higher degrees of freedom than self-driving cars, but the fundamental constraints remain the same: you need to make real-time decisions. There's obviously a hierarchy to these control signal outputs, but the lowest frequency cannot be too low, because the safety of the robot cannot depend on things running at very low frequencies.
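To make the "one model, joint hierarchy" idea concrete, here is a minimal toy sketch of what it means for coarse and fine decisions to come out of the same per-tick call, so the high-level choice can never lag far behind the low-level control. All names (`plan_and_control`, `CONTROL_HZ`, the thresholds) are illustrative assumptions, not anything from Tesla's actual stack.

```python
# Toy sketch: one function emits BOTH the coarse decision (which maneuver)
# and the fine decision (continuous acceleration) in a single call, so the
# coarse level runs at the same loop rate as the fine level. Illustrative
# only; thresholds and names are made up.

CONTROL_HZ = 50  # the lowest-frequency output still ticks at the full loop rate

def plan_and_control(observation):
    """Single pass returning every level of the hierarchy at once."""
    # Coarse level: pick a maneuver (the "dial" set coarse).
    maneuver = "yield" if observation["pedestrian_distance_m"] < 10.0 else "cruise"
    # Fine level: continuous actuation, conditioned on the coarse choice,
    # clamped to comfortable accel / hard-braking limits (m/s^2).
    target_speed = 0.0 if maneuver == "yield" else observation["speed_limit_mps"]
    accel = max(min(target_speed - observation["speed_mps"], 2.0), -4.0)
    return {"maneuver": maneuver, "accel_mps2": accel}

def control_loop(observations):
    """Every tick decides the whole hierarchy jointly; no slow outer planner."""
    return [plan_and_control(obs) for obs in observations]
```

The point of the sketch is the absence of a separate slow planner thread: if the coarse decision lived in its own low-frequency loop, safety would hinge on that loop keeping up, which is exactly the failure mode the quote warns against.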

Elon says FSD 14.3 is coming. But if you've been following along, it was also "two weeks away" a few months ago. That's drawn a lot of criticism, understandably. Let's step back and talk about what's actually going on: engineering reality.

I've spent years running engineering teams at Apple and Rivian, and what you're seeing here is not unusual. Not even a little. I'm not here to defend Elon or say communication couldn't be better. It could. But what's happening behind the scenes is far more ordinary than people think.

First, understand what kind of company Tesla is. Tesla exposes more of its internal process than most companies: you're watching how the sausage is made, often in real time. Compare that to Apple. Products appear at a moment in time, fully formed. What you don't see are the features that slipped, were cut, or quietly postponed to make the deadline.

Most companies communicate through layers of marketing at discrete events (e.g., NVIDIA GTC). That may include a CEO keynote, but it's still tightly controlled. Tesla, largely via Elon, doesn't. And that creates friction. Most people are used to being in the dining room. With Tesla, you're watching the sausage get made whether you like it or not. If that makes you uncomfortable, this model will drive you crazy no matter how it's explained.

Now, about FSD 14.3, the so-called "reasoning" release. My view: when Elon originally referenced it, it was real. It was on a roadmap with a timeline. But then reality hit. Somewhere along the way, engineering discussions likely exposed a fork: ship what's partially there, or go deeper and "do it right".

That kind of shift happens constantly. Plans change. Timelines slip. This is normal engineering behavior, not dysfunction. The difference is: you're seeing it. At companies like Apple, those decisions are invisible. Deadlines are protected by cutting scope. At Tesla, you're watching the scope evolve in real time.
On the technical side, 14.1 and 14.2 were already producing "reasoning tokens," as Ashok (Tesla AI VP) noted. But producing tokens isn't the same as using them effectively. 14.3 appears to be where those tokens actually start driving behavior: more human-like decision-making in edge cases.

My guess is this is where things got more complicated. The work likely started to overlap with what xAI is doing. At that point, the question becomes: do you ship an interim solution, or integrate a more capable reasoning layer? That's not a small decision. And it likely has downstream impact, potentially even on Robotaxi timelines, because these same reasoning challenges show up there too.

So the team probably made a call: go deeper, even if it costs time. And here's the part people underestimate: great engineering teams often convince themselves the extra work is worth it… and that it won't take that much longer. They're usually wrong on the timeline. But often right on the outcome.

At this stage, FSD isn't about raw safety (it seems to have nailed that); it's about behavior. Making decisions feel natural, human, predictable in edge cases. That's a much harder problem.

So if you're following Tesla closely, the best thing you can do is understand the process and accept the messiness that comes with it. If you want tightly controlled messaging and polished delivery, companies like Apple exist for that. Tesla is something else entirely. Fire away.
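The gap the post describes, producing reasoning tokens versus actually using them, can be shown with a deliberately tiny toy. This is purely an illustration of that distinction; the function names, the version labels, and the string "tokens" are all made-up stand-ins, not Tesla's implementation.

```python
# Toy illustration (assumption, not Tesla's code): the difference between
# *emitting* reasoning tokens and *conditioning behavior on* them.

def reason(observation):
    """Emit 'reasoning tokens' about the scene (here: plain strings)."""
    tokens = []
    if observation["occluded_crosswalk"]:
        tokens.append("pedestrian_may_emerge")
    return tokens

def policy_producing_only(observation):
    """Like the post's description of 14.1/14.2: tokens produced, then unused."""
    _ = reason(observation)  # computed but discarded
    return "maintain_speed"

def policy_token_conditioned(observation):
    """Like the post's description of 14.3: tokens gate the action."""
    tokens = reason(observation)
    if "pedestrian_may_emerge" in tokens:
        return "slow_down"
    return "maintain_speed"
```

Both policies run the same "reasoning", but only the second one lets it change the output, which is the behavioral difference the post attributes to 14.3.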

@SawyerMerritt @SpaceX SpaceX will ultimately deliver millions of tons to the Moon to build a self-growing city there and same for Mars

