



Alex Chekholko

@RHAlexander
Linux sysadmin, retweet usually means I want to be able to find the post later



















Together with RDW, we have officially completed the final vehicle testing phase for Full Self-Driving (Supervised) and have submitted all documentation required for the UN R-171 approval + Article 39 exemptions. The RDW team is now reviewing the documentation and test results package internally. They have communicated an expected Netherlands approval date of 4/10, shifted from 3/20 previously, and we look forward to the successful completion of this cooperation.

Following the Netherlands' approval, other European countries will be able to recognize it nationally. We are anticipating a possible EU-wide approval during the summer.

Over the past 18 months, this approval effort has involved an intense series of documentation, development, testing, research & audits, including but certainly not limited to:

– 1,600,000+ km of FSD (Supervised) testing on EU roads
– 13,000+ customer sales ride-alongs
– 4,500+ track test scenario executions
– Thousands of pages of written documentation covering 400+ compliance requirements
– Dozens of research studies into safety performance/results

We're extremely proud of the work conducted with the RDW team up to this point. We very much look forward to the approval in April, and to sharing FSD (Supervised) with our patient EU customers!

Zelda But It's A Modern Game (Fire Temple) 🗣️🎙️ "Acquiring the Boss key"


.@dylan522p lays out how we know the hard upper bound on how much compute can be produced annually by 2030: around 200 GW/year. That's a crazy number (there's about 20 GW of AI compute deployed in the world right now), but it's nowhere near enough to satisfy Sam/Elon/Dario/Demis's ambitions.

Lots of things in the supply chain can be scaled up over 4 years, including things that other people think are bottlenecks, like datacenter power or fab clean-room space. But the thing that's inflexible over that timeline is the number of EUV tools. Dylan forecasts that production of ASML's EUV tools will scale from 60 per year now to about 100 per year by the end of the decade, which means something like 700 total machines running in 2030.

For a fab to make a GW worth of the Rubin chips that NVIDIA is deploying later this year, it needs to make 55,000 3nm wafers, 6,000 5nm wafers, and 170,000 memory wafers. Each 3nm wafer needs about 20 EUV passes, so that's about 1.1 million passes per GW; adding on 5nm and memory, you need two million passes. Each tool can do 75 passes per hour, so with 90% uptime that's around 600k passes per year, meaning a single machine can make less than a third of a GW in a year.

So in 2030 we have 700 total machines, each making 0.3ish GW a year, which means we can produce 200 GW of compute a year. That's a lot. But Sam Altman wants a gigawatt a week by the end of the decade. Anthropic and Google will want about the same. And Elon wants to be putting 100 GW in space every year. Any one of these players could maybe get what they need, but not all of them.
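The arithmetic in that thread can be checked with a quick back-of-the-envelope sketch. This uses only the figures quoted in the post; the split of EUV passes between 5nm and memory wafers is not given, so the remainder beyond the stated 3nm passes is lumped together as an assumption.

```python
# Back-of-the-envelope check of the EUV-capacity bound described above.
# "GW of compute" means GW of Rubin-class accelerators deployed.

PASSES_PER_HOUR = 75           # EUV tool throughput (from the post)
UPTIME = 0.90                  # assumed tool availability (from the post)
HOURS_PER_YEAR = 24 * 365

# One tool's annual throughput: ~591k passes/year (the post rounds to 600k)
passes_per_tool_year = PASSES_PER_HOUR * UPTIME * HOURS_PER_YEAR

# Wafers needed per GW of Rubin compute (from the post)
wafers_3nm = 55_000            # 3nm logic wafers
passes_3nm = wafers_3nm * 20   # ~20 EUV passes per 3nm wafer -> ~1.1M

# The post states the 5nm + memory wafers bring the total to ~2M passes;
# the exact per-wafer pass counts are not given, so treat the remainder
# as a single lump (assumption).
passes_per_gw = 2_000_000
passes_other = passes_per_gw - passes_3nm

# Capacity per tool and for the forecast 2030 fleet
gw_per_tool_year = passes_per_tool_year / passes_per_gw   # ~0.30 GW/tool/yr
fleet_2030 = 700
annual_capacity_gw = fleet_2030 * gw_per_tool_year        # ~207 GW/yr

print(f"passes per tool-year: {passes_per_tool_year:,.0f}")
print(f"GW per tool-year:     {gw_per_tool_year:.2f}")
print(f"2030 fleet capacity:  {annual_capacity_gw:.0f} GW/yr")
```

Running it reproduces the thread's bound: a single tool supports just under a third of a GW per year, and 700 tools land at roughly 200 GW/year.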



@WilliamShatner @RobertPicardo If u had a Tesla this wouldn’t ever happen




Caleb Hammer explains why rent control doesn't work:

"It's one of those policies that sounds really good and really moral. You want to support it: landlords make less money and people pay less rent. But everywhere it's been enacted, permitting has dropped significantly, and rents have gone up even faster for the average person, except for the few lucky ones in subsidized housing."

"Units go untouched and aren't maintained at all. I think something like 10-20% of rent-controlled units in New York are empty because they can't be brought up to standard, since it's not worth investing in. In Massachusetts, rent control was a complete disaster and had to be repealed. In San Francisco, the moment they introduced it, permitting dropped. It just hasn't worked."


