
joe
@joelo
A dad, husband, fintech innovator, musician, hockey goalie, mountain biker, and photographer living in Port Moody, BC

Satya Nadella on why Microsoft Excel has been so durable after 40 years:
> the power of lists and tables
> the malleability of the software (“a blinking canvas”)
> spreadsheet software is Turing complete (“I can make it do everything”)
> it’s the world’s most approachable programming environment (“you get into it without even thinking you’re programming”)



The dream lives on, barely. #Foundation — New Episode Now Streaming

.@satyanadella on:
- why he doesn’t believe in AGI but does believe in 10% economic growth
- Microsoft’s new topological qubit breakthrough and gaming world models
- whether Office commoditizes LLMs or the other way around
Links below. Enjoy!

Timestamps
0:00:00 - Intro
0:05:48 - AI won't be winner-take-all
0:16:02 - World economy growing by 10%
0:22:23 - Decreasing price of intelligence
0:31:03 - Microsoft's Quantum breakthrough
0:43:35 - Microsoft's gaming world model
0:50:35 - Legal barriers to AI
0:56:30 - Getting AGI safety right
1:05:43 - 34 years at Microsoft
1:11:31 - Does Satya Nadella believe in AGI?

DeepSeek (Chinese AI co) making it look easy today with an open-weights release of a frontier-grade LLM trained on a joke of a budget (2048 GPUs for 2 months, $6M). For reference, this level of capability is supposed to require clusters closer to 16K GPUs; the ones being brought up today are more like 100K GPUs. E.g. Llama 3 405B used 30.8M GPU-hours, while DeepSeek-V3 looks to be a stronger model at only 2.8M GPU-hours (~11X less compute).

If the model also passes vibe checks (e.g. LLM arena rankings are ongoing; my few quick tests went well so far), it will be a highly impressive display of research and engineering under resource constraints.

Does this mean you don't need large GPU clusters for frontier LLMs? No, but you have to ensure that you're not wasteful with what you have, and this looks like a nice demonstration that there's still a lot to be gained from both data and algorithms. Very nice & detailed tech report too, reading through it.
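A quick back-of-the-envelope check of the figures quoted in the post (all numbers are from the post itself, none are measured here): 2048 GPUs running for roughly 2 months implies about 2.95M GPU-hours, consistent with the cited 2.8M, and the Llama 3 405B comparison works out to roughly 11x less compute.

```python
# Sanity-check the compute figures quoted in the post.
# All inputs are the post's reported numbers, not independent measurements.

deepseek_gpus = 2048            # reported cluster size
training_months = 2             # reported training duration
hours_per_month = 30 * 24       # ~720 hours in a month

# Implied upper bound on DeepSeek-V3's training GPU-hours
implied_gpu_hours = deepseek_gpus * training_months * hours_per_month
print(f"Implied GPU-hours: {implied_gpu_hours / 1e6:.2f}M")  # ~2.95M, close to the cited 2.8M

# Compute ratio vs Llama 3 405B's reported 30.8M GPU-hours
ratio = 30.8e6 / 2.8e6
print(f"Compute ratio vs Llama 3 405B: ~{ratio:.0f}x")  # ~11x
```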



FSD V13.2.1, WOW. It felt like I was controlling the car w/my mind. It did everything I wanted it to, the way I wanted it to... I don't know how to describe it other than I think I'm done w/human driving.

I was already having FSD do 80-90% of my drives on V12.5, and this feels like it closed the remaining gap. Last year only about 30% of my drives were on V11, so I think the progression is important to describe. With V13.2.1 you feel the reduced latency and improved performance through more human-like acceleration, slowing down, merges, and turns - except it's perfect every time, and humans are not perfect.

I'm 3 drives in, with more testing to do, and I'm sure there are edge cases to encounter. But this goes beyond any implicit bias: it's meaningfully better, and we know there are more enhancements and features on the way... Bravo @Tesla_AI team, I can't wait for more people to be able to try it themselves.
