
Rogs 🔍🔸

@ESRogs
Curious optimist. Sincerity over sarcasm. https://t.co/YyJXMnCCxN

Hypothesis: True full self-driving will be solved once (and only once) neural nets are trained via self-supervised learning on video data to predict the next frame (analogous to GPT-3 predicting the next text token). @karpathy
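The analogy can be made concrete with a toy sketch. This is my own minimal illustration, not Karpathy's setup: a linear model "trained" to predict frame t+1 from frame t, where the supervision signal comes from the video itself, exactly as next-token prediction gets its labels from the text itself.

```python
# Toy sketch of the hypothesis (an assumed minimal illustration, not
# Karpathy's actual proposal): self-supervised next-frame prediction.
# The training targets are just the video shifted by one frame, the way
# next-token prediction gets its labels from the text stream itself.
import numpy as np

rng = np.random.default_rng(0)

# Fake "video": each 4-pixel frame is a fixed linear function of the last.
transition = np.array([[0.9, 0.1, 0.0, 0.0],
                       [0.0, 0.9, 0.1, 0.0],
                       [0.0, 0.0, 0.9, 0.1],
                       [0.1, 0.0, 0.0, 0.9]])
frames = [rng.normal(size=4)]
for _ in range(200):
    frames.append(transition @ frames[-1])
frames = np.stack(frames)

# Self-supervision: inputs are frames[:-1], targets are frames[1:].
X, Y = frames[:-1], frames[1:]

# "Train" a linear next-frame predictor by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

mse = float(np.mean((X @ W - Y) ** 2))
print(f"next-frame prediction MSE: {mse:.2e}")
```

A real instantiation would swap the linear map for a large neural net and raw pixels for learned latents, but the supervision signal is the same: the future of the stream itself, with no human labels anywhere.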

who’s gonna be on the jury instructions drafting committee of the Board of Frontier Models?




This is impressive, but it's sub-linear scaling, not super-linear. The X axis is on a log scale. This is roughly power law scaling with an exponent of 0.3 (third root of compute), which is actually quite high compared to other measures in AI scaling (exponents are often around 0.1, or tenth root), but is still steeply sub-linear. Any power law scaling on a log x axis will look like it's turning upwards non-linearly. If you put it on a linear scale, it would look like significant diminishing returns. (This all also assumes that the difficulty of the steps is consistently spaced, which it's probably not.)

this continues to perplex me






Scott Aaronson on his blog, discussing Shor's algorithm, quantum computing, and cryptography: 'When I got an early heads-up about these results—especially the Google team’s choice to "publish" via a zero-knowledge proof—I thought of Frisch and Peierls, calculating how much U-235 was needed for a chain reaction in 1940, but not publishing it, even though the latest results on nuclear fission had been openly published just the year prior. Will we, in quantum computing, also soon cross that threshold? But I got strong pushback on that analogy from the cryptography and cybersecurity people who I most respect. They said: we have decades of experience with this, and the answer is that you publish. And, they said, if publishing causes people still using quantum-vulnerable systems to crap their pants … well, maybe that’s what needs to happen right now.'




Exclusive: The White House opposes a plan from Anthropic to expand access to its powerful artificial-intelligence model Mythos on.wsj.com/4cHiUY5

