Chris Wein

279 posts

@chriswein

Netflix TV App Platform Team. I know more about TVs than is generally healthy.

San Jose, CA · Joined September 2011
139 Following · 1.1K Followers
andrewthecoder
andrewthecoder@_andrewthecoder·
@Tech_girlll I reject the premise. Remember the Tyson - Paul fight? It was straight OUT for a lot of people for a good part of that fight.
Mari
Mari@Tech_girlll·
Interviewer: Why does Netflix never lag, even with millions of people streaming at the same time?
Chris Wein
Chris Wein@chriswein·
@levie @karpathy Agreed. This is the common thinking around a lot of new tech, overestimate change in 3yrs and underestimate change in 10
Aaron Levie
Aaron Levie@levie·
This is actually extremely pragmatic and realistic from @karpathy based on what is likely to happen, especially in an enterprise context. We have rapidly improving AI model capabilities, but the diffusion of these capabilities into real life workflows will take time and require lots of integration, change management, and new solutions that must be built. “Basically my AI timelines are about 5-10X pessimistic w.r.t. what you'll find in your neighborhood SF AI house party or on your twitter timeline, but still quite optimistic w.r.t. a rising tide of AI deniers and skeptics. The apparent conflict is not: imo we simultaneously 1) saw a huge amount of progress in recent years with LLMs while 2) there is still a lot of work remaining (grunt work, integration work, sensors and actuators to the physical world, societal work, safety and security work (jailbreaks, poisoning, etc.)) and also research to get done before we have an entity that you'd prefer to hire over a person for an arbitrary job in the world. I think that overall, 10 years should otherwise be a very bullish timeline for AGI, it's only in contrast to present hype that it doesn't feel that way.”
Andrej Karpathy@karpathy

My pleasure to come on Dwarkesh last week; I thought the questions and conversation were really good. I re-watched the pod just now too. First of all, yes, I know, and I'm sorry that I speak so fast :). It's to my detriment, because sometimes my speaking thread out-executes my thinking thread, so I think I botched a few explanations due to that, and sometimes I was also nervous that I was going too far on a tangent or too deep into something relatively spurious. Anyway, a few notes/pointers:

AGI timelines. My comments on AGI timelines look to be the most trending part of the early response. The "decade of agents" is a reference to this earlier tweet x.com/karpathy/statu… Basically my AI timelines are about 5-10X pessimistic w.r.t. what you'll find in your neighborhood SF AI house party or on your twitter timeline, but still quite optimistic w.r.t. a rising tide of AI deniers and skeptics. The apparent conflict is not real: imo we simultaneously 1) saw a huge amount of progress in recent years with LLMs while 2) there is still a lot of work remaining (grunt work, integration work, sensors and actuators to the physical world, societal work, safety and security work (jailbreaks, poisoning, etc.)) and also research to get done before we have an entity that you'd prefer to hire over a person for an arbitrary job in the world. I think that overall, 10 years should otherwise be a very bullish timeline for AGI; it's only in contrast to present hype that it doesn't feel that way.

Animals vs Ghosts. My earlier writeup on Sutton's podcast: x.com/karpathy/statu… I am suspicious that there is a single simple algorithm you can let loose on the world that learns everything from scratch. If someone builds such a thing, I will be wrong, and it will be the most incredible breakthrough in AI. In my mind, animals are not an example of this at all: they come prepackaged with a ton of intelligence by evolution, and the learning they do is quite minimal overall (example: a zebra at birth). Putting our engineering hats on, we're not going to redo evolution. But with LLMs we have stumbled on an alternative approach to "prepackage" a ton of intelligence in a neural network: not by evolution, but by predicting the next token over the internet. This approach leads to a different kind of entity in the intelligence space, distinct from animals, more like ghosts or spirits. But we can (and should) make them more animal-like over time, and in some ways that's what a lot of frontier work is about.

On RL. I've critiqued RL a few times already, e.g. x.com/karpathy/statu… First, you're "sucking supervision through a straw," so I think the signal per flop is very bad. RL is also very noisy, because a completion might contain lots of errors that get encouraged (if you happen to stumble onto the right answer), and conversely brilliant insight tokens that get discouraged (if you happen to screw up later). Process supervision and LLM judges have issues too. I think we'll see alternative learning paradigms. I am long "agentic interaction" but short "reinforcement learning" x.com/karpathy/statu… I've seen a number of papers pop up recently that are imo barking up the right tree, along the lines of what I called "system prompt learning" x.com/karpathy/statu…, but I think there is also a gap between ideas on arxiv and an actual, at-scale implementation at an LLM frontier lab that works in a general way. I am overall quite optimistic that we'll see good progress on this dimension of remaining work quite soon; e.g., I'd even say ChatGPT memory and so on are primordial deployed examples of new learning paradigms.

Cognitive core. My earlier post on the "cognitive core" (x.com/karpathy/statu…): the idea of stripping down LLMs, of making it harder for them to memorize, or actively stripping away their memory, to make them better at generalization. Otherwise they lean too hard on what they've memorized. Humans can't memorize so easily, which by contrast now looks more like a feature than a bug. Maybe the inability to memorize is a kind of regularization. See also my post from a while back on how the trend in model size is "backwards" and why "the models have to first get larger before they can get smaller": x.com/karpathy/statu…

Time travel to Yann LeCun 1989. This is the post that I did a very hasty/bad job of describing on the pod: x.com/karpathy/statu… Basically: how much could you improve Yann LeCun's results with the knowledge of 33 years of algorithmic progress? How constrained were the results by each of algorithms, data, and compute? A case study thereof.

nanochat. My end-to-end implementation of the ChatGPT training/inference pipeline (the bare essentials): x.com/karpathy/statu…

On LLM agents. My critique of the industry is more about overshooting the tooling w.r.t. present capability. I live in what I view as an intermediate world, where I want to collaborate with LLMs and where our pros/cons are matched up. The industry lives in a future where fully autonomous entities collaborate in parallel to write all the code and humans are useless. For example, I don't want an Agent that goes off for 20 minutes and comes back with 1,000 lines of code. I certainly don't feel ready to supervise a team of 10 of them. I'd like to go in chunks that I can keep in my head, where an LLM explains the code that it is writing. I'd like it to prove to me that what it did is correct; I want it to pull the API docs and show me that it used things correctly. I want it to make fewer assumptions and ask/collaborate with me when it's not sure about something. I want to learn along the way and become a better programmer, not just get served mountains of code that I'm told works. I just think the tools should be more realistic w.r.t. their capability and how they fit into the industry today, and I fear that if this isn't done well, we might end up with mountains of slop accumulating across software, and an increase in vulnerabilities, security breaches, etc. x.com/karpathy/statu…

Job automation. How the radiologists are doing great x.com/karpathy/statu… and what jobs are more susceptible to automation, and why.

Physics. Children should learn physics in early education not because they go on to do physics, but because it is the subject that best boots up a brain. Physicists are the intellectual embryonic stem cell x.com/karpathy/statu… I have a longer post that has been half-written in my drafts for about a year, which I hope to finish soon.

Thanks again Dwarkesh for having me over!

ThePrimeagen
ThePrimeagen@ThePrimeagen·
the problem of pure democracy is that you get auto-play on netflix home page
Drake
Drake@Drakeb4Degrassi·
@BaseballJeff1 What in your mind is the difference between Fitz last year and this year? Last year he was a major bright spot offensively, this year he’s become a black hole at times
Jeff Young
Jeff Young@BaseballJeff1·
Casey Schmitt should be activated in the next day or so, opening up a 3-week window before the trade deadline. That should be enough time to evaluate what he can offer. He does not necessarily need to replicate the numbers he put up in June, but show he can be competent at...
Chris Wein
Chris Wein@chriswein·
@DanielEstrin Reviewing old RadioLab episodes…were you the Jerusalem walker?
Dan Charles
Dan Charles@DanCharlesNow·
I honestly haven't been on this site for a long time, and maybe most of the people who technically follow me have bolted in the meantime. Where have people gone? Threads? Non-digital life?
Chris Wein
Chris Wein@chriswein·
@petfoodexpress How is this new B3G4F supposed to work if no store near me stocks more than 2 of (ex.) Nulo Senior 24lb? Previously the store would order it in for me… no longer? I don't want to be forced into mix/match
Chris Wein
Chris Wein@chriswein·
@SergioMarBel Congratulations on the move to @NPR. Your reporting on the Ken Paxton impeachment was very good and a nice springboard. I hope to hear more from you leading up to the election
Sergio Martínez-Beltrán
Sergio Martínez-Beltrán@SergioMarBel·
U.S. Border Patrol agents chase after a man in the desert (Sunland Park, NM) in nearly 103F heat. I’m on the U.S.-Mexico border reporting on the effects of the Biden administration’s severe asylum restrictions, which experts say will push migrants to dangerous remote areas.
Gergely Orosz
Gergely Orosz@GergelyOrosz·
@ThePrimeagen @t3dotgg Yes, but they complain about people your age not knowing about you or watching your streams at the same time, ha!
Gergely Orosz
Gergely Orosz@GergelyOrosz·
Amusing (and light-hearted) observations from GenZ software engineers about their "older" colleagues. I've known about @t3dotgg and @theprimeagen as dev channels to watch on YouTube, but not about some other channels. And yes, I joke about 'bus factor.'
Chris Wein
Chris Wein@chriswein·
@dieter Been so long I wondered if you would remember the password. You should join us on Threads…
Mark Gurman
Mark Gurman@markgurman·
Netflix has built apps for the Nvidia Shield and the Facebook Portal. So I don’t buy the argument that the Vision Pro isn’t worth the investment. I don’t even know what the Shield is (sarcasm, but you get the point) and the Portal has been dead for over a year.
Mark Gurman@markgurman

Netflix co-CEO on lack of a Vision Pro app: "We have to be careful about making sure that we're not investing in places that are not really yielding a return, and I would say we'll see where things go with Vision Pro.” Not unchecking a box to enable their iPad app is a huge investment.

Mark Gurman
Mark Gurman@markgurman·
NEW: Netflix snubs the Vision Pro and is not planning a visionOS app, nor will it allow its iPad app to run on the headset. Instead, it’ll tell users to watch Netflix from the web browser. Apple has many other entertainment apps signing on however. bloomberg.com/news/articles/…
David Morris
David Morris@DavidMo79419828·
@wilnerhotline Didn't Washington make the playoffs in the 2016/2017 season? Unless you're saying UW and Oregon are the only 2 pac 12 teams to make it. Cuz pretty sure Bama beat Washington 24-7 in the 2016/2017 Peach Bowl. So it would be 3 teams that qualified for the CFP.
Jon Wilner
Jon Wilner@wilnerhotline·
Final: Washington 37, Texas 31
- Another dull finish for Huskies
- UW: hasn't lost in 450 days
- Penix: 29-38/2-0/430 yards
- Early NCG line: Michigan -4
Pac-12's only two CFP finalists come in first (Oregon) and last (UW) years of the 4-team field
Jon Wilner
Jon Wilner@wilnerhotline·
ESPN currently shows USC's win probability as 99.7%. Methinks ESPN needs a new algorithm.
David Kanter
David Kanter@TheKanter·
@mayfer That’s like asking what programmers do that is useful, since everyone is using a compiler :) You might also want a debugger, profiler, linker, libraries, etc.
murat 🍥
murat 🍥@mayfer·
so if the machines that build 3nm chips using EUV are made by the Dutch company ASML, what makes TSMC so special? all they do is buy the machines and use them well?
Chris Wein
Chris Wein@chriswein·
@karaswisher Did you not call Gayle King for prime time in your Chris Licht interview? Looking forward to your victory lap!