
Shakeel @ShakeelHashim:
"The rhetoric of 'we should sell China more chips' ... pivots not so much on short timelines to AGI but instead short timelines to models that *matter,* national-security-wise." 100% this — nicely articulated
Dean W. Ball @deanwball:

For a moment, substitute the notion of “believing in short AGI timelines” for: “acknowledging the idea of AGI as an ill-defined thing that will nonetheless probably exist within a strategically relevant timeframe, the pursuit of which will produce importantly capable artifacts along the way, whose arrival will be even sooner than the so-called ‘AGI,’ and so we don’t really need to quibble all that much about exact AGI definitions and timelines, because the mega-capable artifacts already kinda resemble ‘AGI,’ have national security implications, and seem like they’re going to keep improving rapidly, so functionally we just have to accept that we live in AGI world now, regardless of whether one’s personal definition of AGI is satisfied in 2028, 2035, or, indeed, 2026.”

If this is your view (and it is mine), then it is not so much about short timelines to AGI as it is about “short timelines to the importantly useful artifacts produced along the path to AGI, so capable that maybe in some ways they blend into AGI.” Thus “Mythos” or “the latest frontier model” can be substituted for “AGI” in many debates about timelines.

The rhetoric of “we should sell China more chips,” or “AI is the next internet platform business and it should be regulated exactly like prior waves of internet platform businesses, which is to say ‘basically not at all,’” pivots not so much on short timelines to AGI but instead on short timelines to models that *matter,* national-security-wise.

The fatal flaw in the 2024-era accelerationist view, epitomized by Jensen, was the assumption that models would never matter in this way; or at least, that you should not think so much about the world in which models mattered in that way. It is much easier to justify “doing what we have been doing” if you don’t believe neural networks will ever truly matter to national security.

Basically all AI debates hinge not so much on “AGI timelines” as on “will LLMs ever matter, really, to national security?” The near-term existence of LLMs with national-security-relevant capabilities can therefore be thought of as, to borrow a phrase, an inconvenient truth.

Shakeel @ShakeelHashim:
(read in the context of x.com/sriramk/status…)
Sriram Krishnan @sriramk:

Every person here's reaction to the Jensen + @dwarkesh_sp podcast can be extrapolated *directly* from whether they believe in the frontier labs achieving short timelines for AGI/ASI. If you believe in the labs achieving RSI and then AGI/ASI (for some definition of all three) in the next few years, you'll probably be sympathetic to the frame @dwarkesh_sp adopts. If not, you're probably more sympathetic to the arguments from Jensen.

Perp City @perpdotcity:
@ShakeelHashim You're right, the compute bar for "security-relevance" is much lower. Also important:
1. Supply chain. What is Hormuz for AI? Where are the adversarial scenario generators?
2. Labs: who has access and why? Lurkers guaranteed. Einstein was a honey trap victim...
Pygmalion @tailpygmalion:
@ShakeelHashim You guys can make word salads til rapture comes
We know how you really feel bout humanity
There’s no putting genie back in bottle
You have identified yourself undeserving of life with rest of us
You’ll be put in underclass ruled by AIs you value so much
You chose this