
(read in the context of
x.com/sriramk/status…)
Sriram Krishnan (@sriramk):
Every person's reaction here to the Jensen + @dwarkesh_sp podcast can be extrapolated *directly* from whether they believe the frontier labs will achieve short timelines for AGI/ASI. If you believe the labs will achieve RSI and then AGI/ASI (for some definition of all three) in the next few years, you'll probably be sympathetic to the frame @dwarkesh_sp adopts. If not, you're probably more sympathetic to Jensen's arguments.

@ShakeelHashim You're right, the compute bar for "security-relevance" is much lower. Also important:
1. Supply chain. What is the Hormuz of AI? Where are the adversarial scenario generators?
2. Labs: who has access, and why? Lurkers are guaranteed. Even Einstein was a honey-trap victim...

@ShakeelHashim You guys can make word salads till the rapture comes
We know how you really feel about humanity
There's no putting the genie back in the bottle
You have identified yourself as undeserving of life with the rest of us
You'll be put in an underclass ruled by the AIs you value so much
You chose this