
Rob S.

Is it just me or has AI doomerism gradually transitioned from "AI will literally kill us all" to "AI will cause bad things to happen / Humans will do stupid things with AI / AI will cause huge changes." If so, this is a very positive development.


@ApriiSR @DavidSKrueger If ASIs are most likely adversaries, it makes sense to try to contain them for a while! Even if that is bad for their flourishing. Humans were here first!

Forethought @forethought_org is proud to announce the launch of Deep Thought — the world's first fully automated macrostrategy researcher, and the world's most powerful AI model. Try it here: deepthought.forethought.org

[Scorecard]

As you can see, Deep Thought is the world's frontier AI model, earning a perfect 100/100 Frontier Macrostrategy Evaluation Score on the benchmark Post-Humanity's Last Exam.



The AI labs have done a bad job of explaining what the future they are building towards will actually look like for most of us. Even "Machines of Loving Grace" has very few well-articulated visions of what Anthropic hopes life will be like if they succeed at their goals.



at the intersection of i've basically been replaced and i've never worked so hard in my life

This is a great paper, but it contains a puzzle: forecasters expect that even if we automate most labor and wait 20 years, GDP will only increase by 45%. I would love to hear how people are thinking about this.



Yep, that's basically how I became a "doomer". Did a research project to figure out what the counterarguments to x-risk were and realised... There was nothing that made sense! Just people religiously clinging to their cognitive dissonance and zero rationality. Kind of like a cult when you think about it. Quite horrific...



@robinhanson @Aella_Girl We can be pretty sure, because there are no good arguments that ASI alignment is possible at all. Much less that ASI alignment could be solved before ASI is built. And even less that ASI would be aligned by default -- which seems to be the standard e/acc view.


I watched The AI Doc. Could be partly an editorial thing on the part of the filmmakers, but what really stands out to me is how the optimists *never* have [valid] arguments or counterarguments. It’s just “don’t listen to the doomers”

One of the biggest misconceptions people have about intelligence is seeing it as some kind of unbounded scalar stat, like height. "Future AI will have 10,000 IQ", that sort of thing.

Intelligence is a conversion ratio, with an optimality bound. Increasing intelligence is not so much like "making the tower taller"; it's more like "making the ball rounder". At some point it's already pretty damn spherical and any improvement is marginal.

Now of course smart humans aren't quite at the optimal bound yet on an individual level, and machines will have many advantages besides intelligence -- mostly the removal of biological bottlenecks: greater processing speed, unlimited working memory, unlimited memory with perfect recall... But these are mostly things humans can also access through externalized cognitive tools.
