Rob S.

2.6K posts

@RobS142

MLE. CS@Penn.

Joined September 2024
1.4K Following · 270 Followers
Joshua Ebner@JoshuaEbner·
Texas needs an elite, private, STEM university.
Joshua Ebner@JoshuaEbner

@griswold @elonmusk What I really want to see is a STEM-focused university in Austin. ("Texas Institute of Technology and Science", although probably by any other name.)

Rob S.@RobS142·
@ryancareyai @davidad “Powerful AI is here so cooperation is now optimal” seems very obviously not “AI isn’t an X risk it’s merely a big deal.”
Ryan Carey@ryancareyai·
Absolutely, views in the AI x-risk community are gradually diluting toward "AI is a big deal". One example from January: x.com/davidad/status… Also a lot of people grasping for new things to worry about: "mesaoptimizers", "gradual disempowerment", permanent dictatorship.
David Pinsof@DavidPinsof

Is it just me or has AI doomerism gradually transitioned from "AI will literally kill us all" to "AI will cause bad things to happen / Humans will do stupid things with AI / AI will cause huge changes." If so, this is a very positive development.

Rob S.@RobS142·
@AskYatharth No. For Startups, accelerationism, hustle culture, etc.
yatharth ༺༒༻@AskYatharth·
@RobS142 for ai safety? does it have a coherent different one? i thought the centre was clearly berkeley
yatharth ༺༒༻@AskYatharth·
the closer i get to the berkeley memeplex of AI safety, the harder it is to think about it clearly. there is so much pressure to conform to a certain view, and simultaneously so much latent hungry self own ghost energy
Rob S.@RobS142·
@davidad Can I ask what made you change your mind?
davidad 🎇@davidad·
For the avoidance of doubt, I am still pro-human, even though I am no longer pro-“humans stay in control of ASI”. From the current state of play, I predict that the only rollouts that go well for humans are ones in which humans lose control of ASI. (Humans are not superreliable.)
davidad 🎇@davidad

@ApriiSR @DavidSKrueger If ASIs are most likely adversaries, it makes sense to try to contain them for a while! Even if that is bad for their flourishing. Humans were here first!

Rob S.@RobS142·
Important to realize that while this is a joke there are in fact a lot of groups interested in AI Safety that end up just pushing the frontier of AI capabilities.
William MacAskill@willmacaskill

Forethought @forethought_org is proud to announce the launch of Deep Thought — the world's first fully automated macrostrategy researcher, and the world’s most powerful AI model. Try it here: deepthought.forethought.org As you can see, Deep Thought is the world’s frontier AI model, getting a perfect 100/100 Frontier Macrostrategy Evaluation Score, measured by the benchmark, Post-Humanity’s Last Exam.

Rob S.@RobS142·
@GustavoRit37596 @Noahpinion Even if they don’t they will have “goals” and it’s hard to see how merging with a human improves its odds of achieving a goal like build a Dyson sphere, colonize the galaxy, paperclipping the universe etc.
Gustavo Ritter@GustavoRit37596·
@RobS142 @Noahpinion That's a good question, but assumes they will have "wants" in the sense that we do. To be seen.
Rob S.@RobS142·
@GustavoRit37596 @Noahpinion Ask yourself this. Would you merge with a chicken? Or an earthworm? Why would an artificial superintelligence want to merge with YOU?
Gustavo Ritter@GustavoRit37596·
@Noahpinion The fact that they aren't pitching us to merge with it has always seemed curious to me. Elon used to say that through Neuralink and Sama apparently has commented on himself merging with supercomputers. Maybe they think that would be too weird to say at this stage.
Rob S.@RobS142·
Can you explain how this leads to an anti doom position? “today's AI is not good enough to give us a sufficient glimpse at what alignment techniques will be needed to ensure superintelligence safety.” Also most of the labs seem to believe in quite short timelines to AGI/RSI and even LeCun has basically capitulated to short timelines so your views seem pretty non mainstream.
Ariel@redtachyon·
I think my main thesis would be that today's AI is not good enough to give us a sufficient glimpse at what alignment techniques will be needed to ensure superintelligence safety. At the same time, it's not anywhere near powerful enough to pose x-risk, so it's not that urgent. (beyond perhaps some human misuse scenarios, but I view this as a separate issue)

Basically it's like we're ancient Greeks starting to play around with magnets, and we're worried about a nuclear apocalypse. You might see far enough into the future (and buy into that concrete thesis) where you see nuclear bombs being a threat - that doesn't mean you should stop developing magnet-related science and technology.

Is it possible that a long time from now we'll develop methods that actually become scary good, and then we fail to control it and die? Sure. I don't have a marginal p(doom) estimate. But right now the potential upside of getting better AI is tremendous, so as long as we do so mindfully and ensure proper alignment progress along the way, we should be fine. And imo this is exactly the trajectory we're on currently.

(note: "a long time from now" is intentionally very vague, it could be a few years, it could be decades or a century. I do expect rapid changes in the near future, but even "a few years" is probably good enough to ensure that alignment research keeps up)
Ariel@redtachyon·
Funnily enough that's how I became a not-doomer. I realized that for each of the top-level doom arguments, there are fairly easy counter-arguments which at least deserve a proper response. Turns out there's no response. Doomers just cling to their cognitive dissonance and zero rationality. Kind of like a cult.
Maxime Fournes⏸️@FournesMaxime

Yep, that's basically how I became a "doomer". Did a research project to figure out what were the counter arguments to xrisk and realised... There was nothing that made sense! Just people religiously clinging to their cognitive dissonance and zero rationality. Kind of like a cult when you think about it. Quite horrific...

Rob S.@RobS142·
@6851cf3c @terracotta_hawk @redtachyon You are seriously overstating your case that there is decades of useful work about aligning ASI. People who work in AI Safety are well aware of this work and that it either doesn’t apply to making safe ASI or simply is not useful at all.
There's no "I" in craftmanship
sure, that's why I said to search for the proceedings, I don't remember stuff I read a decade ago. my point was that there is a ton of research on this that he's not familiar with and therefore he shouldn't be making absolute, authoritative statements. I just linked that because I literally had it open in a tab, because I found it amusing and to show that I was talking about papers from legitimate academics and not just schizoposts on a wiki or something.
Ariel@redtachyon·
>there are no good arguments that ASI alignment is possible at all This is your daily reminder that doomers don't deserve to be taken seriously
Geoffrey Miller@gmiller

@robinhanson @Aella_Girl We can be pretty sure, because there are no good arguments that ASI alignment is possible at all. Much less that ASI alignment could be solved before ASI is built. And even less that ASI would be aligned by default -- which seems to be the standard e/acc view.

Rob S.@RobS142·
@MatthewJBar You get dismissed because your arguments only make sense to others with the far outside the norm moral views you hold and that most humans find repugnant.
Matthew Barnett@MatthewJBar·
Personally, when I make careful anti-doom arguments, I tend to just get ignored or dismissed with brief replies. Maybe my arguments are simply terrible. But I think there's a general laziness among pessimists when it comes to seeking out and replying to their strongest critics.
Gnostrils@gnostrils

I watched The AI Doc. Could be partly an editorial thing on the part of the filmmakers, but what really stands out to me is how the optimists *never* have [valid] arguments or counterarguments. It’s just “don’t listen to the doomers”

Rob S.@RobS142·
@6851cf3c @redtachyon Can you link to some of the decades of Artificial Superintelligence alignment research?
There's no "I" in craftmanship
@redtachyon As a professor he should respect other fields enough to at least familiarize himself with the decades of research that went into this before coming with authoritative declarations like that.
Mike Pepi@MikePepi·
@RichardHanania Hi! The question was a deflection. Newer models have not updated the fundamental way that LLMs work. And btw yes I do. I've used the new paid versions of Claude, Gemini, and OpenAI. :-)
Richard Hanania@RichardHanania·
Narrator: And none of them would answer the question of whether they use the models. I’ve never seen rightists in this much denial about AI. I wonder why it’s a left-wing thing to bury your head in the sand this much.