Rob S.

2.6K posts


@RobS142

MLE. CS@Penn.

Joined September 2024
1.4K Following · 271 Followers
Rob S.@RobS142·
Important to realize that while this is a joke there are in fact a lot of groups interested in AI Safety that end up just pushing the frontier of AI capabilities.
William MacAskill@willmacaskill

Forethought @forethought_org is proud to announce the launch of Deep Thought — the world's first fully automated macrostrategy researcher, and the world's most powerful AI model. Try it here: deepthought.forethought.org

[Scorecard image] As you can see, Deep Thought is the world's frontier AI model, getting a perfect 100/100 Frontier Macrostrategy Evaluation Score, measured by the benchmark, Post-Humanity's Last Exam.

0 replies · 0 reposts · 0 likes · 29 views
Rob S.@RobS142·
@GustavoRit37596 @Noahpinion Even if they don’t they will have “goals” and it’s hard to see how merging with a human improves its odds of achieving a goal like build a Dyson sphere, colonize the galaxy, paperclipping the universe etc.
0 replies · 0 reposts · 0 likes · 12 views
Gustavo Ritter@GustavoRit37596·
@RobS142 @Noahpinion That's a good question, but assumes they will have "wants" in the sense that we do. To be seen.
1 reply · 0 reposts · 0 likes · 30 views
Rob S.@RobS142·
@GustavoRit37596 @Noahpinion Ask yourself this. Would you merge with a chicken? Or an earthworm? Why would an artificial superintelligence want to merge with YOU?
1 reply · 0 reposts · 0 likes · 30 views
Gustavo Ritter@GustavoRit37596·
@Noahpinion The fact that they aren't pitching us to merge with it has always seemed curious to me. Elon used to say that about Neuralink, and Sama has apparently commented on himself merging with supercomputers. Maybe they think that would be too weird to say at this stage.
1 reply · 0 reposts · 1 like · 429 views
Rob S.@RobS142·
Can you explain how this leads to an anti-doom position? “today's AI is not good enough to give us a sufficient glimpse at what alignment techniques will be needed to ensure superintelligence safety.” Also, most of the labs seem to believe in quite short timelines to AGI/RSI, and even LeCun has basically capitulated to short timelines, so your views seem pretty non-mainstream.
1 reply · 0 reposts · 0 likes · 26 views
Ariel@redtachyon·
I think my main thesis would be that today's AI is not good enough to give us a sufficient glimpse at what alignment techniques will be needed to ensure superintelligence safety. At the same time, it's not anywhere near powerful enough to pose x-risk, so it's not that urgent. (Beyond perhaps some human misuse scenarios, but I view this as a separate issue.)

Basically it's like we're ancient Greeks starting to play around with magnets, and we're worried about a nuclear apocalypse. You might see far enough into the future (and buy into that concrete thesis) where you see nuclear bombs being a threat - that doesn't mean you should stop developing magnet-related science and technology.

Is it possible that a long time from now we'll develop methods that actually become scary good, and then we fail to control it and die? Sure. I don't have a marginal p(doom) estimate. But right now the potential upside of getting better AI is tremendous, so as long as we do so mindfully and ensure proper alignment progress along the way, we should be fine. And imo this is exactly the trajectory we're on currently.

(Note: "a long time from now" is intentionally very vague, it could be a few years, it could be decades or a century. I do expect rapid changes in the near future, but even "a few years" is probably good enough to ensure that alignment research keeps up.)
3 replies · 0 reposts · 10 likes · 253 views
Ariel@redtachyon·
Funnily enough that's how I became a not-doomer. I realized that for each of the top-level doom arguments, there are fairly easy counter-arguments which at least deserve a proper response. Turns out there's no response. Doomers just cling to their cognitive dissonance and zero rationality. Kind of like a cult.
Maxime Fournes⏸️@FournesMaxime

Yep, that's basically how I became a "doomer". Did a research project to figure out what were the counter arguments to xrisk and realised... There was nothing that made sense! Just people religiously clinging to their cognitive dissonance and zero rationality. Kind of like a cult when you think about it. Quite horrific...

7 replies · 0 reposts · 40 likes · 3K views
Rob S.@RobS142·
@6851cf3c @terracotta_hawk @redtachyon You are seriously overstating your case that there is decades of useful work about aligning ASI. People who work in AI Safety are well aware of this work and that it either doesn’t apply to making safe ASI or simply is not useful at all.
1 reply · 0 reposts · 1 like · 13 views
There's no "I" in craftmanship
sure, that's why I said to search for the proceedings, I don't remember stuff I read a decade ago. my point was that there is a ton of research on this that he's not familiar with and therefore he shouldn't be making absolute, authoritative statements. I just linked that because I literally had it open in a tab, because I found it amusing and to show that I was talking about papers from legitimate academics and not just schizoposts on a wiki or something.
1 reply · 0 reposts · 0 likes · 8 views
Ariel@redtachyon·
>there are no good arguments that ASI alignment is possible at all

This is your daily reminder that doomers don't deserve to be taken seriously
Geoffrey Miller@gmiller

@robinhanson @Aella_Girl We can be pretty sure, because there are no good arguments that ASI alignment is possible at all. Much less that ASI alignment could be solved before ASI is built. And even less that ASI would be aligned by default -- which seems to be the standard e/acc view.

6 replies · 0 reposts · 26 likes · 2.5K views
Rob S.@RobS142·
@MatthewJBar You get dismissed because your arguments only make sense to others who share your far-outside-the-norm moral views, which most humans find repugnant.
0 replies · 0 reposts · 0 likes · 49 views
Matthew Barnett@MatthewJBar·
Personally, when I make careful anti-doom arguments, I tend to just get ignored or dismissed with brief replies. Maybe my arguments are simply terrible. But I think there's a general laziness among pessimists when it comes to seeking out and replying to their strongest critics.
Gnostrils@gnostrils

I watched The AI Doc. Could be partly an editorial thing on the part of the filmmakers, but what really stands out to me is how the optimists *never* have [valid] arguments or counterarguments. It’s just “don’t listen to the doomers”

25 replies · 0 reposts · 100 likes · 10.4K views
Rob S.@RobS142·
@6851cf3c @redtachyon Can you link to some of the decades of Artificial Superintelligence alignment research?
2 replies · 0 reposts · 1 like · 21 views
There's no "I" in craftmanship
@redtachyon As a professor he should respect other fields enough to at least familiarize himself with the decades of research that went into this before coming with authoritative declarations like that.
1 reply · 0 reposts · 0 likes · 91 views
Mike Pepi@MikePepi·
@RichardHanania Hi! The question was a deflection. Newer models have not updated the fundamental way that LLMs work. And btw yes I do. I've used the new paid versions of Claude, Gemini, and OpenAI. :-)
34 replies · 1 repost · 19 likes · 6.3K views
Richard Hanania@RichardHanania·
Narrator: And none of them would answer the question of whether they use the models. I’ve never seen rightists in this much denial about AI. I wonder why it’s a left-wing thing to bury your head in the sand this much.
[3 attached images]
40 replies · 13 reposts · 726 likes · 152K views
Rob S.@RobS142·
@LinchZhang @BenShindel The original tweet was about depressed people. Someone who is in hospice and also depressed seems quite different.
1 reply · 0 reposts · 0 likes · 59 views
Linch@LinchZhang·
@BenShindel If I'm ever in extreme pain in hospice care, I aspire to have the emotional fortitude to ace PHQ-9 tests and be a remarkable ray of sunshine in every aspect except wanting to end my life.
1 reply · 1 repost · 12 likes · 578 views
Ben@BenShindel·
What a bizarre interaction?
[attached image]
115 replies · 9 reposts · 801 likes · 75.1K views
alz@alz_zyd_·
@GarettJones @ndtippens @ChadJonesEcon right, so if all the low-hanging fruit is gone, another order of magnitude increase in intelligence quantity isn't going to massively accelerate science
2 replies · 0 reposts · 1 like · 210 views
alz@alz_zyd_·
@MInusGix but not particularly useful stuff, otherwise we would have done it already
5 replies · 0 reposts · 4 likes · 1.3K views
Rob S.@RobS142·
@alz_zyd_ @bluberino123 You mean in general, or that if AI is better at choosing experiments than humans, humans would still decide for some reason?
1 reply · 0 reposts · 0 likes · 43 views
alz@alz_zyd_·
@bluberino123 no, you do have to decide wtf?
2 replies · 0 reposts · 6 likes · 2.3K views
Rob S.@RobS142·
“Trust me I’m an architect and that’s why ugly utilitarian modern buildings are actually better than beautiful traditional ones.” Architects are apparently the most pretentious and out of touch people in the world.
Bob Sacamano@PedroCo67443965

@UrbanCourtyard Call it what you want, but they are in fact imitating old buildings, which doesn't work with functionality, materials and technology. Trust me, I'm an architect. It's popular among nostalgic and sentimental people who aren't fully conscious of architecture.

0 replies · 0 reposts · 0 likes · 12 views
Timothy B. Lee@binarybits·
This is the correct view of existential risk from AI, and I'm glad @deanwball sees the same connection to Hayek's thinking that I do.
[attached image]
26 replies · 37 reposts · 292 likes · 21.8K views