Tarn 🌐🐙
@somervta

3.3K posts

Public Policy PhD student. Math, philosophy and politics nerd, Law nerd, Musical Theatre nerd, Nerd^2. he/him/his

Fairfax, Virginia · Joined February 2011
481 Following · 82 Followers
Tarn 🌐🐙@somervta·
@duntsHat @allTheYud @deepfates @gmiller This is not the biggest reason why it's a hard problem, but it's one intuition pump for why. For AIs we're nowhere close to the degree of success that we (arguably) have achieved for some humans, and yet uplifting a human to superintelligence would be a radically risky act.
🎭@deepfates·
The craven weasel @gmiller asked me to provide citation for my claim that activists prefer nuclear war to AI. I gave him the direct quote from Eliezer. He has blocked me and refused to respond. Just marking this as the level of discourse we're dealing with here
[tweet media]
🎭@deepfates

@gmiller Here's a source. Now you retract time.com/6266923/ai-eli…

Tarn 🌐🐙@somervta·
@duntsHat @allTheYud @deepfates @gmiller (state violence is still violence, ofc, and that *is* a threat of state violence so we are talking about the use of violence in a technical sense - I'm assuming that unqualified 'violence' means individual and uncoordinated, just as a convention for talking about it)
Tarn 🌐🐙@somervta·
@duntsHat @allTheYud @deepfates @gmiller Like, we're talking about treaties and state action because in the world we're actually in preventing people from amassing unprecedented amounts of compute will significantly reduce superintelligence risk. The *threat* is from institutional actors, so state action is the response
Tarn 🌐🐙@somervta·
@duntsHat @allTheYud @deepfates @gmiller Because any approach only reduces the risk, so it has to be weighed against both the direct harms of the actions you take and the indirect harms of the effects on society and discourse. There is no magic button that kills X people and prevents superintelligence from ever being built
Tarn 🌐🐙@somervta·
@duntsHat @allTheYud @deepfates @gmiller I don't understand the scenario you're painting where it's possible for small rogue actors to build superintelligence and yet there's any chance at all of individual acts of violence stopping them all.
Tarn 🌐🐙@somervta·
@allTheYud @duntsHat @deepfates @gmiller If (somehow) this fact was known to me in advance of someone actually trying it, that might be one of the only cases where I'd endorse someone (as responsible as I could find) racing ahead to get there first.
Eliezer Yudkowsky@allTheYud·
@somervta @duntsHat @deepfates @gmiller Yeah, if it's known how to build superintelligence with eight M4 minis, I can see humanity trying a desperate attempt to search everywhere and lock it all down, but possibly I'm like "Well that's fucked then" and trying some even wider hail mary pass.
Tarn 🌐🐙@somervta·
@duntsHat @allTheYud @deepfates @gmiller Individual acts do not add up to *coordinated* acts. That is the whole point of governments and the rule of law. In the situation above, I'm skeptical even extremely well coordinated state action could save us, but in less extreme cases there's a *qualitative* difference at stake
duntsHat@duntsHat·
@somervta @allTheYud @deepfates @gmiller individual acts add up to collective acts. I get that "we" want to condemn political violence, but the idea that violence never solves anything is, as Heinlein wrote "wishful thinking at its worst." can't run a railroad when the tracks are blown up. etc...
Tarn 🌐🐙@somervta·
@duntsHat @allTheYud @deepfates @gmiller If I thought rogue individuals with OS models and those resources were capable of building superintelligence, I would be even more pessimistic than Eliezer, which would be quite the feat
Rob Bensinger ⏹️@robbensinger·
Who should I add to this? Also, did I get anyone's view wrong?
[tweet media]
Sasha Gusev@SashaGusevPosts·
@allTheYud @So8res @SemioticRivalry @robinhanson You still haven't defined what "you could throw data into 'architecture'" actually means, nor how it maps to the current LLM world. I'm arguing that it doesn't. "Go read other Hanson posts" isn't a counterargument.
Kostas Moros@MorosKostas·
It's preposterous that in an era where they only take 50 or 60 cases a year (down from hundreds), they waste a bunch of those scarce slots for technical bullshit like this. To the extent they need to decide technical issues, handle it per curiam.
Kostas Moros@MorosKostas·
Things SCOTUS hasn't had time to decide so far: whether thousands of Californians may be turned into felons for owning common magazines. Things SCOTUS does have time to decide:
[tweet media]
Eric W.@EWess92

One Supreme Court certiorari grant today, likely to be argued next term. This is a complicated question about when an affirmative defense may be raised. The Eleventh Circuit opinion is very long and thorough, drafted "Per Curiam" (for the Court) from Judges Branch, Luck, Lagoa

OnStupid@wanerious·
@mattyglesias Does this make sense? The step-wise function tells what each *additional* dollar will be taxed, but you have to combine them for the overall tax rate. I don’t think that continuous function does that, and I’m not sure *what* it’s telling me exactly.
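(The marginal-vs-effective distinction in the tweet above can be sketched in a few lines. The bracket cutoffs and rates below are purely hypothetical, and the function names are my own; the point is that each step-wise rate applies only to the slice of income inside its bracket, while the overall rate is total tax divided by total income.)

```python
# Hypothetical marginal brackets: (upper bound of bracket, rate).
# Each rate taxes only the income that falls *within* that bracket.
BRACKETS = [(10_000, 0.10), (40_000, 0.12), (90_000, 0.22), (float("inf"), 0.24)]

def tax_owed(income: float) -> float:
    """Sum tax bracket by bracket: rate * (slice of income in that bracket)."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

def effective_rate(income: float) -> float:
    """Overall (average) rate: total tax over total income."""
    return tax_owed(income) / income if income else 0.0
```

For example, an income of 50,000 under these made-up brackets owes 1,000 + 3,600 + 2,200 = 6,800, so its effective rate is 13.6% even though its top marginal rate is 22% — which is why a single continuous curve of marginal rates doesn't directly show what anyone pays overall.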
Tarn 🌐🐙@somervta·
@Michael_Druggan Tbf, he might also be one of the most ineffective; his *negative* impact is very hard to evaluate, but many of the people who started the modern AI industry credit him
Michael Druggan@Michael_Druggan·
Do you know how difficult it is to get your ideas in front of the eyes of important decision makers? Do you know how many employees at frontier labs have read Yudkowsky's work and been at least somewhat influenced by it? The ratio of spending to influence here might make Yudkowsky one of the most effective activists in history. You can criticize Yud for a lot of things but his primary goal has always been to spread awareness of the alignment problem and the dangers of superintelligent AI and he has been remarkably effective at that goal.
Perry E. Metzger@perrymetzger

The so-called “Machine Intelligence Research Institute” has never accomplished any research on machine intelligence. It serves two purposes, one of them to pay Eliezer Yudkowsky a fortune every year for doing absolutely nothing of importance, and the second, to help spread his cult’s propaganda. At one time, it pretended to do AI research, but it never accomplished any in spite of spending oceans of donor money.
