Modescellar 2nd account

4.7K posts

Modescellar 2nd account
@ModescellarA

Violent Ex-Convict

Joined June 2021
84 Following · 362 Followers
Pinned Tweet
Modescellar 2nd account @ModescellarA
Little drops of water, Little grains of sand...
[image attached]
1 reply · 1 repost · 13 likes · 4.6K views
Modescellar 2nd account @ModescellarA
@kunley_drukpa Nick Land had a post where he says what matters is the highest welfare, not the average welfare. Likewise, what matters is the country with the whitest whites, not the country with the highest whiteness.
0 replies · 0 reposts · 2 likes · 45 views
Philippe-Antoine Hoyeck
Dear philosophy friends: I'm toying with the idea of completely overhauling my Philosophy and Popular Culture syllabus. Could you recommend me pairings of good films and philosophical classics that you think would go well together?
66 replies · 4 reposts · 96 likes · 9K views
Modescellar 2nd account @ModescellarA
@visakanv Even if everyone agreed that x was beautiful, that wouldn't determine that it wasn't subjective...
0 replies · 0 reposts · 0 likes · 266 views
Modescellar 2nd account @ModescellarA
@rosie_eats after reading the v intelligent responses from our old friends in the replies, frankly I think your nan should be getting at least double your income
0 replies · 0 reposts · 1 like · 136 views
Rosie @rosie_eats
My only response to boomer housing discourse is that my nan on her final salary teachers pension gets the same monthly income I get as a teacher on the inner London pay scale
257 replies · 315 reposts · 11K likes · 654.9K views
Modescellar 2nd account @ModescellarA
@shylockh Because it's an annoying, long process that demands your time, with possible kids who decide to contact you--which, may I add, may not go down well with your future gf/wife
[image attached]
0 replies · 0 reposts · 0 likes · 9 views
Jamie @PeckingOrder03
With some (very valid) exceptions, identifying as "British" over whatever constituent country you're from is sooo Con-Lib coalition coded.
25 replies · 3 reposts · 438 likes · 41K views
Modescellar 2nd account @ModescellarA
@ACDAFC The truth of the matter is that Oxbridge needs to be more selective, not that we need fewer Oxbridge politicians
0 replies · 0 reposts · 0 likes · 36 views
Modescellar 2nd account @ModescellarA
@BovrilG African leftist leaders will really be like “We must choose between champagne for a few or drinking water for all” and then choose (in a brazen way I kinda respect) Champagne For a Few.
3 replies · 9 reposts · 422 likes · 6.8K views
Modescellar 2nd account @ModescellarA
@emergenteffects What proportion do you think? How much is due to demographic replacement via emigration/fertility rates of adventurous Anglos? How much is due to ethnic replacement, now 1/4 of the population? How much is due to an ageing population? I think this would account for at least 50% of the story
0 replies · 0 reposts · 0 likes · 31 views
Jardine Matheson Internationalist
@ModescellarA While this is important, I don’t actually think this is the story here. There has been a bit of a crushing of the adventurous Anglo spirit and the introduction of tall poppy syndrome since the Attlee regime. This has only solidified over time.
1 reply · 0 reposts · 2 likes · 129 views
Arno About @basedbrickpush1
Women just send each other 12 minute voicenotes. Truly unreal. If any of my friends sent me a 12 minute voicenote I would assume he was on his deathbed sending me his last words. (And pissed off that it took so much yapping)
80 replies · 395 reposts · 11.3K likes · 134.4K views
Modescellar 2nd account @ModescellarA
@robbensinger @thoth_iv Winning != intelligent. I can get an AI that's PERFECT at tic-tac-toe just with if-else statements. Not intelligent. Stockfish is intelligent as it (at least partially) generates strategies itself.
1 reply · 0 reposts · 0 likes · 18 views
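The tic-tac-toe claim is easy to make concrete. Below is a minimal sketch (an illustration in Python, not anything from the thread; exhaustive minimax stands in for the tweet's hand-written if-else table). It plays perfectly by brute enumeration of the game tree, and nothing in it generates a strategy of its own, which is the contrast being drawn with Stockfish.

```python
# Illustration only: perfect tic-tac-toe play from a purely mechanical
# procedure. The program enumerates the whole game tree rather than
# inventing any strategy; "perfection" here requires no intelligence.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if either side has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for the side to move; score is +1/0/-1 from X's view."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    empty = [i for i, cell in enumerate(board) if cell is None]
    if not empty:
        return 0, None  # board full: draw
    results = []
    for i in empty:
        board[i] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[i] = None
        results.append((score, i))
    # X maximises the score, O minimises it; ties broken by square index.
    return (max if player == 'X' else min)(results)

score, move = minimax([None] * 9, 'X')
print(score, move)  # score 0: perfect play from both sides is a draw
```

The same never-lose behaviour could equally be hard-coded as the classic win/block/fork/centre/corner priority list; either way the "perfection" lives in the programmer, not the program.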
Rob Bensinger ⏹️ @robbensinger
The original meaning of the "orthogonality thesis" was "it's possible in principle for a mind to pursue ~any goal". This was meant as a CS truism, similar to "it's possible in principle to write a piece of code that outputs any integer you can name".

Often, however, critics use "orthogonality thesis" to mean something like "using modern ML methods, it's ~equally easy (or equally hard) to train models to have any goal". Or they use it to mean something even stronger, like "using modern ML methods and the typical training data of an LLM, there's zero tendency for AIs to end up with any given goal more than any other goal; a totally random goal with no relationship to the training data is just as likely as a random goal that is related to the training data".

I think there are a few reasons people use "orthogonality thesis" in this straw way:

1. They've vaguely heard that "orthogonality" is a really important argument underpinning alignment concerns, and they're eager to knock down claims they mentally associate with AI pessimism, so they gravitate to interpretations of "orthogonality" that are easy to refute.

2. More generally, if the orthogonality thesis is a truism, then it's boring. It's more interesting to discuss topics that are under active dispute.

3. There's a lack of good terminology for talking about a lot of more-relevant concepts, so people repeatedly redefine terms like "orthogonality" for lack of good labels (and to make their thesis feel more weighty and important).

4. A growing number of people are only accustomed to thinking of AI as an inscrutable black box that's spat out by training. The training is the part that we can see and easily intervene on, so it's natural to reinterpret "orthogonality" as being about training processes. The original orthogonality thesis is really not about training; it's imagining that you can surgically intervene and tweak weights (or lines of code) by hand, taking as much time as needed to produce a mind with a specific exact shape. If you like, you could think of the thesis as asking, "Given some arbitrary finite amount of resources, could God build an AI that pursues [arbitrary computationally tractable goal X]?"

5. A lot of people don't understand what view the orthogonality thesis was meant to correct; and it's actually a bit tricky to explain what the view is, because it's widely held but isn't fully coherent. The view the orthogonality thesis was meant to correct is something like, "there's a ghost in the machine, a ghost of Reasonableness or Compassion, that will intervene to make sure arbitrary superintelligent AIs don't 'monomaniacally' pursue any sufficiently 'evil' or 'stupid' goal".

This view can come from a few places. It can come from someone who (unconsciously) makes predictions about AI by putting themself into the AI's shoes. Implicitly, they're thinking "I would never kill a child just to manufacture some paperclips, so surely the AI wouldn't either."

Or it can come from vague positive associations with words like "intelligent" and "smart", which make it harder to imagine that something could possess those positive traits without being good in other respects.

Or it can come from getting confused by natural language, which has lots of words like "wise" that build in the assumption that something is good at problem-solving and has some amount of personal virtue and good character/values. If a common word conflates two things, it can be harder to see that those two things can potentially come apart.

Or it can come from the natural desire to come up with fun, hopeful stories about the future; perhaps paired with AI proposals that are just complicated enough for the proposer to lose track of the part of their argument where they accidentally slipped in an unargued assumption.

(E.g.: power-seeking agents have an incentive to "self-improve", so even if it starts off sociopathic, surely it will become more moral. Unstated assumption: the AI is going to "self-improve" relative to what humans consider "improvements", and not relative to what the current AI considers an "improvement".)

(Or: power-seeking agents have an incentive to be "creative" and "flexible" and "question their own goals". So even if the AI starts off wanting something bad, it will question that bad goal and end up changing it. Unstated assumption 1: being open-minded, curious, and questioning means you have to change your goal (even though we'd never say that a human who sticks to moral principles like "randomly killing people is bad" is being irrationally "rigid"). Unstated assumption 2: if the AI does change its goals, it will change them in the direction humans would prefer; it won't make its goals even worse, it won't go off in some weird third direction, etc.)

Regardless of what exact form it takes, this kind of anthropomorphism and mysticism is still very prevalent today; I encounter it multiple times a week in real conversations. And it was very prevalent 10-20 years ago, when terms like "orthogonality thesis" were introduced; which is why this term makes a lot of sense in the context of naive "but wouldn't AIs have souls and therefore be good??" type debates, whereas researchers have to go into strange contortions in order to try to connect up orthogonality to specific modern ML results.

It's a shame that everyone has decided to pollute the commons with a new redefinition of "orthogonality thesis" every few days, because many claims about alignment/friendliness tractability are neither strawmen nor truisms, and it would be great to debate those claims without every conversation getting derailed into terminology arguments.

At the same time, it would be great if established facts could get treated as such without getting endlessly relitigated because people are excited to redefine them in locally conversationally convenient ways. I expect that kind of mistake from philosophy undergrads who chafe at the idea of anything turning into settled knowledge. I'd hope for better from CS people, who have seen the value of becoming bored by old obvious premises, and moving on to build edifices of knowledge atop the layered corpses of old arguments.
[image attached]
22 replies · 17 reposts · 203 likes · 22.9K views
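The "CS truism" analogy in the opening paragraph can be made literal. As a sketch (again an illustration, not from the thread; the helper name is invented): for any integer you can name, there is a program that outputs it, and the construction is uniform in n, which is exactly what makes the claim a truism rather than a discovery.

```python
# Illustration only: a uniform construction witnessing "it's possible
# in principle to write a piece of code that outputs any integer you
# can name". The function returns the source of such a program.

def program_that_outputs(n: int) -> str:
    """Return Python source code for a program whose output is n."""
    return f"print({n!r})"

src = program_that_outputs(-42)
exec(src)  # prints -42
```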
Modescellar 2nd account @ModescellarA
@robbensinger @thoth_iv For me it's definitional: to the extent you want your AI to be intelligent, to that extent you mustn't impute what it does. To the extent you do, it ceases to be intelligent. So if you let AI optimize freely, it'll drive toward better cognition/better search, & erode the original end
1 reply · 0 reposts · 0 likes · 23 views
Rob Bensinger ⏹️ @robbensinger
Could you name an example of a real-world goal that shows up in all modern AI systems, and that you think for that reason is likely to show up in every possible AI design that someone could even in principle, with billions of years of work, create by hand?
2 replies · 0 reposts · 0 likes · 26 views
Modescellar 2nd account @ModescellarA
@robbensinger @thoth_iv Typical orthogonality believer can't even understand that this is an argument (with no obvious self-awareness that they *ought to provide an argument*)
1 reply · 0 reposts · 0 likes · 22 views
Modescellar 2nd account @ModescellarA
@robbensinger @thoth_iv Or, more bluntly, does targeting optimization towards a given target place limits on what you can target? "meh but you can't point out where these limits might emerge" is not sufficient imo to conclude that limits are impossible.
1 reply · 0 reposts · 0 likes · 22 views