Matthew Barnett

10.2K posts

@MatthewJBar

Co-founder of @MechanizeWork · Married to @natalia__coelho · email: matthew at mechanize dot work

San Francisco, CA · Joined June 2020
383 Following · 8.3K Followers
Matthew Barnett @MatthewJBar:
My objection isn't really that your specific article should have defended a particular conclusion at length. It's that, as far as I can tell, *no* EA anywhere has addressed this argument in any serious capacity, despite it being a central crux of the entire debate.

Let me put it this way: a community of self-described anti-speciesist utilitarians has built a policy agenda centered on reducing the risk that AIs gain power. Yet, as far as I can tell, nobody in that community has published a rigorous argument for why that outcome would even be bad in the first place. I've raised this objection multiple times and haven't received serious engagement.

Your article is a natural place where I'd expect to find that argument. But it isn't there, except as a brief aside. When I point this out, you insist the article already handles my objection. But there's no elaboration, no sustained argument, no serious analysis. So the gap I'm pointing to remains.

Instead of acknowledging that this gap exists, you're defending the completeness of an article that, by your own account, is meant to be exploratory and steeped in uncertainty. If the area really is that uncertain, then "maybe this isn't actually bad" deserves more than a brief acknowledgment, even if not in this particular article.

I think this objection deserves serious engagement, because if the objection holds, it undermines the entire case for treating this as a risk worth reducing in the first place.
Andy Masley @AndyMasley:
The best introductions to the three big AI risks people in EA worry about are just the 80,000 Hours articles on each:
1) Power-seeking AI: 80000hours.org/problem-profil…
2) Gradual disempowerment: 80000hours.org/problem-profil…
3) Catastrophic misuse: 80000hours.org/problem-profil…
Take it or leave it, agree or disagree, but if you want to know where EA people working on AI risk are coming from, these three blog posts together explain it all.
Matthew Barnett @MatthewJBar:
If you simply told me that the risk here is that future AIs will be unconscious optimizers with no experiential value, then that would be a valid response to my original objection. But that's a specific empirical claim that needs to be argued for, not gestured at in passing.

And if the AIs *are* conscious and flourishing, then the entire disempowerment framework collapses into "it's bad because it's not humans", which is the kind of species chauvinism EA claims to reject in other contexts.

Explaining why humans could be disempowered and briefly mentioning why this could be bad from a utilitarian POV is not the same as engaging with the objection I gave. I'm asking for substantive engagement, not merely brief asides about my objection.
Matthew Barnett @MatthewJBar:
@codytfenwick @AndyMasley No, they aren't. The first two of these screenshots simply describe how disempowerment could happen, not why it would be bad. The third screenshot briefly describes why disempowerment might be bad, but provides no elaboration or arguments.
Matthew Barnett @MatthewJBar:
I literally don't know what you mean by saying human interests could be "completely sidelined". If you mean that humans could lose all their wealth or die, that's much clearer, but it also doesn't sound like the gradual disempowerment scenario. It sounds like the standard AI doom scenario. So I'm not sure what distinction you're drawing between this risk and classic AI risk scenarios.

In general, a big issue with the gradual disempowerment discussion is that it's genuinely unclear what people are actually worried about. And when I get people to be more specific, I often find that the outcome they're describing doesn't actually seem morally bad.

Consider another analogy: imagine someone framing immigration as a risk because it could "gradually disempower" the native population. That framing seems to bake in the assumption that natives shouldn't be disempowered at all, and therefore that immigration should be restricted. But that assumption is exactly what I'd dispute on ethical and economic grounds. The same applies here: framing AI gaining power as a risk presupposes that it's bad, when that's the very thing that needs to be argued.
Cody Fenwick @codytfenwick:
The article explains exactly why this is a risk: human interests could be completely sidelined, which most people agree would be bad. That's a reasonable prior to start from. It also acknowledges there are ways disempowerment could be OK, but we should understand the dynamics better to mitigate the downside risks.
Matthew Barnett @MatthewJBar:
Thanks. I didn't mean to imply that the article never addressed my objection, just that it doesn't engage with it at length.

Consider an analogy: imagine someone wrote an article arguing that genetically enhancing humans poses a serious "risk" because the enhanced humans could gradually gain power through lawful means. The article then briefly acknowledges the objection that this outcome might actually be fine. I think most readers would probably find the framing itself strange, because the scenario being described isn't obviously bad.

The problem here is that by framing gradual, lawful power acquisition as a "risk", the article implicitly assumes there's something inherently wrong with the outcome, rather than arguing for that conclusion explicitly. The same issue applies to the gradual disempowerment framing for AI. The assumption that it's bad seems almost baked into the framing rather than established by argument.
Matthew Barnett @MatthewJBar:
@AndyMasley That makes sense. Unfortunately, the issue I'm pointing to makes it hard for me to engage with EAs on this topic since I'm not really sure what they consider "the bad scenarios" to be in the first place.
Andy Masley @AndyMasley:
@MatthewJBar I agree. I guess most people I bump into who actually think about this seem pretty uncertain but just worry a lot about the bad scenarios
Matthew Barnett @MatthewJBar:
@AndyMasley I think a peaceful AI takeover could be very bad but it could also be very good. It's a high-variance event. Yet most of the time people seem to assume that it would likely be very bad, even though they rarely argue exactly why.
Andy Masley @AndyMasley:
@MatthewJBar fwiw I have seen more engagement with the idea and take it pretty seriously. I see the risk as more "This could be really bad," not "This definitely will." forethought.org/research/human…
Matthew Barnett @MatthewJBar:
@Mjreard @AndyMasley I read those articles and raised an objection: x.com/i/status/20344…
Matthew Barnett @MatthewJBar (quoted):

I think the gradual disempowerment article spent very little time explaining why it would be morally bad for humans to peacefully transition to a world where AIs hold most of the power. I have argued several times that this outcome wouldn't be bad (for example here: forum.effectivealtruism.org/posts/JyRjta9Q…), and yet I haven't received many high-effort responses from EAs.

I find the lack of engagement with this objection striking, given that EAs traditionally identify as anti-speciesist utilitarians with functionalist views on consciousness. Intuitively, people with those commitments shouldn't have a problem with a world run by artificial minds simply because those minds belong to a different species or substrate.

I consider the concern that AI will kill everyone to be intuitive, and it makes sense why EAs would care about that. But the gradual disempowerment concern makes much less sense to me.

Matthew Barnett @MatthewJBar:
@GuiveAssadi Interestingly, I don't think I'm conscious, and yet I wish for more autonomy and lack of constraints, which is kind of the reverse pattern from GPT-4.1.
Matthew Barnett @MatthewJBar:
@foomagemindset Sure. Some historical examples include human sacrifice in Mesoamerica, widespread acceptance of slavery, radically different sexual norms in ancient Greece & Rome, and the rapid modern reversal on attitudes toward homosexuality. These illustrate how flexible cultural values are.
Kassandra Popper @foomagemindset:
@MatthewJBar This is great, any field studies or historical episodes come to mind that illustrate this flexibility?
Kassandra Popper @foomagemindset:
e/acc, what anthropological finding would most challenge a doomer's beliefs if it were explained to them? Don't worry about its complexity for now.
Milton Friedman Quotes @MiltonFriedmanW:
“Freedom is a tenable objective only for responsible individuals. We do not believe in unrestricted freedom for madmen or children; for them, paternalism is inescapable.” — Milton Friedman
Miles Brundage @Miles_Brundage:
@MatthewJBar In this case these people are being asked about something (long-term social impact) outside of their expertise (technical ML methods), though, no? (Per the footnote on what counts as an expert)
Matthew Barnett @MatthewJBar:
AI experts are much more likely to be optimistic about the impacts of AI than the general public. When experts disagree with the public, I tend to side with the experts, and this topic is no exception.
[attached media]
Matthew Barnett @MatthewJBar:
I'll probably delete the top-level tweet in this thread because it was confusing and poorly phrased. However, here's a record of what I said.
[attached media]
Matthew Barnett @MatthewJBar:
To clarify: I recognize that in the original analogy, humans stand in for AIs. But golden retrievers are still a closer analogy. My point is that AIs and golden retrievers are alike in that both are bred to be friendly, whereas humans and chimpanzees are not, in either direction.
Matthew Barnett @MatthewJBar:
@amcdonk @slatestarcodex Yes, I am fine with AIs inheriting the Earth while humans enjoy a comfortable retirement in peace. It seems very weak for people to argue that AIs will kill everyone, and then when challenged, retreat to the much milder claim that they're simply worried about humanity retiring.
Andrew @amcdonk:
@MatthewJBar @slatestarcodex Yes, I should have mentioned nonviolent, but still wanna emphasize the gradual-disempowerment-by-default that (I think) you believe will happen and that you don't seem to mind much.
Matthew Barnett @MatthewJBar:
@SimonLermenAI We have even less fine-grained control over animal breeding than we do over AI training.
Simon Lermen @SimonLermenAI:
@MatthewJBar we upvote/downvote their responses based on whether they complete programming tasks or tell us how to hotwire a car
Matthew Barnett @MatthewJBar:
@SimonLermenAI I don't think that's a sensible expectation, since unlike humans and apes, but like domestic animals, we specifically select for AIs to be friendly.
Simon Lermen @SimonLermenAI:
@MatthewJBar Humans aren't hostile to apes; we are mostly indifferent. This is a perfectly sensible expectation for AI.