Richard Y. Chappell🔸

5.7K posts
@RYChappell

Academic Philosopher. Posts better stuff at https://t.co/jwkU1JxzCj 🔸10% Pledge #54 with @GivingWhatWeCan

Miami, FL · Joined September 2011
251 Following · 1.9K Followers
Pinned Tweet
Richard Y. Chappell🔸@RYChappell·
Maybe I'll take this opportunity to re-share some of my favourite posts. 🧵 (1) 'My Big Ideas' highlights and explains five major themes from my work goodthoughts.blog/p/my-big-ideas
Jeremy Pierce@TheParableMan·
@RYChappell In one sense all arguments are question-begging, because they assume premises that the other side might not grant. Jose Benardete pointed this out in class one time when I was a student. I think you're making the same point.
Richard Y. Chappell🔸@RYChappell·
I agree there's overlap between the movements. But there's still a difference between peripheral associates and paradigm representatives of a view, and it's worth being clear about which is which! Even more importantly, it's worth correctly identifying what core reasons underlie various disagreements. It makes a difference whether the crux of your disagreement concerns empirical p(doom) estimates or cluelessness worries or liberal vs authoritarian political philosophies or moral disagreements about how much future generations matter in principle, etc.
Neil Chilson ⤴️⬆️🆙📈 🚀@neil_chilson·
X risk is an EA longtermist concept, AI x risk grew out of that community, yud is early in that network too. I’m sure you can “no true EA” your way out of any association and maybe that’s a fun academic exercise, but the political reality is that EA and AI x-risk are highly overlapping communities. I will treat them as such for now. I’m sure I could learn a lot more from you and I look forward to it in the future.
Richard Y. Chappell🔸@RYChappell·
I don't think Yudkowsky is an EA or a longtermist? (He thinks AGI is overwhelmingly likely to soon kill everyone who currently exists. Any moral theory combined with that empirical belief is presumably going to have some extreme implications.) And afaict, even his position doesn't seem to rest on naive instrumentalism but rather something more heuristic like "treating AI like nuclear weapons". I don't think that enforcing nuclear non-proliferation treaties is well characterized as "oppressive". If the Pause folks are mistaken, it's due to excessive precaution (like the anti-nuclear energy folks) -- kind of the opposite of a recklessly naive approach to cost-benefit analysis! So I think your diagnostics are way off, here. (It's fine to disagree on the policy merits with the various folks you mention, of course. I disagree with them myself! I just don't think you've charitably identified the reasoning that underlies their view.)
Richard Y. Chappell🔸@RYChappell·
@neil_chilson @mattyglesias I agree that sort of naive instrumentalist calculation is bad. That's why one should oppose naive instrumentalism (a specific and clearly daft kind of decision procedure)! It's not like "some oppression now is worth it to save many current lives" would be any better.
Neil Chilson ⤴️⬆️🆙📈 🚀@neil_chilson·
Of course I believe liberalism is better than tyranny in the long run. I also think it's better in the short run. So that's an easy case, as are almost all the rest of the ones you pointed out, because the tradeoffs are easy -- there are clear current benefits. What I am much more concerned about is a utilitarian calculation that concludes, "some oppression now is worth it to save many future lives."
Richard Y. Chappell🔸@RYChappell·
Decisions don't require anything in the vicinity of "certainty". But if you don't even think we can form reasonable *expectations* that tyranny is worse than liberalism in the long run, your grounds for opposition seem greatly weakened! There are hard cases and easy cases. I think we should confidently regard policies like nuclear non-proliferation, pandemic preparedness, opposition to authoritarian lock-in, etc., as robustly good for humanity's trajectory. The role of longtermism is mostly just a matter of getting the scale right: recognizing that getting these sorts of policies right may be *even more important* than saving individual lives. (When it's difficult to make a difference, one needs the higher potential payoff to make it a better bet than doing easier lower-stakes stuff like providing antimalarial bednets to the global poor.)
Neil Chilson ⤴️⬆️🆙📈 🚀@neil_chilson·
@RYChappell @mattyglesias I’m not denying that the future matters - it’s basically the only thing that does (can’t change the past!). I am, however, deeply skeptical that the effects of actions taken today can be predicted with any useful certainty 20 or 30 years out, let alone 1000.
Richard Y. Chappell🔸@RYChappell·
For a boundary dispute to be substantive rather than merely terminological, there must be a further property—distinct from the candidate underlying properties—for the rival views to aspire to track. Only then is “correspondence to reality” a matter of substantive success rather than semantic stipulation.
[image attached]
Richard Y. Chappell🔸@RYChappell·
I like the principle, but too many referees are unwilling to give "accept" verdicts when personally unconvinced by an argument. (I've actually had a referee grant that I had an "ingenious" argument, but for very detailed reasons they remained "highly skeptical". This was at a journal that limits R&Rs, so they opted to reject a paper that they conceded was extremely interesting and worth engaging with in depth!) As long as referees are like this, we need R&Rs to give authors a chance to explain to mistrustful referees and editors why their concerns are misguided. Indeed, my radical view is to go the opposite direction and routinely give authors a chance to respond to referee comments before the editors reach a decision: philosophyetc.net/2020/07/what-i…
Richard Y. Chappell🔸@RYChappell·
It wasn't meant to be public though, was it? If he's just communicating honestly to his employees, that seems... good, in principle? Just unfortunate that it turned out there's at least one leaker that he can't trust. That's the kind of thing that forces most companies to stick to PR blather. I'd much rather have an employer who was honest and transparent internally than one who treated every internal communication as a public press release.
Clive Chan@itsclivetime·
My issue with this is that out of all the things he could be doing to make things turn out OK, he decides to escalate with a bunch of ad hominem statements. I would feel much safer about the trajectory of AGI if he had simply drawn the red line and declined to comment further.
Richard Y. Chappell🔸@RYChappell·
Most depressing objection to longtermism and animal suffering as cause areas?
[image attached]
prerat@prerat·
who's building effective virtue ethics
Richard Y. Chappell🔸@RYChappell·
@captgouda24 Reading his answer in reverse order, it sounds like: 1. They will defer to the DoD on legal matters, and not turn it off based on their own (disagreeing) judgment of the law. 2. They will turn it off if the DoD breaks the law (which, by 1, is determined by the DoD).
Nicholas Decker@captgouda24·
So much of this comes down to whether we believe they have the mettle to do so! I appreciate that Mr. Altman has to thread a narrow path, reassuring us without alienating the DoD. But it almost doesn't matter! I'm not sure how he could signal resolve without action.
Sam Altman@sama

@mcbyrne Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Richard Y. Chappell🔸 retweeted
Mikhail Samin@Mihonarium·
> more guardrails than any previous agreement Bro lol do you think we can’t read? You literally permit the use of your AI for “all lawful purposes”, including for autonomous weapons and mass domestic surveillance, when applicable laws, regulations, and DoW policies allow it
[image attached]
OpenAI@OpenAI

Yesterday we reached an agreement with the Department of War for deploying advanced AI systems in classified environments, which we requested they make available to all AI companies. We think our deployment has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Here's why: openai.com/index/our-agre…

Richard Y. Chappell🔸 retweeted
Joey Politano 🏳️‍🌈@JosephPolitano·
the current position of the US government is that NVIDIA should be allowed to sell chips directly to China but banned from using Claude, because the latter is a larger national security risk. that is the level of absolute insanity coming out of the White House & Pentagon nowadays