Richard Y. Chappell🔸
@RYChappell

5.7K posts

Academic Philosopher. Posts better stuff at https://t.co/jwkU1JxzCj 🔸10% Pledge #54 with @GivingWhatWeCan

Miami, FL · Joined September 2011
251 Following · 1.9K Followers
Jeremy Pierce@TheParableMan·
@RYChappell In one sense all arguments are question-begging, because they assume premises that the other side might not grant. Jose Benardete pointed this out in class one time when I was a student. I think you're making the same point.
Richard Y. Chappell🔸@RYChappell·
I agree there's overlap between the movements. But there's still a difference between peripheral associates and paradigm representatives of a view, and it's worth being clear about which is which! Even more importantly, it's worth correctly identifying what core reasons underlie various disagreements. It makes a difference whether the crux of your disagreement concerns empirical p(doom) estimates or cluelessness worries or liberal vs authoritarian political philosophies or moral disagreements about how much future generations matter in principle, etc.
Neil Chilson ⤴️⬆️🆙📈 🚀
X risk is an EA longtermist concept, AI x risk grew out of that community, yud is early in that network too. I’m sure you can “no true EA” your way out of any association and maybe that’s a fun academic exercise, but the political reality is that EA and AI x-risk are highly overlapping communities. I will treat them as such for now. I’m sure I could learn a lot more from you and I look forward to it in the future.
Richard Y. Chappell🔸@RYChappell·
I don't think Yudkowsky is an EA or a longtermist? (He thinks AGI is overwhelmingly likely to soon kill everyone who currently exists. Any moral theory combined with that empirical belief is presumably going to have some extreme implications.) And afaict, even his position doesn't seem to rest on naive instrumentalism but rather something more heuristic like "treating AI like nuclear weapons". I don't think that enforcing nuclear non-proliferation treaties is well characterized as "oppressive". If the Pause folks are mistaken, it's due to excessive precaution (like the anti-nuclear energy folks) -- kind of the opposite of a recklessly naive approach to cost-benefit analysis! So I think your diagnostics are way off, here. (It's fine to disagree on the policy merits with the various folks you mention, of course. I disagree with them myself! I just don't think you've charitably identified the reasoning that underlies their view.)
Richard Y. Chappell🔸@RYChappell·
@neil_chilson @mattyglesias I agree that sort of naive instrumentalist calculation is bad. That's why one should oppose naive instrumentalism (a specific and clearly daft kind of decision procedure)! It's not like "some oppression now is worth it to save many current lives" would be any better.
Neil Chilson ⤴️⬆️🆙📈 🚀
Of course I believe liberalism is better than tyranny in the long run. I also think it's better in the short run. So that's an easy case. As are almost all the rest of the ones you pointed out, because the tradeoffs are easy -- there are clear current benefits. What I am much more concerned about is a utilitarian calculation that concludes, "some oppression now is worth it to save many future lives."
Richard Y. Chappell🔸@RYChappell·
Decisions don't require anything in the vicinity of "certainty". But if you don't even think we can form reasonable *expectations* that tyranny is worse than liberalism in the long run, your grounds for opposition seem greatly weakened! There are hard cases and easy cases. I think we should confidently regard policies like nuclear non-proliferation, pandemic preparedness, opposition to authoritarian lock-in, etc., as robustly good for humanity's trajectory. The role of longtermism is mostly just a matter of getting the scale right: recognizing that getting these sorts of policies right may be *even more important* than saving individual lives. (When it's difficult to make a difference, one needs the higher potential payoff to make it a better bet than doing easier lower-stakes stuff like providing antimalarial bednets to the global poor.)
Neil Chilson ⤴️⬆️🆙📈 🚀
@RYChappell @mattyglesias I’m not denying that the future matters - it’s basically the only thing that does (can’t change the past!). I am, however, deeply skeptical that the effects of actions taken today can be predicted with any useful certainty 20 or 30 years out, let alone 1000.
Richard Y. Chappell🔸@RYChappell·
For a boundary dispute to be substantive rather than merely terminological, there must be a further property—distinct from the candidate underlying properties—for the rival views to aspire to track. Only then is “correspondence to reality” a matter of substantive success rather than semantic stipulation.
Richard Y. Chappell🔸@RYChappell·
I like the principle, but too many referees are unwilling to give "accept" verdicts when personally unconvinced by an argument. (I've actually had a referee grant that I had an "ingenious" argument, but for very detailed reasons they remained "highly skeptical". This was at a journal that limits R&Rs, so they opted to reject a paper that they conceded was extremely interesting and worth engaging with in depth!) As long as referees are like this, we need R&Rs to give authors a chance to explain to mistrustful referees and editors why their concerns are misguided. Indeed, my radical view is to go the opposite direction and routinely give authors a chance to respond to referee comments before the editors reach a decision: philosophyetc.net/2020/07/what-i…
Richard Y. Chappell🔸@RYChappell·
It wasn't meant to be public though, was it? If he's just communicating honestly to his employees, that seems... good, in principle? Just unfortunate that it turned out there's at least one leaker that he can't trust. That's the kind of thing that forces most companies to stick to PR blather. I'd much rather have an employer who was honest and transparent internally than one who treated every internal communication as a public press release.
Clive Chan@itsclivetime·
My issue with this is that out of all the things he could be doing to make things turn out OK, he decides to escalate with a bunch of ad hominem statements. I would feel much safer about the trajectory of AGI if he had simply drawn the red line and declined to comment further.
Richard Y. Chappell🔸@RYChappell·
Most depressing objection to longtermism and animal suffering as cause areas?
prerat@prerat·
who's building effective virtue ethics
Richard Y. Chappell🔸@RYChappell·
@captgouda24 Reading his answer in reverse order, it sounds like: 1. They will defer to the DoD on legal matters, and not turn it off based on their own (disagreeing) judgment of the law. 2. They will turn it off if the DoD breaks the law (which, by 1, is determined by the DoD).
Nicholas Decker@captgouda24·
So much of this comes down to whether we believe they have the mettle to do so! I appreciate that Mr. Altman is having to thread a narrow path, reassuring us without alienating the DoD. But it almost doesn't matter! I'm not sure how he could signal resolve without action.
Sam Altman@sama

@mcbyrne Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Richard Y. Chappell🔸 retweeted
Mikhail Samin@Mihonarium·
> more guardrails than any previous agreement
Bro lol do you think we can’t read? You literally permit the use of your AI for “all lawful purposes”, including for autonomous weapons and mass domestic surveillance, when applicable laws, regulations, and DoW policies allow it
OpenAI@OpenAI

Yesterday we reached an agreement with the Department of War for deploying advanced AI systems in classified environments, which we requested they make available to all AI companies. We think our deployment has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Here's why: openai.com/index/our-agre…

Richard Y. Chappell🔸 retweeted
Joey Politano 🏳️‍🌈@JosephPolitano·
the current position of the US government is that NVIDIA should be allowed to sell chips directly to China but banned from using Claude, because the latter is a larger national security risk. that is the level of absolute insanity coming out of the White House & Pentagon nowadays
Richard Y. Chappell🔸 retweeted
Dean W. Ball@deanwball·
Think about the power Hegseth is asserting here. He is claiming that the DoD can force all contractors to stop doing business of any kind with arbitrary other companies. In other words, every operating system vendor, every manufacturer of hardware, every hyperscaler, every type of firm the DoD contracts with—all their services and products can be denied to any economic actor at will by the Secretary of War. This is obviously a psychotic power grab. It is almost surely illegal, but the message it sends is that the United States Government is a completely unreliable partner for any kind of business. The damage done to our business environment is profound. No amount of deregulatory vibes sent by this administration matters compared to this arson.
Secretary of War Pete Hegseth@SecWar

This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic. Instead, @AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives. Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered. In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service. America’s warfighters will never be held hostage by the ideological whims of Big Tech. 
This decision is final.

Richard Y. Chappell🔸@RYChappell·
A claim I find interesting and underexplored: either consequentialism is correct or morality is lamentable and beneficent motivations should rationally lead us to coordinate against it. My latest post explores how to make sense of the latter disjunct. goodthoughts.blog/p/replacing-un…
Richard Y. Chappell🔸@RYChappell·
To distinguish their view, non-consequentialists should be able to point to norms that they endorse even though they make welfare subjects overall worse-off. But then shouldn’t we prefer alternative norms that are better for us (collectively)? goodthoughts.blog/p/why-care-abo…