Air Force Won
@air_force_won
294 posts
burner account of someone interested in politics, neuroscience, and AI
Joined May 2024
223 Following · 12 Followers
Air Force Won@air_force_won·
@jachiam0 I’m glad it seems like you’re engaging in good faith. I’m looking forward to seeing the responses!
Joshua Achiam@jachiam0·
I'm going to make a request for some basics from the Pause folks: please outline a practicable version of a pause. Do you mean no training runs above a certain scale? Do you mean furlough the researchers indefinitely? What are you specifically asking for?
David Krueger@DavidSKrueger

A week from today, we will be at Anthropic, OpenAI, and xAI, demanding that leaders agree to a conditional AI pause. These companies are recklessly endangering all of our lives. Their excuse is that they can't pause unilaterally. So they must commit to pausing if others do.

Richard Ngo@RichardMCNgo·
And I think Trump is better summarized as chaotic + provocative than actively authoritarian. He said wild stuff in 2016 but almost all of his subsequent term was uneventful. And I expect his new coalition will rein in serious misbehavior, similar to how his last coalition did.
FIRE@TheFIREorg·
Brendan Carr’s authoritarian warning — that networks risk their broadcasting licenses for Iran war reporting that the government doesn’t like — is outrageous. When the government demands the press become a state mouthpiece under the threat of punishment, something has gone very wrong.

In 2019, Carr said: “Should the government censor speech it doesn’t like? Of course not. The FCC does not have a roving mandate to police speech in the name of the ‘public interest.’” But today, Carr cites the “public interest” to blatantly threaten news outlets because the president doesn’t like their reporting.

Again and again, Carr’s tenure as FCC chairman has been marked by his shameless willingness to bully and threaten our free press. But even by Carr’s standards, today’s hypocrisy is shocking — and dangerous.

The American people demand uncensored news about the men and women serving in our armed forces. Our right to a free press is one of the core American freedoms those in uniform have sworn to support and defend. It is long past time for our government officials to remember their own oaths to uphold the Constitution — starting with the First Amendment.
Brendan Carr@BrendanCarrFCC

Broadcasters that are running hoaxes and news distortions - also known as the fake news - have a chance now to correct course before their license renewals come up. The law is clear. Broadcasters must operate in the public interest, and they will lose their licenses if they do not. And frankly, changing course is in their own business interests, since trust in legacy media has now fallen to an all-time low of just 9% and broadcasters are ratings disasters. The American people have subsidized broadcasters to the tune of billions of dollars by providing free access to the nation’s airwaves. It is very important to bring trust back into media, which has earned itself the label of fake news. When a political candidate is able to win a landslide election victory in the face of hoaxes and distortions, there is something very wrong. It means the public has lost faith and confidence in the media. And we can’t allow that to happen. Time for change!

Air Force Won@air_force_won·
@xuanalogue Do you think a lull is worthwhile, and it's just a matter of timing when would be the right moment? Cause I would assume a lull (if one thinks it's a good idea) would be best prior to ASI, but perhaps I'm misunderstanding and/or you think ASI is more than a decade away
xuan (ɕɥɛn / sh-yen)@xuanalogue·
Perhaps it's worth the slowdown if you really think something like super-intelligence will arrive in 2-5 years *and* the lull will be productively used to ensure safety, but as someone who thinks there's decades of AI progress ahead, I think the calculus is far from obvious.
xuan (ɕɥɛn / sh-yen)@xuanalogue·
Opinion held w uncertainty, but I think data center moratorium advocates are going to be surprised when it accelerates R&D into AI systems that are far more data + energy + compute-efficient than today's systems, and far less subject to central political control.
Samuel Hammond 🦉@hamandcheese·
The AI buildout is going about as fast as it can under the circumstances. Institutional nothing-ever-happens-ism has been irreversibly broken. History is alive again and the US has an actual executive function for the first time in decades. Regulation is being slashed en bloc. The FDA is embracing Bayesianism and IRB reform. The NIH and NSF are being overhauled. Nuclear energy is back. Conflicts are being resolved left and right, and the people of Venezuela may now even get a real taste of freedom. Can't complain.

Even the area I've been most critical of (approving chip exports to China) has not yet actually materialized, in part because the BIS rule makes license approvals conditional on not displacing US customers. Likewise, I've been very critical of the April 2 tariffs, but they've been less bad than I anticipated (mainly due to walk-backs) and actually quite effective at extracting security and investment commitments.
Joey Politano 🏳️‍🌈@JosephPolitano·
what I can’t stand most about people like Sam who hard-pivoted into Trumpism is that they treat the rest of us like we are incapable of remembering anything that happened pre-2024. guy who now supports mass deportations wants you to forget when he was an open borders libertarian
Samuel Hammond 🦉@hamandcheese

@JosephPolitano When was I a turbo free trade open borders guy? I've always been critical of cosmopolitan ethical theories for being detached from the institutional forms that constitute normative commitments, particularly nation-states. Just one example -- nationalaffairs.com/publications/d…

Air Force Won@air_force_won·
I find it instructive how people psychologically come to grips with AI's power. A running thread with examples
Air Force Won@air_force_won·
@tysonbrody Is the solution for Dems to always have a big name file in every somewhat competitive Senate (and Congress) race, just in case a party pulls this trick?
Julian@mealreplacer·
Stop scrolling! You’ve been visited by esteemed philosopher Robert Long. Comment “good evening Robert” if you are currently having a subjective experience.
Air Force Won@air_force_won·
@yonashav Are all of our outcomes especially correlated if some billionaires can hide in bunkers or fly into space? I’m reminded of Don’t Look Up
Yo Shavit@yonashav·
I just want to remind everyone that we do actually all have to ride through takeoff together, all parties will remember how other parties treated them, and this is far from the last repetition of this game. (If I had to guess, this is like round 2 of ~30.)
Citrini@Citrini7·
@ContrarianCurse Just a reminder: NVDA is going to skull fuck earnings regardless of how high memory goes and every day it goes down is a better entry for when that happens.
SuspendedCap@ContrarianCurse·
Saw someone post today “somehow we are in a bear market for AI!” because Nvidia is off 15%. It’s because of memory. I’ve tweeted it many times. This raises capex budgets across the board. It raises opex via cloud costs for many companies in a material way. It absolutely smokes consumer electronics. I literally tweeted “wonder how long it’ll take the market to realize $50 MU earnings fucks the entire hardware ecosystem.” Well, it very well could be happening.
Air Force Won@air_force_won·
@robbensinger I think part of the reason so many EAs in particular are susceptible to the “room where it happens” temptation, which you’re spot on about, is that many are competitive and status-seeking and likely became EAs because it was a way to feel superior to others who are “less moral”
Rob Bensinger ⏹️@robbensinger·
I've generally been pretty skeptical of criticisms of EA like "'earning to give' is a risky meme to spread because it might encourage people to do unethical things for money" (because EAs were aware of this concern and warned about it a bunch, and because at the time it struck me as pretty rare for the best money-making options to be really destructive and commons-burning) or "SBF is symptomatic of a pervasive rot in EA" (I felt that an important subset of EA leadership did a shit job at noticing and propagating red flags about SBF's character, but I had my own model of typical EAs' strengths and weaknesses, and SBF struck me as a weird case in many ways).

But I notice myself slowly coming around as I observe the dynamics at AI labs. Like, I feel like I might have made better inside-view predictions about Anthropic and OpenAI if I'd done more "naively assume that lots of EA-ish people are similar to SBF and his sphere":

- prone to rationalizing unethical and harmful behavior, like promise-breaking and deception, based on pretty shallow utilitarian reasoning
- comfortable with crazy, out-of-distribution levels of risk-taking
- willing to impose huge externalities on others, without asking their consent
- fixated on power / influence / status / being in the room where it happens.

I think when pressed, people often had bad reasons for thinking these criticisms applied to all that many non-SBF EAs. But unfortunately, the existence of a bad argument for X isn't strong Bayesian evidence against X. And it's easy to get polarized against the truth when you hear enough bad arguments for it, day-in day-out.
Oliver Habryka@ohabryka

I would be pretty surprised if the employees in-question here end up saying they were deceived. Also, these are high-level enough employees that it's unclear what it even means for them to be "deceived". Deceived by whom? They drafted the RSP! They almost certainly were also involved in the decision to change it. They benefitted hugely from this by getting social license to work at Anthropic and having people get off their back, and they are now at least deca-millionaires (or often billionaires).

Air Force Won@air_force_won·
@CharlieBul58993 He’s been conspicuously silent on the chips export deal (as well as the UAE quid pro quo), silent as in hasn’t once talked about it publicly. That silence makes me question his integrity
Charlie Bullock@CharlieBull0ck·
@air_force_won I’m not subtweeting Dean here, Dean’s takes on this issue have been good & consistent
Alasdair Phillips-Robins@alasdairpr·
@AdamThierer Come on, "open-ended ability to regulate AI via executive decrees"? The Biden admin used the DPA to survey AI companies about large compute clusters and frontier models; it didn't create any substantive rules affecting their conduct.
Adam Thierer@AdamThierer·
The Biden Admin argued that the Defense Production Act (DPA) gave them the open-ended ability to regulate AI via executive decrees, and now the Trump Admin is using the DPA to threaten private AI labs with quasi-nationalization for not being in line with their wishes. In both cases, it's an abuse of authority. As I noted in congressional testimony two years ago, we have flipped the DPA on its head "and converted a 1950s law meant to encourage production, into an expansive regulatory edict intended to curtail some forms of algorithmic innovation." This nonsense needs to end regardless of which administration is doing it. The DPA is not some sort of blanket authorization for expansive technocratic reordering of markets or government takeover of sectors. Congress needs to step up to both tighten up the DPA such that it cannot be abused like this, and then also legislate more broadly on a national policy framework for AI.
Dean W. Ball@deanwball

We should be extremely clear about various red lines as we approach and/or cross them. We just got close to one of the biggest ones, and we could cross it as soon as a few days from now: the quasi-nationalization of a frontier lab.

Of course, we don’t exactly call it that. The legal phraseology for the line we are approaching is “the invocation of the Defense Production Act (DPA) Title I on a frontier AI lab.” What is the DPA? It’s a Cold War era industrial policy and emergency powers law. Its most commonly used power is Title III, used for traditional industrial policy (price guarantees, grants, loans, loan guarantees, etc.). There is also Title VII, which is used to compel information from companies. This is how the Biden AI Executive Order compelled disclosure of certain information from frontier labs. I only mention these other titles to say that not all uses of the DPA are equal.

Title I, on the other hand, comes closer to government exerting direct command over the economy. Within Title I there are two important authorities: priorities and allocations. Priorities authority means the government can put itself at the front of the line for arbitrary goods. Allocations authority is the ability of the government to directly command the production of industrial goods. Think, “Factory X must make Y amount of Z goods.” The government determines who gets what and how much of it they get. This is a more straightforwardly Soviet power, and it is very rarely used.

This is the power DoD intends to use in order to command Anthropic to make a version of Claude that can choose to kill people without any human oversight. What would this commandeering look like, in practice? It would likely mean DoD personnel embedded within Anthropic exercising deep involvement over technical decisions on alignment, safeguards, model training, etc.

Allocations authority was used most recently during COVID for ventilators and PPE, and before that during the Cold War. It is usually used during acute emergencies with reasonably clear end states. But there is no emergency with Anthropic, save for the omni-emergency that characterizes the political economy of post-9/11 U.S. federal policy. There’s no acute crisis whose resolution would mean the Pentagon would stop commandeering Anthropic’s resources. That is why I believe that in the end this would amount to quasi-nationalization of a frontier lab. It’s important to be clear-eyed that this is what is now on the table.

The Biden Administration would probably have ended up nationalizing the labs, too. Indeed, they laid the groundwork for this in term one. I discussed this at the time with fellow conservatives and I warned them: “This drive toward AI lab nationalization is a structural dynamic. Administrations of both parties will want to do this eventually, and resisting this will be one of the central challenges in the preservation of our liberty.” I am unhappy, but unsurprised, that my fear has come true, though there is a rich irony to the fact that the first administration to invoke the prospect of lab nationalization is also one that understands itself to have a radically anti-regulatory AI policy agenda. History is written by Shakespeare!

There is a silver lining here: if Democrats had originated this idea, it would have been harder to argue against, because of the overwhelming benefit of the doubt conventionally extended to the left in our media, and because a hypothetical Biden II or Harris admin would have done it in a carefully thought-through way. So it is convenient, if you oppose nationalization, that it’s a Republican administration that first raised the issue—since conventional elite opinion and media will be primed against it by default—and that the administration is raising it in such a non-photogenic manner.

This Anthropic thing may fizzle, and some will say I am overreacting. But this Anthropic thing may also *not* fizzle, and regardless this issue is not going away.

hypersodium@hypersodium·
@deanwball "A private company regulating the military’s use of AI also doesn’t sound quite right!" This is a critical misunderstanding of the issue: Anthropic isn't seeking to regulate the military's use of AI broadly, but to maintain extant contractual restrictions on the use of their model.
Dean W. Ball@deanwball·
A primer on the Anthropic/DoD situation: DoD and Anthropic have a contract to use Claude in classified settings. Right now Anthropic is the only AI company whose models work in classified contexts. The existing contract, signed by both parties and in effect, prohibits two uses of Anthropic’s models by the military:

1. Surveillance of Americans in the United States (as opposed to Americans abroad).
2. The use of Claude in autonomous lethal weapons, which are weapons that can autonomously identify, track, and kill a human with no human oversight or approval. Autonomous killing of humans by machines.

On (2), Anthropic CEO Dario Amodei’s public position is essentially that autonomous lethal weapons controlled by frontier AI will become essential faster than most people realize, but that the models aren’t ready for this *today.* For Anthropic, these things seem to be a matter of principle. It’s worth noting that when I speak with researchers at other frontier labs, their principles on this are similar, if not often stricter.

For DoD, however, there is another matter of principle: the military’s use of technology should only ever be constrained by the Constitution or the laws of the United States. One could quibble (the government enters into contracts, like anyone else), but the principle makes sense. A private company regulating the military’s use of AI also doesn’t sound quite right!

So, the military has three options:

1. They could cancel Anthropic’s contract and find some other frontier lab (ideally several) to work with.
2. They could identify Anthropic as a supply chain risk, which would ban all other DoD suppliers (i.e., a large fraction of the publicly traded firms in America) from using Anthropic in their fulfillment of DoD contracts. This is a power used only for foreign adversary companies as far as I know. Activating this power would cost Anthropic a lot of business—potentially quite a lot—and give investors huge skepticism about whether the company is worth funding for the next round of scaling. Capital was a major constraint anyway, but this makes it much harder. This option could be existential for Anthropic.
3. They could activate Title I of the Defense Production Act, an authority intended for command-and-control of the economy during wars and emergencies. This is really legally murky, and without going into detail, I feel reasonably confident this would backfire for the administration, resulting in courts limiting the use of the DPA.

Option 1 is obviously the best. This isn’t even close, and I say this as someone who shares DoD’s principled concerns about the control by private firms over the military’s use of technology. Even the threats do damage to the US business environment, and rightfully so: these are the strictest regulations of AI being considered by any government on Earth, and it all comes from an administration that bills itself as (and legitimately has been) deeply anti-AI-regulation. Such is life. One man’s regulation is another man’s national security necessity.
Joe Carlsmith@jkcarlsmith·
.@AmandaAskell and I are recording an audio version of Claude’s Constitution, and we’re planning to include an additional section where we answer some questions about the document. If you have questions you’re especially curious about, feel free to drop them in the replies.
Richard Ngo@RichardMCNgo·
This is primarily a problem with the EA-affiliated side of AI safety. Unfortunately, that’s most of the field by now. EAs don’t have memetic defenses against conflating “do the most good” with “gain the most power” (or sometimes just “be in the room where it happens”).
Anton Leicht@anton_d_leicht·
@yonashav could always just have all the nations take stakes in agi labs
Yo Shavit@yonashav·
UBI is a ploy to prop up agi lab TAM