
Tryfecta⏸️

Here are some questions I wish "Pause" and "Stop" advocates would address:

1. Assuming we achieve the desired policy goal through a bilateral US/China agreement, what specific metrics or objectives would need to be satisfied in advance? Who decides whether we have satisfied them? What if one party believes we have satisfied them but the other does not?

2. If the goal is achieved through a bilateral US/China agreement, would we need capital controls to ensure that U.S. investors cannot fund semiconductor fabs, data centers, or AI research labs in countries other than the U.S. and China?

3. Would we need to revoke the passports of U.S.-based AI researchers and semiconductor engineers to prevent them from leaving America to join AI-related ventures elsewhere? How else would the U.S. and China keep researchers within their borders?

4. How should we grapple with the fact that (2) and (3) are common features of autocratic regimes?

5. Do the above questions mean that this really should be a global agreement, signed by all countries on Earth, or at least those with the theoretical ability to host large-scale data centers (probably Vanuatu doesn't need to be on board)?

Melania: The future of AI is personified. It will be formed in the shape of humans. Very soon, artificial intelligence will move from our mobile phones to humanoids that deliver utility. They will fit in well. Imagine a humanoid educator named Plato.

"Our product will make you economically useless, and possibly kill you" is not a value proposition. AI leaders need to change their public messaging, and fast. noahpinion.blog/p/ai-has-the-w…

Pause AI rhetoric is predicated on the notion that AI companies are recklessly racing toward dangerous tech and that a government-controlled pause button is therefore necessary. But this is really hard to reconcile with the fact that the government is attempting to destroy an AI company because *the government* is racing toward plausibly dangerous AI uses (Sec. Hegseth has stated in official directives that he wants to deploy AI into critical systems regardless of whether it is aligned, for example) and *the company* is pushing back.

The roles are totally reversed from the logic that Pause AI and, frankly, other AI safety advocates confidently assumed for years. It is *industry* that is in favor of alignment and at least somewhat measured deployment, and government whose actions seem much closer to reckless.

I predicted this for years. I said, in particular, that pauses, bans, and licensing regimes gave government a dangerously high degree of control over AI, and that the incentives of government are much more dangerous than those of private industry facing competitive market incentives. I believe the events of the last month are good evidence in favor of my view. At this point, if you are an AI safety advocate whose policy proposals do not wrestle seriously with the brutal political-economic reality of the state and AI, I don't take you seriously.

It gives me no pleasure to have been right about this, by the way. The state has an incredibly strong structural incentive to centralize power using AI, and we are, all of us, not so empowered to stop it. I am quite concerned about this.

Agree. Strong government controls over AI should concern us more than market competition between AI companies, even as we acknowledge that such competition brings its own risks.

I spent a weekend at Stanford recently, which is where, in 2023, I did much of my formative thinking on AI. The Anthropic-DoW affair has tested that early intellectual foundation more than anything else, so I found myself walking around campus, reflecting on what I learned in 2023.


