Dean W. Ball@deanwball
First of all, your organization absolutely does not represent the views of everyone who wants to "pause" or "stop" AI, so it is false to say that I "misunderstood" your proposal. I was not referring to your specific proposal in any of my tweets.
Second, I do not agree with you that any of this is "likely" to "end" civilization, exterminate humanity, etc. etc. I certainly do believe there are major risks, and I have spent years developing and advocating for policies I believe would address some of those risks. There are other risks about which I have substantial uncertainty, and I have been consistently straightforward about this uncertainty. You, on the other hand, seem to have near-certainty about every bad AI outcome, so long as it gets people on your side. This, in my mind, diminishes your credibility.
Third, you claim to want an international treaty (enforced by whom?) signed presumably by a broader group of countries than just the U.S. and China (which ones?) that would ban AI development until some future point (how would we define that point?). My observation is that any such treaty would have to involve capital controls and controls on the free movement of people, unless all countries on Earth with the ability to host large-scale data centers (i.e., almost all of them) were signatories. You claim it is merely a compute governance regime you desire, but what happens when the AI researchers and semiconductor designers and manufacturing engineers are given 9-figure offers to move to God-knows-where to work on a new AI or semiconductor venture? Does the government simply allow that flow of money and people to occur?
No. So what you *really* want is a regime that controls (optimally stops) the flow of trillions of dollars in goods (all advanced AI compute + all semiconductor manufacturing equipment, as long as it is in service of making advanced AI compute, and by the way, how would you tell the difference between a fab making smartphone chips and GPUs? Inspectors in the fabs? Who supplies the inspectors?), all AI researchers, and all investment dollars that could be tied to AI research or to computing power (what about quantum machine learning, by the way? neuromorphic computing? other new paradigms?).
It was actually charitable of me to assume that you'd want this to be enforced by e.g. existing export control regimes within a country. But it seems like you are saying, no, we wouldn't have e.g. BIS or MOFCOM do this; we'd have a new international body with "democratic control" (a globally elected president of AI? who runs the elections? are there campaigns? who is allowed to donate to said campaigns?) staffed with thousands of people, with a budget easily in the billions, with sweeping power to control flows of goods, people, and money that fundamentally implicates ancient principles of national sovereignty.
And you're doing all this at a time when international institutions of governance and international collaboration in general have been fraying, to say the least.
All of this makes me think you are biting off much more than you can chew, to be sure. I don't want to accuse you of desiring authoritarianism, because I truthfully don't know whether you even understand what it is that you are advocating for. It does not seem, based on your response to me, that you have really thought about what implementation would require here, and so my guess is that your proposal is not malicious or desirous of tyranny, but instead simply naive and incomplete. And by the way, the hand-waviness of your policy prescriptions is nothing compared to the hand-waviness of your understanding of artificial intelligence, its likely trajectory, and relevant threat models. You don't even seem to feel a need to explain your position (I myself just wrote 5k words explaining my views on existential risk, and have written many thousands on the threat models I take more seriously), indicating to me that you live in a bubble where the hard-nosed and concrete questions do not get asked.
You suggest that I am unserious, interested as I am in the "interesting governance challenges" you seem to dismiss in comparison to what you seem sure is your focus on the "big picture." But do you know what my work has produced? Enacted laws. Dozens of policies being executed as we speak by the largest bureaucracy in the history of mankind. Ideas that have shifted the thinking of people whose decisions will matter. Is it everything? No, it is not. My contributions will ultimately be small. But I put my back into what I do, when the logical move for someone like me would have been to go take a cushy job in the industry after I left the government. Do not ever suggest to me that I do not care deeply about what I am doing and do not ever question the intellectual integrity of a person you do not know. At the very least, do not do it to me.