Dean W. Ball@deanwball
This is precisely the reason that I have been so supportive of open-source over the years. It is an insurance policy against both tyrannical governments and the death throes of the industrial-era nation state, the latter of which I fear we are living through.
Absent the oh-so-careful navigation of the narrow path (which I tried), it seemed inevitable that closed-source models would be captured by USG and thus subject to all of USG’s many mood swings, and thus intrinsically unreliable for both businesses and other countries. Alas, I did what I could to help us walk the narrow path, as did countless others. But the odds were always stacked against us.
The hybrid open/closed model world was a good future. It would mean that AI is an intrinsically profitable business, favoring Western and especially American capitalistic institutions. Yet the power of the closed labs would be checked by open-weight models that, while behind the cutting edge, provided balance.
This supply chain risk designation will render closed-source models less attractive to many customers worldwide. The regulatory risk is now extremely high. USG cannot unfire the gun that it has fired—or if it can, doing so would take an extreme measure of skill, determination, and sensitivity that I do not think will be forthcoming.
Thus the balanced open/closed world seems less likely on the margin, and an open-source-dominant world now seems likelier.
This world probably favors China, because they have a political economy more favorable to large-scale subsidy of what is essentially digital public infrastructure. It also probably favors Chinese-style digital and physical surveillance (but made all the more pervasive and capable with advanced AI), since the catastrophic misuse risks of open source are higher. Chinese-style institutions have an overall structural advantage in this future, it seems to me, and Western institutions have a structural disadvantage. You can argue this has always been true, but USG just increased the likelihood of AI futures where the US is at an inherent disadvantage. And this is to say nothing of the damage to the business environment, the AI industry, etc., about which I have spoken earlier.
The future is likelier to be more decentralized, more confusing, and more dangerous than it was a few weeks ago. Yet it also may be brighter; it is probably a higher-variance future than the steadier transition of the narrow path. Perhaps you like that trade-off. Nonetheless, we have probably been knocked off the narrow path, and the odds of a “normal” transition to the era of machine intelligence are now meaningfully lower.
Humans are living through moving history once more.