Internet Governance at Georgia Tech

@IGPAlert

Updates from the Internet Governance Project on global governance of the digital ecosystem

Atlanta, GA · Joined April 2009
691 Following · 4.1K Followers
Dwarkesh Patel @dwarkesh_sp
Gutenberg invented the most important technology of the millennium and immediately went bankrupt — and so did the bank that foreclosed on him, and so did his apprentices. Gutenberg could make a batch of 300 books for the cost of one, but there weren't enough buyers in his small, landlocked village in Germany. It took the better part of a century of further innovations, social changes, and setting up of distribution networks before you could have a pamphlet like Luther's 95 Theses get from Wittenberg to London in 17 days.
Internet Governance at Georgia Tech
Much of the current media coverage frames the Anthropic-DoD dispute as an ethical conflict: a reckless Pentagon trying to weaponize AI versus a principled company standing firm on responsible use. While not entirely wrong, this framing misses what is most important. internetgovernance.org/2026/03/08/wha…
Alan Rozenshtein @ARozenshtein
The current AI debate badly needs to separate three distinct questions:

(1) To what extent should companies be able to restrict the government from using their systems? This is a very hard question and one where my instincts actually lie on the government side (though I very much do not trust this government to limit itself to “all lawful uses”).

(2) Should the government seek to punish and even destroy a company that tries to impose restrictive usage terms (rather than simply not do business with that company)? The answer seems obviously “no.”

(3) To what extent does any particular company “redline” actually constrain the government? E.g., based on OpenAI’s description of its contract with DOD, in my view it is not particularly constraining.
Internet Governance at Georgia Tech retweeted
Jyoti @pandayjyoti
India’s internet is far more restricted than officially acknowledged. This is the largest study of #DNScensorship in #India to date, both in terms of test-list coverage and the size of the blocklist. Monumental work by .@Squeal
Karan Saini @Squeal

Excited to share “Poisoned Wells,” which presents the largest point-in-time study of website blocking in India to date. I tested the blocking of 294 million apex domains across six Indian ISPs, sending 1.76 billion DNS queries in total.
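A measurement of this kind typically resolves each domain through an ISP's resolver and compares the answer against a control resolver. Here is a minimal sketch of how such responses might be classified; the blockpage IP set, function name, and classification rules below are hypothetical illustrations, not details taken from the "Poisoned Wells" study.

```python
import ipaddress

# Hypothetical blockpage addresses. Real measurements build this set by
# fingerprinting the hosts that known-censored domains resolve to.
KNOWN_BLOCKPAGE_IPS = {
    ipaddress.ip_address("10.10.10.10"),  # placeholder, not a real blockpage IP
}

def classify_answer(domain: str, isp_answers: list[str], control_answers: list[str]) -> str:
    """Classify one domain's DNS result from an ISP resolver.

    Returns 'blocked' if the ISP answer points at a known blockpage,
    'nxdomain-injection' if the ISP returned nothing while the control
    resolver returned records, and 'ok' otherwise.
    """
    if not isp_answers:
        return "nxdomain-injection" if control_answers else "ok"
    for addr in isp_answers:
        if ipaddress.ip_address(addr) in KNOWN_BLOCKPAGE_IPS:
            return "blocked"  # answer redirected to a blockpage host
    return "ok"

print(classify_answer("example.in", ["10.10.10.10"], ["93.184.216.34"]))  # blocked
```

At the study's scale (1.76 billion queries), the classification step would run over logged responses rather than live lookups, but the per-answer logic is the same shape.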

Internet Governance at Georgia Tech retweeted
Dean W. Ball @deanwball
As I have said since the beginning, this is about the principle of the thing for both parties. Anthropic is saying private firms should be able to set the terms on which they offer products and services to the government. USG is saying no, private firms may not set terms of use.

In other words, the USG is saying that companies who provide services to the military are not quite “contractors” but instead assets to be deployed at will by the government, only to be constrained by the government’s interpretation of the law. There is no difference in principle from the government saying “we unilaterally dictate the price of every service and product we procure.” After all, price is just another term in a contract. This is why the government’s stance has a certain appeal, but is ultimately conceptually incoherent and a fundamental departure from the principles of ordered liberty that you and I both share.

The “coloring within the lines of our republic” response is for the government to say, fine, we won’t give you business and we will give business to your competitors. Perhaps even to make a public stink about it. I don’t know a single person who objects to that or thinks it’s illegitimate for the government to do.

But instead what the government is doing is trying to destroy Anthropic, using policy measures reserved only for foreign adversaries. This is obviously a different-in-kind response, and all principled classical liberals should reject it outright. This is not hard, or at least it should not be.

In short: You are focusing on the wrong thing. Of course DoW is free to have a principle that they will accept no limitations on their use of technology. The problem is that their policy response is not just doing that, but instead attacking the basic principles of private property: that people have the right to set the terms of their engagement with the government.
Internet Governance at Georgia Tech retweeted
Jimmy and Rosalynn Carter School of Public Policy
Voices from academia, policy, industry and civil society shared insights on AI and geopolitics, international trade, innovation and more at Internet Governance Project's session today on AI governance and global economic development.
Internet Governance at Georgia Tech
If by "purely libertarian" you mean Rothbardian anarcho-capitalism, then we've all outgrown it. The dream of a world without any state, in any form, is as unrealistic as a Marxian world without any markets. That's why we promote political economy: there's a role for both, but it's an evolving, dynamic relationship that has to be figured out.
Dean W. Ball @deanwball
I think Charlie is right. There is a difference between “classical liberal techno-optimist,” “techno-libertarian,” and “tacitly implied techno-anarchist.” And this really has only a little to do with present-day arguments about whether laws like SB 53 are prudent.

Instead, consider the grand sweep of the coming decades: arbitrarily powerful bioengineering, artificial general intelligence, nanotechnology, space colonization, brain-computer interfaces, and the like. It’s hard for me to imagine these things being built and adopted without politics and policy playing a role. It is both inevitable (if rich kids can buy therapies that literally make their newborns smarter than they otherwise would be, you can bet that will be political) and desirable (in the same way that modern financial services are essentially impossible without regulation). And I would submit that if you think about all of the technologies above, the argument that there is no role for the state here in the fullness of time is tantamount to radical techno-anarchism.

It will all take decades to materialize, but step one, for a classical liberal who hopes to maintain some credibility through the duration of that period, is to be honest about the future. Believe you me, this honesty has cost me in more ways than one.

But the problem is: we have bad intuitions about what laws are prudent! We are usually wrong about this. I know there is *something* profound about the role of the state in all this (perhaps especially profound if the role is limited and principled—we don’t really know what those principles should be!), but I also know that none of us today knows what it should be. So I am both skeptical of most current AI laws *and* skeptical that a purely libertarian posture will serve us well in the long term.

So the interesting fight—and it is a decades-long one—is not so much “should we heavily regulate AI tomorrow” (answer: almost certainly not) but “how should the committed classical liberal seek to transform his politics and ideology to meet this future head on?”

How do the principles of limited government, rule of law, property, liberty, etc. translate to this future we are building? One part of that is thinking about immediate next steps—which laws make sense in the here and now, and which do not. But it is a small part, in the grand scheme of things.

I don’t claim to have made a lot of progress on this broader goal, but it *is* my goal every single day, and it explains most partings of ways I have in style and substance with many techno-libertarians. Also, while I routinely disagree with Charlie, he’s one of the sharpest out there. You should follow him.
Charlie Bullock @CharlieBull0ck

I respect Dean's willingness to talk about topics like this. There are people who have similar beliefs about how AI will shape the future but don't discuss them publicly because claims like "there's a good chance that most of us won't be human 20 years from now" are unlikely to advance the cause of deregulation. Most voters are frightened by ideas like that, reasonably enough.

I basically think that there are three coherent arguments for radical deregulation (talking about e.g. a16z's positions here, not Dean's views). You have to be either (1) a hardcore anarchist who's ideologically opposed to government intervention even in high-stakes national security contexts; (2) someone who's okay with human beings ceasing to exist, like the weirder accelerationists; or (3) a capabilities skeptic who doesn't believe that there's any non-negligible chance of transformative capabilities actually being developed prior to 2040 or whatever.

I think most "techno-optimist" proponents of broad preemption etc. are basically just (3), ironically enough. That's a reasonable position, IMO, although recent capabilities developments make it harder to maintain if you're paying attention. But I have a much lower opinion of people who believe (2) in private and aren't honest about it.

Internet Governance at Georgia Tech
@deanwball Do you consider the LLM service provider a content producer or an intermediary? That should answer the question of whether Section 230 should or should not apply to chatbots
Dean W. Ball @deanwball
Here is a good example of a non-legislative regulatory outcome that may well not have happened without tort liability. OpenAI was sued for harms to children on its platform, which social media cos largely can’t be due to Section 230. We really have no experience with how large, consumer-oriented web platforms will act when facing tort liability. To a first approximation, they haven’t really had to deal with it. This is one early result, and probably it’s a good result (we will see).

On the flip side: as I’ve been saying for two years, AI is already more regulated than you think! Common law liability is a profoundly broad and powerful incentive.
OpenAI @OpenAI

We’re rolling out age prediction on ChatGPT to help determine when an account likely belongs to someone under 18, so we can apply the right experience and safeguards for teens. Adults who are incorrectly placed in the teen experience can confirm their age in Settings > Account. Rolling out globally now. EU to follow in the coming weeks. openai.com/index/our-appr…

Internet Governance at Georgia Tech
Huston's data also shows that 95% of the transferred number blocks had been held by the selling party for more than 5 years. From 2017 to 2019, over 70% were held for 10 years or more. Since 2021, over 90% were held for more than 10 years.