

Internet Governance at Georgia Tech
@IGPAlert
Updates from the Internet Governance Project on global governance of the digital ecosystem

Excited to share “Poisoned Wells,” which presents the largest point-in-time study of website blocking in India to date. I tested the blocking of 294 million apex domains across six Indian ISPs, sending 1.76 billion DNS queries in total.



A common assumption of traditional AI safety/alignment is that solving the problems it envisions will require extraordinary acts, basically a wartime footing. So far that just hasn’t proven to be the case. It seems like these *are* genuine problems, *and* they’re being managed okay so far. There is still much work to be done, but it may not require the extraordinary effort once imagined. Some people in AI safety, often the more technically inclined, see this and seem happy about it. Others, often the more politically inclined, either don’t agree or choose to ignore this; those types remain on the warpath, but their rhetoric seems increasingly discordant with reality.


I respect Dean's willingness to talk about topics like this. There are people who have similar beliefs about how AI will shape the future but don't discuss them publicly because claims like "there's a good chance that most of us won't be human 20 years from now" are unlikely to advance the cause of deregulation. Most voters are frightened by ideas like that, reasonably enough. I basically think that there are three coherent arguments for radical deregulation (talking about e.g. a16z's positions here, not Dean's views). You have to be either (1) a hardcore anarchist who's ideologically opposed to government intervention even in high-stakes national security contexts; (2) someone who's okay with human beings ceasing to exist, like the weirder accelerationists; or (3) a capabilities skeptic who doesn't believe that there's any non-negligible chance of transformative capabilities actually being developed prior to 2040 or whatever. I think most "techno-optimist" proponents of broad preemption etc. are basically just (3), ironically enough. That's a reasonable position, IMO, although recent capabilities developments make it harder to maintain if you're paying attention. But I have a much lower opinion of people who believe (2) in private and aren't honest about it.


We’re rolling out age prediction on ChatGPT to help determine when an account likely belongs to someone under 18, so we can apply the right experience and safeguards for teens. Adults who are incorrectly placed in the teen experience can confirm their age in Settings > Account. Rolling out globally now. EU to follow in the coming weeks. openai.com/index/our-appr…
