
I wrote an essay about restraining AI development for the sake of safety. I think an idealized world would put itself in a position to do this if necessary, and that it's worth serious effort in the actual world, too, despite the many challenges and downside risks. Link below.