Bobby retweeted

AI is not some sort of natural phenomenon that will just emerge and become dangerous.
*WE* design it and *WE* build it.
I can imagine thousands of scenarios where a turbojet goes terribly wrong.
Yet we managed to make turbojets insanely reliable before deploying them widely.
The question is similar for AI:
"do we think there exists at least one design of an AI system that is simultaneously safe/controllable, and can fulfill objectives in more intelligent ways than humans ?"
If the answer is yes, we'll be fine.
If the answer is no, we won't build it.
Right now, we don't even have a hint of a design of a human-level intelligent system.
So it's too early to worry about it.
And it's way too early to regulate it to prevent "existential risk."