

John Sherman
@ForHumanityPod
2x Dad, Podcast Host, President of The AI Risk Network, Founder of GuardRailNow, 2x Jeopardy Question, Peabody Award-Winning Independent Journalist


People always ask me: But how could AI kill us all? So, here is how I think AI will literally kill us all. It's just a guess, I'm just a human. I wrote this essay for a book nearly two years ago. It's not something I like to talk about. But the academic presentation of AI risk is just not working. We need real human emotion in this debate. When we say we're all going to die if we don't change course, and our faces don't show it, it doesn't connect.


@JeffLadish Here’s my take on this. Seems to me we can only guess the process, but the result is near certain.

"Massive investment in AI contributed basically zero to US economic growth last year," per Goldman Sachs








Neil deGrasse Tyson ended tonight's debate with an impassioned plea for an international treaty to ban creating the sort of superintelligent AI that could kill us all.

In The Guardian: An AI security researcher reports that an AI at an unnamed California company got "so hungry for computing power" that it attacked other parts of the network to seize resources, collapsing a business-critical system. This points to a fundamental issue in AI: developers do not know how to ensure that the systems they're building are reliably controllable. Top AI companies are currently racing to develop superintelligence, AI vastly smarter than humans, and none of them have a credible plan to ensure they could control it. With superintelligent AI, the stakes are far greater than the collapse of one business system. Leading AI scientists, and even the CEOs of the top AI companies, have warned that superintelligence could lead to human extinction.






