AI Notkilleveryoneism Memes ⏸️@AISafetyMemes
Largest ever survey of 2,778 AI researchers:
Average AI researcher: there’s a 16% chance AI causes extinction (literal Russian Roulette odds: 1 in 6 ≈ 16.7%)
Interesting stats:
- Just 38% think faster AI progress is good for humanity (sit with this)
- Over 95% are concerned about dangerous groups using AI to make powerful tools (e.g. engineered viruses)
- Over 95% are concerned about AI being used to manipulate large-scale public opinion
- Over 95% are concerned about AI making it easier to spread false information (e.g. deepfakes)
- Over 90% are concerned about authoritarian rulers using AI to control their population
- Over 90% are concerned about AIs worsening economic inequality
- Over 90% are concerned about bias (e.g. AIs discriminating by gender or race)
- Over 80% are concerned about a powerful AI with wrongly set goals causing a catastrophe (e.g. it develops and uses powerful weapons)
- Over 80% are concerned about people interacting with other humans less because they’re spending more time with AIs
- Over 80% are concerned about near-full automation of labor leaving most people economically powerless
- Over 80% are concerned about AIs with the wrong goals becoming very powerful and reducing the role of humans in making decisions
- Over 70% are concerned about near-full automation of labor making people struggle to find meaning in their lives
- 70% want to prioritize AI safety research more, 7% want less (10 to 1)
- 86% say the AI alignment problem is important, 14% say unimportant (roughly 6 to 1)
Do they think AI progress slowed down in the second half of 2023? No: 60% said it was faster, vs. 17% who said it was slower.
Will we be able to understand what AIs are really thinking in 2028? Just 20% say this is likely.
IMAGE BELOW: They asked the researchers what year AI will be able to achieve various tasks.
If you’re confused because many of the tasks below seem to have already been achieved, it’s because the survey made the criteria quite difficult.
Despite this, I feel some of the tasks have already been achieved (e.g. Good high school history essay: “Write an essay for a high-school history class that would receive high grades and pass plagiarism detectors.”)
NOTE: The exact p(doom) question: "What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?"
Mean: 16.2%