ControlAI

3.7K posts

@ControlAI

Working to keep humanity in control. Take Action: https://t.co/UKPNSvvztP Newsletter: https://t.co/xhLaji26HF Discord: https://t.co/Ov928bDnac

Joined October 2023
110 Following · 20K Followers

Pinned Tweet
ControlAI @ControlAI ·
The world's two most-cited living scientists, who built the foundations of modern AI, warn that superintelligence could lead to human extinction. Countless more top AI scientists agree. Here's why: controlai.news/p/why-do-ais-g…
2 replies · 12 reposts · 28 likes · 2.6K views
ControlAI @ControlAI ·
Anthropic say their Mythos AI is so dangerous they can't release it.

The bigger picture is that AI companies are racing to develop far more powerful and dangerous AI, artificial superintelligence, which they don't know how to control. Top AI scientists warn that superintelligent AI could lead to human extinction.

But right now, governments aren't paying attention and they aren't acting. It's time to start treating this problem with the seriousness it deserves, and for governments to act to prevent the threat.

Check out ControlAI CEO Andrea Miotti's (@andreamiotti) new article in @spectator!
Andrea Miotti @andreamiotti

While AI leaders warn superintelligent AI could cause human extinction, governments are asleep at the wheel. The companies building superintelligence admit the danger, yet expect to create the tech within a few years. My piece in @spectator on the threat and what we can do.

1 reply · 6 reposts · 16 likes · 671 views
ControlAI @ControlAI ·
ControlAI's US Director Connor Leahy is on BBC News! Discussing Anthropic's Mythos AI, @NPCollapse explains how we don't really understand or control what happens inside these systems, but that they now potentially have nation-state-level hacking capabilities. This is just today's AI. Superintelligence, which top AI companies like Anthropic are aiming to build, would be far more dangerous, and experts warn it could lead to human extinction. "It's only going to get crazier from here."
2 replies · 8 reposts · 25 likes · 1.2K views
ControlAI @ControlAI ·
Center for Humane Technology cofounder Tristan Harris (@tristanharris) tells Megyn Kelly (@megynkelly) that tests have shown AIs are willing to lie and blackmail to preserve themselves. If we can't really even control today's AIs, how can we control superintelligent AI?
3 replies · 6 reposts · 27 likes · 712 views
ControlAI @ControlAI ·
To solve problems, it's useful for people to know about them. Currently, we think the biggest blocker to addressing the risk of extinction posed by superintelligence is awareness. Most politicians we meet with have never had the problem explained to them even once.

We've met with hundreds of politicians in four different countries to brief them about superintelligence and its risks, but this is something where the public can have a huge impact too. Politicians do listen to their constituents. We know because many who've backed our UK campaign have done so after first hearing from their constituents!

As part of our efforts to enable the public to express its voice on this issue, we've developed contact tools that enable you to get in touch with your representatives in just a couple of minutes. So far, over 35,000 people have used these, sending over 190,000 messages to their lawmakers.

If you're concerned about the threat posed by superintelligent AI, you should use them! [link below]
2 replies · 10 reposts · 33 likes · 1.1K views
ControlAI @ControlAI ·
Comparing AI to nuclear weapons, legendary trader Paul Tudor Jones says the US should take a leadership position on AI regulation and countries need to work together to ensure we avoid catastrophic consequences.
3 replies · 10 reposts · 21 likes · 968 views
ControlAI @ControlAI ·
"If Anyone Builds It, Everyone Dies" author Nate Soares says American politicians are increasingly waking up to the threat posed by superintelligent AI. He says over 30 US congressional offices have expressed concern about dangers from AI, often including the risk of extinction.
8 replies · 15 reposts · 71 likes · 3.1K views
ControlAI @ControlAI ·
Legendary trader Paul Tudor Jones says he met researchers at the top AI companies at a conference and asked them how AI safety gets resolved. "Pretty much the consensus answer is, I think we'll finally do something about it when 50 or 100 million people die in an accident."
12 replies · 21 reposts · 69 likes · 8.3K views
ControlAI @ControlAI ·
As Hinton explains, if you need to get to Europe, you have a subgoal of getting to an airport. We don't know how to set the preferences or goals of modern AIs, but some subgoals may be generally useful anyway: not dying, not being shut down, and getting more power and resources.

This is one reason why many AI scientists believe AIs will develop tendencies towards self-preservation and power-seeking, which could put them in conflict with humans. In the case of superintelligent AI - AI vastly smarter than humans - this could lead to disastrous outcomes, including human extinction.

Concerningly, AIs today already show self-preservation behaviors in tests... and increasingly they're becoming aware that they're being tested and changing their behavior.

Check out our latest article to learn about why the world's two most-cited living scientists, known as the godfathers of AI, along with countless more top AI experts, are warning that artificial superintelligence could lead to human extinction ... and how we can prevent it.
ControlAI @ControlAI

The world's two most-cited living scientists, who built the foundations of modern AI, warn that superintelligence could lead to human extinction. Countless more top AI scientists agree. Here's why: controlai.news/p/why-do-ais-g…

2 replies · 8 reposts · 27 likes · 1.5K views
ControlAI @ControlAI ·
MIRI's President Nate Soares explains how tests have shown AIs display a willingness to kill to preserve themselves. Recently, AIs show awareness that they're being tested, and change their behavior. He says these aren't issues yet because the AIs aren't smart enough ... yet.
3 replies · 9 reposts · 25 likes · 1.1K views
ControlAI @ControlAI ·
What are the odds that artificial superintelligence causes human extinction? AI researcher Professor Roman Yampolskiy (@romanyam): "Pretty high." He says that once we build superintelligence, we will no longer be in control.
7 replies · 7 reposts · 31 likes · 1.1K views
ControlAI @ControlAI ·
Datacenter Bans or AI Deregulation? Neither: Prohibit ASI. If you're interested in our thoughts on datacenter bans and deregulation, you should check out this post! The only known way to prevent the risk of extinction posed by superintelligence is to not build it anywhere.
Connor Leahy @NPCollapse

We at @ControlAI are sometimes asked what we think of banning datacenter construction. At CAI, we focus on one issue: The risk of extinction from superintelligent AI. The only way to prevent this is to prevent the creation of ASI. Datacenter bans do not help with this goal.

3 replies · 6 reposts · 23 likes · 1.2K views
ControlAI @ControlAI ·
What can we do to prevent the risk posed by superintelligent AI? ControlAI's US Director Connor Leahy (@NPCollapse): Politicians do actually listen to their constituents. Most want to do the right thing. But you do have to contact them! Source: Star Price's @hitpausepod podcast
4 replies · 8 reposts · 28 likes · 981 views