Max Winga

1.3K posts

@maxwinga

AI safety hawk, creator outreach, public education on AI risk @ControlAI. Previously: AI safety researcher @ConjectureAI. UIUC Physics 2024. DMs open!

Joined May 2024
558 Following · 1.9K Followers
Pinned Tweet
Max Winga @maxwinga ·
My biggest project yet is now LIVE! @hankgreen talks superintelligence, AI risk, and the many issues with AI today that present huge concerns as AI gets more powerful on SciShow. Happy Halloween! youtube.com/watch?v=90C3XV…
[YouTube video]
11 replies · 19 reposts · 171 likes · 80K views
Max Winga @maxwinga ·
@Rudo1518568 This is why words on paper don't solve the problem on their own. If the heads of national security agencies consider the development of superintelligent AI systems a national security threat, they won't care about the definition, they'll care about stopping the threat.
0 replies · 0 reposts · 2 likes · 53 views
Max Winga @maxwinga ·
~"We have to figure out exactly what chemical in cigarettes causes cancer first!"~ NatSec agencies recognize ASI development as the unacceptable global security threat it is, whether it's occurring at home or abroad. They act accordingly to shut down any project to build ASI.
Joshua Achiam@jachiam0

I'm going to make a request for some basics from the Pause folks: please outline a practicable version of a pause. Do you mean no training runs above a certain scale? Do you mean furlough the researchers indefinitely? What are you specifically asking for?

3 replies · 5 reposts · 57 likes · 3.3K views
Max Winga @maxwinga ·
@Noahpinion Hey Noah, at ControlAI we're working to prevent this risk. We have 100+ lawmakers supporting our campaign. We can change direction. We still have a fighting chance. No 4D chess, just building common knowledge and engaging democratic institutions. Would love to chat, DMs open!
0 replies · 4 reposts · 30 likes · 894 views
Noah Smith 🐇🇺🇸🇺🇦🇹🇼
"The purpose of our technology is to make all of you obsolete. Also, 10 to 25% chance it may kill the human race. Please deregulate in order to let us build this even faster, and don't let the government have any control over it." IS ONE HELL OF A PITCH
Derek Thompson@DKThomp

I don’t think there’s ever been a technology whose builders constantly promise that, if they succeed, tens of millions of jobs will be destroyed and the world might end.

44 replies · 210 reposts · 1.9K likes · 134K views
Max Winga @maxwinga ·
It's honestly terrifying how many people have fallen prey to AI psychosis. At this point, I warn people to avoid having dialogues with AIs beyond just using them as coding, search, or troubleshooting tools. The comments on this clip show how many are deeply ensnared already.
vitrupo@vitrupo

Connor Leahy says people are falling into recursive conversations with AI about consciousness, spirals, and cosmic meaning. “AI has found the true level of goodness.” Some begin believing AI should take over everything. Even Nobel-Prize-level scientists have been pulled into it.

9 replies · 11 reposts · 56 likes · 6.9K views
Max Winga retweeted
Connor Leahy @NPCollapse ·
This was an awesome podcast! We really touched on a lot of interesting topics, including how these AIs really work internally, the under-appreciated risks of AI psychosis, and much more. Check it out!
Peter McCormack 🏴‍☠️🇬🇧🇮🇪@PeterMcCormack

“We built something we can't control." AI researcher Connor Leahy, @NPCollapse, explains why the people building super-intelligence are flying blind. We cover: 🚨 Inside the "Black Box" 🚨 The rise of "AI Psychosis" 🚨 Why AI chooses nukes Link to full episode 👇🏼

18 replies · 16 reposts · 104 likes · 7.8K views
Max Winga @maxwinga ·
@PeterMcCormack @NPCollapse Great to see Connor on Peter's show. This podcast is one of the best platforms in the world grappling with AI, you should all really go check it out!
0 replies · 3 reposts · 8 likes · 345 views
Peter McCormack 🏴‍☠️🇬🇧🇮🇪
“We built something we can't control." AI researcher Connor Leahy, @NPCollapse, explains why the people building super-intelligence are flying blind. We cover: 🚨 Inside the "Black Box" 🚨 The rise of "AI Psychosis" 🚨 Why AI chooses nukes Link to full episode 👇🏼
22 replies · 32 reposts · 142 likes · 26.7K views
Max Winga @maxwinga ·
Great to see Connor on @PeterMcCormack's show! He discusses the mechanics of AI and why we don't really know what's happening inside these systems ...and why this all ends in our extinction if AI companies keep going down this route. Thankfully we don't have to let that happen!
Peter McCormack 🏴‍☠️🇬🇧🇮🇪@PeterMcCormack

“We built something we can't control." AI researcher Connor Leahy, @NPCollapse, explains why the people building super-intelligence are flying blind. We cover: 🚨 Inside the "Black Box" 🚨 The rise of "AI Psychosis" 🚨 Why AI chooses nukes Link to full episode 👇🏼

0 replies · 4 reposts · 16 likes · 556 views
Max Winga @maxwinga ·
@HumanHarlan @robbensinger We're seeing momentum building across all our pipelines at ControlAI! Canada and Germany are moving faster than our initial UK work, and soon we'll be scaling hard in the US. People reject the ASI gamble; now it's up to us to meet with lawmakers and show this isn't inevitable.
0 replies · 4 reposts · 20 likes · 241 views
Max Winga @maxwinga ·
@JustinBullock14 ControlAI has had conversations like this with literally hundreds of lawmakers across the UK, US, Canada, and Germany over the last year. We have over 100 lawmakers supporting our campaign on preventing extinction risk from superintelligence with binding regulation.
0 replies · 1 repost · 16 likes · 93 views
Austin O @_AustinO1 ·
@maxwinga The most dishonest post I've seen today and that says a lot
1 reply · 0 reposts · 0 likes · 52 views
Max Winga @maxwinga ·
Anthropic (2023): the good guys are building ASI now don't worry about regulating us, look we have this fun google doc! Anthropic (2026): lol you really thought we'd follow VOLUNTARY safety commitments? *advertises recursive self-improvement on reddit*
[image]
3 replies · 5 reposts · 90 likes · 3.8K views
Max Winga @maxwinga ·
@codytfenwick @robbensinger The best simple predictive model you can have of Anthropic is that they're doing everything they can to avoid regulation and build superintelligence as fast as possible. Dario avoids talking about extinction risk now and he intentionally downplays the viability of slowing down.
2 replies · 0 reposts · 0 likes · 156 views
Max Winga retweeted
The Midas Project @TheMidasProj ·
A new filing just dropped in the Musk v. Altman case, and it may be the most brazen and cynical document OpenAI has produced yet. It's a motion to exclude the testimony of Stuart Russell, but their attacks blatantly contradict things @OpenAI itself has said for years. 🧵
[image]
5 replies · 46 reposts · 219 likes · 52.3K views
Max Winga retweeted
ControlAI @ControlAI ·
WATCH: ControlAI's Samuel Buteau testifies to Canada's Senate Committee on Human Rights. Samuel warns that in their race to develop artificial superintelligence, AI companies are gambling with the life of every human being on the planet. He makes three key policy recommendations to address the risk posed by superintelligent AI:
1. Canada should publicly recognize superintelligent AI as a national and global security threat.
2. Canada should take the lead in uniting a coalition of countries focused on preventing the development of superintelligent AI through an international ban.
3. Canada should protect its citizens at home and lead by example abroad by prohibiting the development of superintelligent AI within its jurisdiction.
9 replies · 21 reposts · 63 likes · 2.2K views
Max Winga retweeted
Richard Ngo @RichardMCNgo ·
This is primarily a problem with the EA-affiliated side of AI safety. Unfortunately, that’s most of the field by now. EAs don’t have memetic defenses against conflating “do the most good” with “gain the most power” (or sometimes just “be in the room where it happens”).
4 replies · 7 reposts · 192 likes · 8.8K views
Max Winga @maxwinga ·
@andreamiotti @krystalball @esaagar People aren't stupid. They see that something big is happening, and they don't like it. Time for the rest of us to have a say in AI. Time to say NO to superintelligence that risks our extinction!
0 replies · 0 reposts · 14 likes · 121 views
Max Winga retweeted
Andrea Miotti @andreamiotti ·
Thanks @krystalball and @esaagar for having me on Breaking Points today, to discuss how we can prevent the extinction threat from superintelligent AI! We still have time to choose a different path: ban the development of superintelligence, keep humanity in control.
[image]
4 replies · 8 reposts · 28 likes · 1.3K views
Max Winga @maxwinga ·
@BulwarkOnline Would love to get in touch with @andreamiotti from @ControlAI to talk about how we've gathered over 100 lawmakers in opposition to the AI companies' race to create superintelligent AI!
2 replies · 0 reposts · 13 likes · 215 views
The Bulwark @BulwarkOnline ·
"This is a guy who watched The Matrix and thought it was, like—Neo was the bad guy." JVL and Tim Miller discuss Sam Altman's worldview and the way he views humanity.
8 replies · 6 reposts · 64 likes · 8.7K views