Cyrus 🌎
241 posts
@CyrusHodes
Accelerating AI Safety https://t.co/GZK6D1UK0y https://t.co/ClGGCjG3V2 https://t.co/rKO3BeXfvG Co-founded Stability AI & TFS
Somewhere tropical · Joined January 2016
1.5K Following · 1.1K Followers
Pinned Tweet
Cyrus 🌎@CyrusHodes·
To clarify:
- frontier AI companies clearly state they are building AGI
- from general to super intelligence (aka intelligence explosion), there is an unknown period (prob fast)
- controlling an ASI is virtually impossible
- why isn't this top of the global discourse?
Tsarathustra@tsarnick

Nobel Laureate Geoffrey Hinton says AI superintelligence is "not hype" and he used to think it was further away, but the speed of recent developments has convinced him that it will happen in 5-20 years

Cyrus 🌎 retweeted
Acyn@Acyn·
Khanna: They’re saying this technology is going to be bigger than nuclear, bigger than electricity, bigger than aviation. Last I checked, we have an FAA for aviation, nuclear energy is regulated, and electricity is regulated.  So you’re telling me on one hand that this is going to be bigger, and then you’re saying you don’t want any regulation. I mean, it makes no sense.
Cyrus 🌎 retweeted
Ricardo@Ric_RTP·
The CEO of Google DeepMind just admitted that if the decision had been his, we would've cured cancer before anyone ever used ChatGPT. And that's not even the scariest thing he said in a recent interview.

Demis Hassabis is one of the most important people alive in AI. He won the Nobel Prize last year for AlphaFold, the system that cracked the 50-year protein-folding problem. 3 million scientists now use his tool. Almost every new drug being developed will touch it at some stage.

In a new interview, he was asked about the moment ChatGPT launched and Google went into "code red." His answer was one of the most revealing things any AI leader has ever said on the record: "If I'd had my way, I would have left AI in the lab for longer. Done more things like AlphaFold. Maybe cured cancer or something like that."

Read that again. The man running Google's entire AI division is publicly saying the commercial AI race we're all living through was a MISTAKE. That the industry got hijacked by a chatbot when it could have been solving the biggest problems in science and medicine.

His vision was simple: build AI slowly, carefully, like CERN. Use it to crack root-node problems one at a time. Cancer. Energy. New materials. Let humanity benefit from real breakthroughs while the foundational science was figured out over a decade or two.

Then ChatGPT dropped in November 2022 and everything changed. Demis described what happened next as getting locked into a "ferocious commercial pressure race" that none of the labs can escape from. On top of that, the US vs. China dynamic added geopolitical pressure. The result is everyone sprinting toward products instead of breakthroughs, shipping chatbots while the scientific opportunity gets buried under marketing cycles and quarterly earnings. But he's not saying progress isn't happening... he's saying the progress got redirected away from the things that actually matter most.

And then it got even scarier, because when Demis was asked what he worries about with AI, he laid out two threats. The first is what everyone talks about: bad actors using AI for harm. Terrorist groups. Hostile nation states. Cyberattacks at scale. But that's not the threat he's most worried about.

His second worry is AI itself going rogue. Not today's models. The models coming in the next two to four years, as the industry enters what he calls "the agentic era." Systems that can complete entire tasks autonomously. Systems that are increasingly capable and increasingly hard to control. His exact words: "How do we make sure the guardrails are put in place so they do exactly what they've been told to do, and there's no way of them circumventing that or accidentally breaching those guardrails? That's going to be an incredibly hard technical challenge if you think about how powerful and smart and capable these systems eventually get."

A Nobel Prize winner who runs one of the three most advanced AI labs on Earth just said publicly that within two to four years, we're entering a phase where AI alignment becomes a real problem, and the technical challenge of solving it is enormous. And almost nobody is paying enough attention.

He called for international cooperation between labs, AI safety institutes, and academia to tackle the problem. He said this is the thing even the experts aren't thinking about enough. He said the only way to get through the AGI moment safely is if everyone starts treating this with the seriousness it deserves.

Most AI CEOs give you careful PR answers about "responsible development" and move on. Demis said something different... he said the commercial race FORCED us into a premature deployment of a technology we barely understand, and the window to get alignment right before the next generation of agents shows up is two to four years.

If the man who built the system that might cure cancer is telling you he wishes it had happened first, maybe we should listen to what he says is coming next.
Cyrus 🌎 retweeted
Future of Life Institute@FLI_org·
‼️New polling from @AIpolicynetwork finds that American voters overwhelmingly would prefer guardrails on AI over any other option - and would rather ban AI outright than proceed without regulation. 📢 Lawmakers, are you listening?
Future of Life Institute tweet media
Cyrus 🌎 retweeted
Max Tegmark@tegmark·
We're excited to launch the Pro-Human AI Declaration, laying out a more inspiring AI path than Silicon Valley's dystopian race-to-replace. It has remarkably broad support, from Bannon to Bengio, from unions to faith groups, from parents to NatSec leaders. Please join our growing movement and let's make a difference! (Links below in replies)
Max Tegmark tweet media
Cyrus 🌎@CyrusHodes·
Disappointed that your report missed the opportunity to address AGI and national security risks while all the labs are racing toward building superintelligence. Loss of Control and Misalignment should be at the top of the discussions at MSC. It's time for the AI and defense discussions to go beyond LAWS... Hopefully next year.
Munich Security Conference@MunSecConf·
Many people think their political systems and international institutions are incapable of addressing mounting global risks. As a result, the sense of helplessness across many Western societies has reached a record high this year.
Munich Security Conference tweet media
Marc Landers@marclanders·
A 44-year study found that cannabis users showed LESS brain aging/decline!

Main finding: Men who had ever used cannabis actually showed less drop in their cognitive (thinking) abilities over the decades compared to men who never used it.

Among the cannabis users, things like starting at a young age or using it frequently did not lead to worse long-term cognitive decline.

Overall conclusion: In this large group of men tracked for over 40 years, cannabis use did not cause harmful long-term effects on age-related brain aging or cognitive decline.

pubmed.ncbi.nlm.nih.gov/39508467/
Marc Landers tweet media
Cyrus 🌎 retweeted
Geoffrey Hinton@geoffreyhinton·
This is a great report that provides a thoughtful, detailed and very well researched description of the risks of AI. It is essential reading for anyone who wants to write or talk about AI risks.
Yoshua Bengio@Yoshua_Bengio

Today we’re releasing the International AI Safety Report 2026: the most comprehensive evidence-based assessment of AI capabilities, emerging risks, and safety measures to date. 🧵 (1/17)

Cyrus 🌎@CyrusHodes·
I don't know man, so hard to respond. My take, based on mounting evidence and extremely credible eyewitnesses from USG and beyond, is that:
1. We are NOT alone
2. There are multiple types of NHIs interacting with us
3. Even more since we have deployed and used nukes
4. We are giving birth to the sand gods, and NHIs would have also noticed
5. Because of 3. and 4. I would argue you cannot compare the gap between us and NHIs to the gap between factory-farmed animals and us...
Daniel Faggella@danfaggella·
2025 moved my worldview around posthumanism / agi in 2 big ways, I’ve become - more optimistic that an intelligence explosion will include us (for a short bit) rather than immediately kill us - more pessimistic that western civ will last long enough to bloom into agi
Cyrus 🌎 retweeted
Daniel Kokotajlo@DKokotajlo·
Some people are unhappy with the AI 2027 title and our AI timelines. Let me quickly clarify:

We're not confident that:
1. AGI will happen in exactly 2027 (2027 is one of the most likely specific years though!)
2. It will take <1 yr to get from AGI to ASI
3. AGIs will definitely be misaligned

We're confident that:
1. AGI and ASI will eventually be built and might be built soon
2. ASI will be wildly transformative
3. We're not ready for AGI and should be taking this whole situation way more seriously

🧵 with more details
NEXTA@nexta_tv·
🍄 Chinese scientists have created a fungus that can be eaten instead of meat

Researchers say this gastronomic breakthrough could also help solve the global protein shortage. They used Fusarium venenatum as the base — the same filamentous fungus used to produce mycoprotein.

Put very simply, the scientists “switched off” two of its genes, and now the fungus is easy to digest, tastes very similar to meat, and is even more nutritious than ordinary protein.

Environmental experts are thrilled as well — growing this fungus has a lower climate impact than producing chicken or cultivated meat. Bon appétit! 🍽
NEXTA tweet media
Cyrus 🌎 retweeted
AI Notkilleveryoneism Memes ⏸️
The memetic shadow war over the slur "doomer" (the D-word)

What’s happening?

"Imagine how many more people would have died of lung cancer if the cigarette companies had invented a slur for the people who tried to explain about lung cancer."

It began a few years ago - accelerationists/successionists skilled in memetic warfare started pushing it HEAVILY, explicitly trying to discredit us by painting AI risk falsely as a few fringe extremists.

(It's the OPPOSITE, in fact - we have the support of the world's most credible people! Godfathers of AI, dozens of Nobel laureates, prime ministers, presidents, the world's richest man, the fucking POPE etc etc. It's a dream come true list.)

The D-word screams "they're tinfoil hat crazies, not serious people, so you can safely ignore their actual arguments." When you say the D-word, you're repeating THEIR propaganda that we're just some fringe extremists. It's a HUGE own goal.

TONS of groups don't have a 'good word' to name themselves. AND THAT'S FINE, ACTUALLY. It's better than wearing a tinfoil hat as a nametag! The people warning about lung cancer didn't have a single-word label either. This is normal!

HOW YOU CAN HELP: Whenever you see someone using the D-word, call it out as the "D-word". Many new people, including journalists, are picking it up because they don't know it's a slur - they aren't trying to insult anyone, they’re just repeating what they heard others say. It spreads because it's memetically fit (it rhymes with zoomers/boomers) - so we must kill it with stigma. Most people won't use words that are stigmatized.

NOTE TO AUTISTIC RATIONALISTS: if you can't see how doomer is a slur, you're playing a simulacrum level 1 game against vastly more skilled memetic warriors operating at higher levels. Your autism is a superpower in some areas, but it blinds you to the games being played here.
Eliezer Yudkowsky ⏹️@ESYudkowsky

The slur "doomer" was an incredible propaganda success for the AI death cult. Please do not help them kill your neighbors' children by repeating it.

Cyrus 🌎 retweeted
Anthony Aguirre@AnthonyNAguirre·
In Keep the Future Human, I make the case that instead of racing to build AGI/ASI, companies should focus on Tool AI. The statement on superintelligence suggests the same. I wrote up this infographic to briefly explain this alternate pathway.
Anthony Aguirre tweet media