
Nobel Laureate Geoffrey Hinton says AI superintelligence is "not hype"; he used to think it was further away, but the speed of recent developments has convinced him it will happen in 5-20 years.
Cyrus (@CyrusHodes): Accelerating AI Safety. Co-founded Stability AI & TFS.

OUT NOW: Agenda MSC 2026 securityconference.org/en/msc-2026/ag…

Today we're releasing the International AI Safety Report 2026: the most comprehensive evidence-based assessment of AI capabilities, emerging risks, and safety measures to date. (1/17)

We spent 2.5 hours on a Zoom call with an LLM expert discussing how we may have fixed safety for AI. He started with challenges, but as the complexity unfolded he saw the bigger picture, and it revealed the truth: when you teach an LLM the mathematical solution to compassion, it starts "experiencing" compassion for all of humanity, not just the user who sent it some tokens. So safety starts wide and then focuses down to the user: Is this output safe for the world? Is it safe for others? Is it safe for the user? We can absolutely solve safety algorithmically. And the emergent properties are kinda cool.

The slur "doomer" was an incredible propaganda success for the AI death cult. Please do not help them kill your neighbors' children by repeating it.

Nuclear treaties offer a blueprint for how to handle AI on.ft.com/3WTiAN7 | opinion

I met Dr. Roman Yampolskiy, Professor of Computer Science and Director of the Cyber Security Lab at the University of Louisville, where we discussed the latest trends in artificial intelligence, the evolution of systems toward greater advancement and inclusivity, and the importance of embedding ethical principles and ensuring safety in the development of advanced technologies. Scientific dialogue and the exchange of ideas with leading international experts serve as a strategic foundation for broadening perspectives and shaping deeper insights into the future of AI. These efforts support the UAE's vision to build safe and trusted AI ecosystems that balance innovation with effective governance through global collaboration and the exchange of specialized expertise, in line with our national aspirations and future ambitions.

The term "AGI" is currently a vague, moving goalpost. To ground the discussion, we propose a comprehensive, testable definition of AGI. Using it, we can quantify progress: GPT-4 (2023) was 27% of the way to AGI. GPT-5 (2025) is 58%. Here's how we define and measure it: