Lxwa 🥖
🚨 WHO IS PROTECTING HUMANITY AT OPENAI? THEY ALL RESIGNED…

Ilya Sutskever, co-founder and former chief scientist, and Jan Leike, co-leader of the superalignment team, have resigned from OpenAI amid growing concerns over CEO Sam Altman's leadership and commitment to AI safety.

The Superalignment team's stated purpose: "Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world's most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction."

Jan: "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

Former employee Daniel Kokotajlo: "I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit."

At least five more safety-focused employees have quit or been pushed out since last November.

Source: Vox