
Nirit Weiss-Blatt, PhD
@DrTechlash
Communication Researcher, analyzing the tech discourse. Book Author: The TECHLASH. Substack: https://t.co/4SJJhqrzXn Signal: DrTechlash.16

Over the past year, AI agents have learned how to self-replicate. In our test environment, an agent hacks a remote computer and copies itself onto it. Each copy then hacks more computers, forming a chain.

I'm not sure why it's so hard for LessWrong people to know what we're talking about when we say "Y'all are too into violent rhetoric, maybe cut it out?" I think they need help. I mean this non-bitingly and non-sarcastically.

I think the internet needs to explain to the more receptive members of the LessWrong community why and how this is a problem. Maybe if we do that enough, we'll get through? E.g., I think Ryan Greenblatt is *not* needlessly violence-themed in his writing, yet also genuinely doesn't know what I'm talking about when I complain about it.

With the recent violent attacks on Sam Altman, and a lot of reactions like "Wtf AI safety people, tame it down!", I wonder if some more actual good-faith messages could help. Like: "No seriously, we're not trying to be mean about this, it's just actually a problem how much your community promotes violent memes in connection with AI and AI safety. Can you please try to understand this and then explain it to your friends?"
