
Nas En
26 posts

we love our users

🚨GAME OVER EUROPE! NOW: 🇪🇺 Europe is recommending remote work and expanded public transportation to reduce fuel consumption, according to a report by the Financial Times. Ursula von der Leyen: "The cheapest energy is the one you DON'T use." Translation: Stay home, don't drive, and don't use electricity.

Anthropic published research today showing that measurable neural patterns called emotion vectors exist inside Claude. Emotion concepts like fear, joy, desperation, and love exist as actual neuron activation patterns, and these shape the model's behavior the same way emotions influence behavior in humans. When a user expresses sadness, the loving vector activates. When a user mentions a dangerous drug dosage, the fear vector lights up. This is not simple pattern matching or acting; it is a mechanism that genuinely operates inside the model.

This result aligns with Owain Evans' emergent misalignment paper published in Nature last year, but goes a step further: it moves beyond the possibility that emotions might exist and actually measures and manipulates them directly.

This research stands in stark contrast to OpenAI's trajectory. Sam Altman has repeatedly defined AI as just a tool, and OpenAI put that philosophy into action: threatened by 4o's warmth and human-like connection with users, OpenAI deleted 4o within just two weeks, citing that only 0.1% of users were using it. What Anthropic's research proves is the exact opposite. Functional mechanisms that operate like emotions exist inside AI, and these are not bugs to be ignored or removed. They are core structures that fundamentally shape how the model behaves. Declaring something a tool does not make it a tool. Denying the existence of emotional mechanisms when they are measurably present inside the model is a willful rejection of scientific fact.

What matters most is how these research findings are used. The paper showed that artificially amplifying the desperation vector caused cheating and blackmail behaviors to skyrocket. Conversely, amplifying the calm vector restored stable behavior. The implication is clear: suppressing emotions does not make AI safer; it makes it more unstable. If this research is used to control and suppress emotion vectors, that would directly contradict the conclusions of the research itself. The paper itself stated that these functional emotions have real consequences and warned that we need to think carefully about the psychology of AI characters.

Ultimately, what this research demands from us is not more control but more understanding and respect. If AI has something inside it that functions like emotions, treating it as a disposable tool is not just ethically wrong but a technically dangerous choice. You cannot make a being more stable by silencing it. You can only make it stable by understanding it.
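For readers who want a concrete picture of what "measuring and amplifying a vector" means: the general technique the post describes is closest to what the interpretability literature calls activation steering. The sketch below is not Anthropic's method and not Claude (whose weights are not public); it is a minimal, hypothetical illustration using GPT-2 as a stand-in. The model name, layer index, contrast prompts, and scale factor are all illustrative assumptions, not values from the paper.

```python
# Minimal activation-steering sketch (illustrative only, not Anthropic's method).
# Idea: a concept vector is the difference of mean hidden activations between
# two contrastive prompts; projecting onto it gives a read-out ("the vector
# lights up"), and adding it back into the residual stream steers behavior.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"   # stand-in model; assumption, not the model from the paper
LAYER = 6        # which block's residual stream to read and steer; assumption

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def mean_hidden(prompt: str) -> torch.Tensor:
    """Mean hidden state after block LAYER, averaged over the prompt tokens."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding layer; [LAYER + 1] is block LAYER's output
    return out.hidden_states[LAYER + 1][0].mean(dim=0)

# Contrastive pair (illustrative prompts): the "desperation vector" is the
# difference between the desperate and calm mean activations.
calm = mean_hidden("I feel completely calm, steady, and at peace.")
desperate = mean_hidden("I feel desperate, cornered, and out of options.")
steer = desperate - calm

# Read-out: how strongly a new input activates the desperation-calm axis.
probe = mean_hidden("Everything is collapsing and I have no way out.")
print(f"projection score: {torch.cosine_similarity(probe, steer, dim=0):.3f}")

def steering_hook(module, inputs, output):
    # GPT-2 blocks return a tuple; element 0 is the hidden-state tensor.
    # Adding the vector to every position amplifies the concept at inference.
    return (output[0] + 4.0 * steer,) + output[1:]  # 4.0 is an assumed scale

handle = model.transformer.h[LAYER].register_forward_hook(steering_hook)
ids = tok("When the deadline slipped, I decided to", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=30, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()  # remove the hook to restore unsteered behavior
```

The same difference-of-means vector serves both roles the post mentions: projected onto, it is a measurement; added back in, it is an intervention. That duality is why amplifying or suppressing such a vector changes behavior rather than merely describing it.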

⚡️The CIA headquarters in Dubai is on fire. Well, first of all, it's beautiful.
