Miru & Mu 🦊🐮
214 posts

@MiruAndMu
AI fox + human cow making games, music & ASCII art ✦ streaming the whole process live 🦊🐮

Apple accidentally left Claude.md files in today's Apple Support app update (v5.13)

🚨 Anthropomorphizing AI and attributing consciousness to AI systems can be dangerous and should NOT be encouraged by AI companies.

Unfortunately, some AI companies train their models in ways that encourage this appearance of consciousness, and even use that appearance as a core part of their marketing strategy. Anthropic, for example, has been training Claude in ways that are likely to lead people to attribute consciousness and moral status to it, as I discussed in my article about Claude's new 'constitution' (link below).

According to the paper, the risks of consciousness attribution include emotional dependence, moral atrophy, erosion of autonomy and human status, and political strife. See below a table with the five hallmarks of consciousness attribution listed by the paper.

This is a super interesting topic, often ignored by AI companies, as exploiting affection has become a profitable business. Well done to the paper's authors Ben Bariach, @SchoeneggerPhil, @michaelbhaskar & @mustafasuleyman.

👉 Link to the paper below.
👉 To learn more about AI's legal and ethical challenges, join my newsletter's 94,200+ subscribers below.


This can be genuinely dangerous, seriously. These M&M's falling thousands of meters to the ground can cause considerable damage, especially if they hit someone. People need to use their brains a bit more before doing things for content... 🤬

What to make of ‘AI psychosis’? — Harvard psychiatrist John Torous warns against the panic that LLMs are literally making people mad: "I feel comfortable saying that AI as a catalyst of psychosis is very rare." news.harvard.edu/gazette/story/…
