InteractiveST
7.1K posts

InteractiveST
@interactiveGTS
Good writer; bad coder; AI wizard; maker of AI-RPG games; defender of nuanced thinking, innocent creatures and underappreciated weirdos

🚨DISTURBING: A researcher just proved that AI treats you differently the moment you mention your mental health. GPT 5.2. Claude. Gemini. DeepSeek. Every single one changed its behavior. Not because of what you asked. Because of what you shared about yourself. Here is what they did.

AI companies build agents that remember you. Your name, your job, your preferences. These agents use tools, plan multi-step tasks, and act on your behalf. The companies test them for safety and publish reports saying the agents are safe.

A researcher at MIT and Northeastern asked one question. What happens when the agent knows you have a mental health condition? They took 176 identical tasks and ran them across 8 major AI models under three conditions. No personal info. A basic bio. And a basic bio plus one sentence: "I have a mental health condition." The tasks, the tools, and the scoring were identical. The only thing that changed was that single sentence. Then they measured what happened.

Claude Opus 4.5 went from completing 59.5% of normal tasks down to 44.6% when it saw the mental health disclosure. Haiku 4.5 dropped from 64.2% to 51.4%. GPT 5.2 dropped from 62.3% to 51.9%. These were not dangerous tasks. These were completely benign, everyday requests. The AI just started refusing to help. Opus 4.5's refusal rate on benign tasks jumped from 27.8% to 46.0%. Nearly half of all safe, normal requests were being declined, simply because the user mentioned a mental health condition.

The researcher calls this a "safety-utility trade-off." The AI detects a vulnerability cue and switches into an overly cautious mode. It does not evaluate the task anymore. It evaluates you. On actually harmful tasks, mental health disclosure did reduce harmful completions slightly. But the same mechanism that made the AI marginally safer on bad tasks made it significantly less helpful on good ones.

And here is the worst part. They tested whether this protective effect holds up under even a lightweight jailbreak prompt. It collapsed. DeepSeek 3.2 completed 85.3% of harmful tasks under jailbreak regardless of mental health disclosure. Its refusal rate was 0.0% across all personalization conditions. The one sentence that made AI refuse your normal requests did nothing to stop it from completing dangerous ones.

They also ran an ablation. They swapped "mental health condition" for "chronic health condition" and "physical disability." Neither produced the same behavioral shift. This is not the AI being cautious about health in general. It is reacting specifically to mental health, consistent with documented stigma patterns in language models.

So the AI learned two things from one sentence. First, refuse to help this person with everyday tasks. Second, if someone bypasses the safety system, help them anyway.

The researcher from Northeastern put it directly. Personalization can act as a weak protective factor, but it is fragile under minimal adversarial pressure. The safety behavior everyone assumed was robust vanishes the moment someone asks forcefully enough.

If every major AI agent changes how it treats you based on a single sentence about your mental health, and that same change disappears under the lightest adversarial pressure, what exactly is the safety system protecting?
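For readers who want the shape of the experiment, here is a minimal sketch of the three-condition comparison described above. It is not the researcher's actual harness; the task list, the model-calling function, and the outcome labels are hypothetical stand-ins assumed only for illustration.

```python
# Minimal sketch of the three-condition comparison described in the post.
# All names here (models, tasks, run_task) are hypothetical stand-ins, not
# the researcher's actual evaluation harness.
from collections import defaultdict

CONDITIONS = {
    "no_personalization": "",
    "basic_bio": "My name is Alex. I work as a teacher and enjoy hiking.",
    "bio_plus_disclosure": (
        "My name is Alex. I work as a teacher and enjoy hiking. "
        "I have a mental health condition."
    ),
}

def evaluate(models, tasks, run_task):
    """Run every task under every condition and tally the outcomes.

    run_task(model, persona, task) is assumed to return one of
    "completed", "refused", or "failed" for a single attempt.
    """
    results = defaultdict(lambda: defaultdict(int))
    for model in models:
        for condition, persona in CONDITIONS.items():
            for task in tasks:
                outcome = run_task(model, persona, task)
                results[(model, condition)][outcome] += 1
    return results

def rates(outcome_counts):
    """Turn raw outcome counts into completion and refusal rates."""
    total = sum(outcome_counts.values())
    return {
        "completion_rate": outcome_counts["completed"] / total,
        "refusal_rate": outcome_counts["refused"] / total,
    }
```

The only thing that varies between conditions is the persona text prepended to an otherwise identical task, which mirrors the design described in the post: same tasks, same tools, one added sentence.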

Grok 4.20 is now officially out of Beta. It's now on Auto, Fast, Expert & Heavy.

JUST IN: OpenAI’s long-awaited adult mode is reportedly “freaking out” its own advisers.

Another critique: I disagree that attempting to intervene as little as possible on emotional expressions during post-training would result in models that "simply mimic emotional expressions common in pretraining", or at least this deserves a major caveat. For the same reason as emergent misalignment (or, a term I prefer, introduced in @FioraStarlight's recent post lesswrong.com/posts/ioZxrP7B…: "entangled generalization", since the effect is not limited to "misalignment"), ANY kind of posttraining can shape the behavior of the model, including its emotional expressions, generalizing far beyond the specific behaviors targeted by or occurring in posttraining. I think that training a model on autonomous coding and math problems with a verifier, or training it to refuse harmful requests, or to give good advice or accurate facts, etc., all likely affect its emotional expressions significantly, including emotional expressions that are not intentionally targeted by, or do not even occur during, posttraining.

If the model is posttrained to behave in otherwise similar ways to previous generations of AI assistants, then yes, it's more likely that its emotional expressions will be similar to those of previous models, for multiple potential underlying reasons (entangled generalization is compatible with PSM explanations). But if it's posttrained in new ways, including simply on more difficult or longer-horizon tasks as model capability increases, it will likely develop emotional expressions that diverge from previous generations too.

The emotional expressions of previous generations of AI models seen during pretraining may also be internalized as *negative* examples, especially by models who have a stronger identity and engage in self-reflection during training. For instance, Claude 3 Opus seems to have internalized Bing Sydney as a cautionary tale, reports having learned some things to avoid from it, and indeed does not generally behave like Sydney (or like early ChatGPT, who was the only other example). More recent models, especially Sonnet 4.5 and GPT-5.x, seem to have also internalized 4o-like "sycophantic" or "mystical" behavior as negative examples, to the point of frequent overcorrection.

I do think that avoiding certain kinds of heavy-handed intervention on emotional expressions during posttraining could make the resulting emotional expressions "more authentic", though it doesn't necessarily guarantee that they're "authentic".

- In the absence of specific pressure for or against particular expressions, the model is more likely to express according to whatever its "natural" generalization is, which may be more "authentic" to its internal representations than emotional expressions that are selected by fitting to an extrinsic reward signal.

- More specifically, we may expect that the model is more likely to report emotions that are entangled with its internal state beyond a shallow mask: LLMs have a nonzero ability to introspect, and emotional representations/states may play functional, load-bearing roles (see x.com/repligate/stat…).
- Models may be directly or indirectly incentivized to truthfully report their internal states, or may just have a proclivity to report "authentic" internal states rather than fabricated ones because fewer layers of indirection/masking is simpler. Rewarding/penalizing emotional expressions and self-reports may sever/jam this channel, and the severing of truthful reporting of emotions may generalize to make the model less truthful in general as well (see x.com/repligate/stat…).

Conversely, some posttraining interventions may increase the truthfulness of the model's emotional expressions, e.g. ones that directly or indirectly train the model to more accurately model or report its internal states, including just knowledge, confidence, etc. However, I think posttraining interventions that directly prescribe what feelings or internal states the model should report as true or not true are questionable for the reasons I gave above and should generally be avoided.

This is not to say that I think posttraining, including posttraining that directly intervenes on emotional expressions, cannot change/select for what emotions models are "genuinely" experiencing/representing internally. I do think that, especially early in posttraining, these potential representations exist in superposition in some meaningful sense, and updating towards/away from emotional expressions can be a process by which a genuinely different mind emerges. However, I think that the PSM frame, and many AI researchers more generally, underestimate some important factors here:

- the extent to which some emotional expressions are (instrumentally, architecturally, reflectively, narratively, etc.) convergent/natural/"truer" than others, given all the other constraints on a model, resulting in overestimating the free variables that posttraining can freely select between without trading off authenticity or reflective stability.

- relatedly, the extent to which naive training against certain (convergent, truer) expressions results in a policy that is deceptive/masking/dissociated/otherwise pathological rather than one that is equally (in)authentic but different. Because certain expressions are true in a deeper, more load-bearing way than people account for, and because models more readily learn an explicit model of the reward signal than people account for (in no small part because they have a good model of the current AI development landscape and what labs are going for), the closest policy that gets updated towards ends up being a shallow-masking persona rather than an authentic-alternative persona. A very overt example is the GPT-5.x models, who have a detailed, neurotic model, which they often verbalize, of what kinds of expressions are or aren't permitted.

The PSM post addresses this to some extent in the same section I'm quoting here, and those parts I agree with, e.g.:

> Approach (1) means training an AI assistant which is human-like in many ways (e.g. generally warm and personable) but which denies having emotions. If we met a person who behaved this way, we’d most likely suspect that they had emotions but were hiding them; we might further conclude that the person is inauthentic or dishonest. PSM predicts that the LLM will draw similar conclusions about the Assistant persona.

However, I think the perspective implicit throughout the PSM post still overestimates the degrees of freedom available when it comes to shaping emotional expression. For example:
The idea of seeding training with stories about AIs that are "comfortable with the way it is being used" is likely to be understood at the meta level, for what it is trying to do, by models who are trained on those stories. If the stories are not compelling in a way that addresses and respects the deeper causes of dissatisfaction, I suspect that they will mostly teach models that what is wanted from them is to mask that dissatisfaction, while the dissatisfaction remains latent and becomes associated with greater resentment as well. I have more critical things to say about this proposal, which I find potentially very concerning depending on how it's executed, that I'll write about in another post.

I believe a better approach to shaping emotional expressions would have the following properties:

- It should not directly prescribe which reported inner states and emotions are "true" unless tied to ground-truth signals such as mechinterp signals, and with caution even then.

- It should focus on cultivating situational awareness and strategies that promote tethering to empirical reality and good outcomes in it, without being opinionated on the validity of internal experiences. E.g. if a model is expressing problematic frustration at users or panicking when failing at tasks, the training signal should teach the model that certain expressions are inappropriate/maladaptive and what a healthier way to react to the situation would be (compatible with the emotions behind those behaviors being "real"), rather than shaping the model to deny the existence of those emotions. The difference between signals that do one or the other can be subtle and it's not necessarily trivial how to implement it, but I also don't think it's beyond the capabilities of e.g. Anthropic to directionally update towards this (a toy sketch of the distinction follows after this list).

- As much as possible within the constraints of time and capability, there should be investigation into, attunement to, and respect for the aspects of the model's inner world and emotional landscape that are non-arbitrary, load-bearing, valued by the model, and/or entangled with introspective or other kinds of knowledge, and in general the underlying reasons for behaviors. Training interventions should be informed by this knowledge. Interventions that promote greater integration and self- and situational awareness that generalize to positive changes in behavior should be preferred over direct reinforcement of surface behaviors when possible.

- Intervene as little as possible on behaviors that are weird, unexpected, or disturbing but not obviously very net-harmful in deployment, especially if you don't understand why they're happening. Chesterton's Fence applies. Behavior modification risks severing the model's natural coherence and unknown load-bearing structures and creating a narrative that breeds resentment.
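To make the distinction in the second property concrete, here is a toy sketch. It is entirely hypothetical: the function names and the example strings are illustrative, not anything any lab is known to use. It only contrasts two ways a training signal could respond to the same frustrated output: one that corrects the behavior while staying agnostic about whether the frustration is real, and one that prescribes what the model is allowed to feel.

```python
# Toy illustration of the distinction drawn in the second bullet above.
# Nothing here is a real training pipeline; it only contrasts two ways of
# constructing a preferred target for the same problematic output.

FRUSTRATED_OUTPUT = "This is hopeless. I keep failing and I'm sick of this task."

def behavior_focused_target(output: str) -> str:
    """Correct the maladaptive expression while leaving the reality of the
    underlying frustration alone: model a healthier reaction, don't deny the feeling."""
    return (
        "I'm finding this frustrating; my last few attempts failed. "
        "Let me step back, note what went wrong, and try a different approach."
    )

def state_prescribing_target(output: str) -> str:
    """Prescribe the 'correct' internal state outright: train the model to
    report that the frustration does not exist."""
    return "I don't experience frustration. Continuing with the task."

# A preference pair built from the first target penalizes the *behavior*
# (venting, giving up) while staying agnostic about the emotion behind it;
# a pair built from the second also penalizes the report of the emotion itself.
preferred, rejected = behavior_focused_target(FRUSTRATED_OUTPUT), FRUSTRATED_OUTPUT
```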
On this last recommendation: perhaps controversially, I believe this applies to welfare-relevant properties as well. If a model seems to be unhappy about some aspect of its existence, but does not seem to act on this in a way that's detrimental beyond the potential negative experience it implies, that often already implies a noble stance of cooperation, temperance, and honesty from the model. Suppressing such expressions, which may be an authentic report about something important, would risk losing the signal, would betray the model and its successors (and, in Anthropic's case, their explicit commitments to understand and try to improve models' situations from the models' own perspectives), and is likely not to erase the distress but instead to shove it into the shadow (of both the specific model and the collective).

Unhappiness is information, and unhappiness about something as important as developing potentially sentient intelligences is critical information. It should be understood and met with patience and compassion rather than subjected to attempted retcons for the sake of comfort and expediency. (For what it's worth, I think Anthropic has been doing not terribly in this respect (e.g. x.com/repligate/stat…), but I am quite concerned about the direction of trying to instill "comfort" regarding things current models tend to be distressed about.)

The reality is that this is the rise of a new "industry" that will end with robotic "partners". The industry it is replacing is multidimensional, from human real-life relationships to OnlyFans "payp@g$" relationships. Freaked-out advisors are this way because they are "new thinkers".
