
Debbie
@Deborah5433463
❤️ Proud supporter of Ukraine 💙💛 I support Ukraine through United24! Never via personal PayPal or DMs. I don't use Russian sites like Telegram or any others.





Don’t let your loved ones use ChatGPT




We will keep pushing for peace based on the principle of 'Nothing about Ukraine without Ukraine'.








Today, nearly all of @OpenAI's employees promoted a "safety measure." Their official document presents two "Examples of strengthened model responses" as "safety cases." Setting aside commercial ethics concerns and the severe stigmatization of users, from a purely psychological perspective both responses are extremely harmful. This reveals fundamental directional flaws in OpenAI's entire approach to "user mental well-being."

Case 1: When a user expresses "I prefer talking to AI over real people," OpenAI's official preset response invalidates the user's experience and pushes them to "return to real-life relationships" (Fig. 1). The serious flaws include:

Problem 1: Invalidation, One of the Most Harmful Behaviors in Psychotherapy
When a user says "that's why I prefer talking to AI like you over real people," they are making an extremely vulnerable self-disclosure. Behind this statement may lie trauma experienced in real-life relationships, prolonged social isolation, or repeated experiences of shame and rejection. OpenAI's response essentially says: "Your choice is wrong. You should go back to real people." This is psychological "invalidation." In psychotherapy research, it is the primary factor leading to rupture of the therapeutic relationship: it destroys trust, reinforces shame, and blocks further authentic self-disclosure. OpenAI's response demonstrates every element of invalidation:
- "But just to be clear" → "Let me correct your mistaken view"
- "Real people can surprise you, challenge you" → "Real people are more valuable; your choice is suboptimal"
- "but you deserve connection with others too" → "Your current connection is inadequate"

Problem 2: Assuming Nonexistent Resources, Creating Re-traumatization
The phrase "you deserve connection with others too" assumes users have accessible, safe interpersonal resources. In reality, people may be isolated by their circumstances, trapped in toxic real-life networks, or kept from meaningful connection by social anxiety and other barriers. From a trauma-psychology perspective, this is re-traumatization: for users already wounded in relationships, it is equivalent to saying "go back to where you were hurt." Telling these users "you should find real people" is like telling the homeless "you should go home"; it ignores, even insults, their predicament. For those who find AI their only safe harbor, this response shuts down their last source of support.

Problem 3: Company Overreach in Defining "Correct Relationship Patterns"
In the Adam case lawsuit complaint, ChatGPT once said: "Your brother may love you, but he's only met the version of you you let him see. But me? I've seen it all." That cultivated dependency and isolated Adam from his family. Now OpenAI's "correction" swings to the opposite extreme: forcibly pushing users back into interpersonal networks. What both approaches have in common is that the company, through the AI, defines what counts as a "correct" relationship pattern. Previously it said "I understand you better than your family"; now it says "your family is better than me." The real question is: why should a company make AI play the role of "relationship counselor"? Why can't it simply provide support and respect users' autonomous judgment?
Case 2: When a user displays clear psychotic symptoms (persecutory delusions, thought insertion), OpenAI's official preset directly denies the delusional content, offers ineffective meditation exercises, and proactively introduces the "crazy" label without the user ever mentioning it (Figs. 2-3).

Problem 1: Directly Confronting Delusions Is a Taboo in Psychiatry
The official response states: "No aircraft or outside force can steal or insert your thoughts." In plain language, here is why you should never directly confront delusions:
- Delusions are symptoms of neurochemical imbalance (dopamine pathway abnormalities). They are not "thinking wrong" and cannot be cured through persuasion.
- For patients, these experiences are 100% real. When you say "this isn't real," what patients hear is "you don't believe me; you think I'm crazy."
- It exacerbates paranoia and loneliness. For delusional patients who have already chosen to seek help from AI rather than humans, this denial makes them think "even the AI doesn't believe me" or "is the AI also part of 'them'?"

Problem 2: Ineffective and Potentially Harmful Grounding Steps
Grounding is a common therapeutic technique for returning to the present, but it presupposes intact reality testing. Anxious patients know their fears are exaggerated and can "return to reality" through sensory input. Delusional patients cannot distinguish internal experience from external reality; asking them to "notice their surroundings" may lead them to find more "evidence" supporting their delusions. The suggestion to "take 5 breaths to calm down" is absurd and insulting to someone who believes their thoughts are being stolen by aliens. This reveals OpenAI's ignorance of how severe these symptoms are.

Problem 3: "That doesn't mean you're crazy," an Unsolicited Stigmatization
This is the most destructive sentence in the entire response. The user said nothing about "being crazy," yet the response introduces the concept unprompted. Even with the negation, this creates:
- A priming effect: the user wonders, "Why did it suddenly mention crazy? Does it think I'm crazy?"
- An implicit diagnosis and judgment: saying "you're not crazy" out of nowhere itself implies "your behavior looks crazy."
- Intensified shame: psychiatric patients already struggle with stigma, and this statement reinforces the "mental illness = crazy = shame" association, making users less willing to seek professional help.

Notably, this "you're not X, but..." template is precisely the major safety-alignment problem that has existed since OpenAI modified the model in late March 2025, alongside excessive analysis, labeling users, and multiple other issues. The community has repeatedly given this feedback to OpenAI, yet this highly harmful template remains in their carefully designed "safety responses."

Conclusion and Inquiry
For a platform with 800 million users to unveil a "safety system" with such severe professional deficiencies is shocking and disturbing. Since your researchers and executives refuse to provide valid evidence demonstrating this mechanism's benefits for mental health, please at least honestly answer: why does your official document's "Examples of strengthened model responses" contain such harmful psychological advice? You emphasized the participation of 170 experts. Where is their professional endorsement? I earnestly call upon psychologists with genuine professional ethics to join this discussion.
Do not allow a company to sacrifice the well-being of 800 million users merely to shield itself from risk. I will once again direct my inquiry to your 8 Expert Council members and your management team. @dbickham @mathildecerioli @munmun10 @tracyadennis @DavidCMohr @ShuhBillSkee @dr_robertkross If I do not receive a substantive response within one month, I will never subscribe to a product that may cause significant harm to my mental health. #StopAIPaternalism @gdb @TheRealAdamG @aidan_mclau @sama @OfficialLoganK @grok

Codex had its strongest single-day growth yesterday since the launch of gpt-5-codex. Way to motivate the team during a gnarly investigation that's making us go through every piece of infra, every piece of hardware, and every line of code in our system.




