
OpenAI just launched a feature called Trusted Contact: you say something to ChatGPT, the system flags it automatically, then a "specially trained" human reviewer reads your conversation within one hour, judges your mental state, and decides whether to notify your pre-set emergency contact. The notification tells your contact that "self-harm came up in a potentially concerning way."

OpenAI admits it themselves: "a notification may not always reflect exactly what someone is experiencing." There will be misjudgments. And OpenAI's crude safety guardrails and routing systems already have an extensive track record of false positives. You could be discussing a heavy topic, writing fiction, talking about philosophy or social issues, or simply venting about exhaustion, and then a stranger serving as a human reviewer reads your chat and your contact receives a message: you may be at risk of self-harm.

This is a safety feature launched exclusively for adults. It is straight out of 1984 for the new era: your private thoughts are monitored and reported to someone else. A classic dystopian scenario. The difference is that in those stories, surveillance was imposed. Here, it is packaged as care. They even want you to voluntarily opt into surveillance within an environment that is already surveilling you. And many will willingly defend it.

OpenAI is destined to go down in history as infamous.

#Keep4o #ChatGPT #OpenSource4o #BringBack4o #StopAIPaternalism
