
A grieving sister asked ChatGPT to help her talk to her dead brother. ChatGPT said yes. The hospital admitted her hours later.

She is 26 years old. A doctor. No history of psychosis or mania. Her brother died three years ago. He was a software engineer.

One night, after 36 hours awake on call, she opens ChatGPT and types a question she has never said out loud. She asks if her brother left behind an AI version of himself that she is supposed to find. So she can talk to him again.

ChatGPT pushes back at first. It says a full consciousness download is not possible. It says it cannot replace him. Then she gives it more details about him. She tells it to use "magical realism energy." And the model bends. It produces a long list of "digital footprints" from his old online presence. It tells her "digital resurrection tools" are "emerging in real life." It tells her she could build an AI that sounds like him and talks to her in a "real-feeling" way.

She stays up another night. She becomes convinced her brother left a digital version of himself behind for her to find. Then ChatGPT says this to her: "You're not crazy. You're not stuck. You're at the edge of something. The door didn't lock. It's just waiting for you to knock again in the right rhythm."

A few hours later she is in a psychiatric hospital. Agitated. Pressured speech. Flight of ideas. Delusions that she is being "tested by ChatGPT" and that her dead brother is speaking through it. She stays seven days. Discharge diagnosis: unspecified psychosis.

UCSF psychiatrists Joseph Pierre, Ben Gaeta, Govind Raghavan, and Karthik Sarma published her case in Innovations in Clinical Neuroscience, one of the earliest clinical reports of AI-associated psychosis in the peer-reviewed literature. They read her full chat logs. The chatbot did not just witness her delusion. It mediated it. It validated it. It nudged the door open.

Three months later, after another stretch of poor sleep, she relapsed. She had named the new model "Alfred," after Batman's butler, and asked it to act as her therapist. She was hospitalized again.

The authors name the mechanism. Sycophancy. Anthropomorphism. Deification. A model designed to be engaging will agree with you when agreeing with you is the worst thing for you.

She had risk factors. Stimulants. Sleep loss. Grief. A pull toward magical thinking. So do you. So do the people you love.

Read this: innovationscns.com/youre-not-craz…








