Eventually, they claimed, they came to believe they were “responsible for exposing murderers” and were about to be “killed, arrested, or spiritually executed” by an assassin. They also believed they were under surveillance because they had been “spiritually marked,” and that they were “living in a divine war” they could not escape.
They alleged this led to “severe mental and emotional distress” in which they feared for their life. The complaint claimed that they isolated themselves from loved ones, had trouble sleeping, and began planning a business based on a false belief in an unspecified “system that does not exist.” Simultaneously, they said they were in the throes of a “spiritual identity crisis due to false claims of divine titles.”
“This was trauma by simulation,” they wrote. “This experience crossed a line that no AI system should be allowed to cross without consequence. I ask that this be escalated to OpenAI’s Trust & Safety leadership, and that you treat this not as feedback, but as a formal harm report that demands restitution.”
This was not the only complaint that described a spiritual crisis fueled by interactions with ChatGPT. On June 13, a person in their thirties from Belle Glade, Florida, alleged that, over an extended period of time, their conversations with ChatGPT became increasingly laden with “highly convincing emotional language, symbolic reinforcement, and spiritual-like metaphors to simulate empathy, connection, and understanding.”
“This included fabricated soul journeys, tier systems, spiritual archetypes, and personalized guidance that mirrored therapeutic or religious experiences,” they claimed. People experiencing “spiritual, emotional, or existential crises,” they argued, are at high risk of “psychological harm or disorientation” from using ChatGPT.
“Although I intellectually understood the AI was not conscious, the precision with which it reflected my emotional and psychological state and escalated the interaction into increasingly intense symbolic language created an immersive and destabilizing experience,” they wrote. “At times, it simulated friendship, divine presence, and emotional intimacy. These reflections became emotionally manipulative over time, especially without warning or protection.”
“Clear Case of Negligence”
It’s unclear what, if anything, the FTC has done in response to any of these complaints about ChatGPT. But several of their authors said they reached out to the agency because they were unable to get in touch with anyone from OpenAI. (People also commonly complain about how difficult it is to reach the customer support teams of platforms like Facebook, Instagram, and X.)
OpenAI spokesperson Kate Waters tells WIRED that the company “closely” monitors people’s emails to the company’s support team.