
OpenAI Admits ChatGPT Missed Signs of Delusions in Users Struggling With Mental Health


After over a month of providing the same copy-pasted response amid mounting reports of “AI psychosis,” OpenAI has finally admitted that ChatGPT has been failing to recognize clear signs that users are struggling with their mental health, including suffering from delusions.

“We don’t always get it right,” the AI maker wrote in a new blog post, under a section titled “On healthy use.”

“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” it added. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”

Though it has previously acknowledged the issue, OpenAI has been noticeably reticent amid widespread reporting about its chatbot’s sycophantic behavior leading users to suffer breaks with reality or experience manic episodes.

What little it has shared mostly comes from a single statement that it has repeatedly sent to news outlets, regardless of the specifics — be it a man who died by “suicide by cop” after falling in love with a ChatGPT persona, or others who were involuntarily hospitalized or jailed after becoming entranced by the AI.

“We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher,” the statement reads. “We’re working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior.”

In response to our previous reporting, OpenAI also shared that it had hired a full-time clinical psychiatrist to help research the mental health effects of its chatbot.

It’s now taking those measures a step further. In this latest update, OpenAI said it’s convening an advisory group of mental health and youth development experts to improve how ChatGPT responds during “critical moments.”

In terms of actual updates to the chatbot, progress appears incremental. OpenAI said it added a new safety feature in which users will now receive “gentle reminders” encouraging them to take breaks during lengthy conversations — a perfunctory, bare-minimum intervention that seems bound to become the industry equivalent of a “gamble responsibly” footnote in betting ads.

It also teased that “new behavior for high-stakes personal decisions” will be coming soon, conceding that the bot shouldn’t give a straight answer to questions like “Should I break up with my boyfriend?”

The blog concludes with an eyebrow-raising declaration.

“We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured?” the blog reads. “Getting to an unequivocal ‘yes’ is our work.”

The choice of words speaks volumes: it sounds like, by the company’s own admission, it’s still getting there.

More on OpenAI: It Doesn’t Take Much Conversation for ChatGPT to Suck Users Into Bizarre Conspiratorial Rabbit Holes
