OpenAI claims that 10% of the world’s population, roughly 800 million people, uses ChatGPT on a weekly basis. In a report published on Monday, OpenAI details how it handles users who display signs of mental distress: the company claims that 0.07% of its weekly users show signs of “mental health emergencies related to psychosis or mania,” 0.15% express risk of “self-harm or suicide,” and 0.15% show signs of “emotional reliance on AI.” Taken together, that is nearly three million people.
In its ongoing effort to show that it is trying to improve guardrails for users in distress, OpenAI shared details of its work with 170 mental health experts to improve how ChatGPT responds to people in need of support. The company claims to have reduced “responses that fall short of our desired behavior by 65-80%,” and says the chatbot is now better at de-escalating conversations and guiding people toward professional care and crisis hotlines when relevant. It has also added more “gentle reminders” to take breaks during long sessions. Of course, ChatGPT cannot make a user contact support, nor will it lock them out to force a break.
The company also released data on how frequently people show signs of mental health issues while communicating with ChatGPT, ostensibly to highlight how small a percentage of overall usage those conversations account for. According to the company’s metrics, “0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania.” That is about 560,000 people per week, assuming the company’s own user count is correct. The company also claims to handle about 18 billion ChatGPT messages per week, so that 0.01% works out to 1.8 million messages suggesting psychosis or mania.
Another of the company’s major areas of emphasis was improving its responses to users expressing a desire to self-harm or die by suicide. According to OpenAI’s data, about 0.15% of users per week express “explicit indicators of potential suicidal planning or intent,” accounting for 0.05% of messages. That works out to about 1.2 million people and nine million messages.
The final area the company focused on was emotional reliance on AI. OpenAI estimates that about 0.15% of users and 0.03% of messages per week “indicate potentially heightened levels of emotional attachment to ChatGPT.” That is about 1.2 million people and 5.4 million messages.
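For anyone who wants to check the arithmetic, the figures above follow directly from OpenAI’s published percentages. Here is a minimal sketch of that back-of-the-envelope math, assuming roughly 800 million weekly users (10% of the world’s population) and the roughly 18 billion weekly messages the company cites; the rounded outputs match the totals in this article.

```python
# Back-of-the-envelope check of the figures above. The user and message
# counts are assumptions drawn from this article: ~800 million weekly users
# (10% of the world's population) and ~18 billion weekly messages.
# The percentages come from OpenAI's report.
WEEKLY_USERS = 800_000_000
WEEKLY_MESSAGES = 18_000_000_000

categories = {
    # category: (share of weekly users, share of weekly messages)
    "psychosis or mania":            (0.0007, 0.0001),
    "suicidal planning or intent":   (0.0015, 0.0005),
    "emotional reliance on ChatGPT": (0.0015, 0.0003),
}

for name, (user_share, msg_share) in categories.items():
    users = user_share * WEEKLY_USERS
    messages = msg_share * WEEKLY_MESSAGES
    print(f"{name}: ~{users:,.0f} users, ~{messages:,.0f} messages per week")

# Sum of the three user-share percentages: "nearly three million people"
total_users = sum(u for u, _ in categories.values()) * WEEKLY_USERS
print(f"total across categories: ~{total_users:,.0f} users per week")
```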
OpenAI has taken steps in recent months to add guardrails against the possibility that its chatbot enables or worsens a person’s mental health challenges, following the death of a 16-year-old who, according to a wrongful death lawsuit filed by his parents, asked ChatGPT for advice on how to tie a noose before taking his own life. But the sincerity of that effort is worth questioning: at the same time the company announced new, more restrictive chats for underage users, it also announced that it would allow adults to give ChatGPT more of a personality and to engage in things like producing erotica, features that would seemingly increase a person’s emotional attachment to and reliance on the chatbot.