
Over a Million People Turn to ChatGPT for Suicide Support Weekly

Summary

– Approximately 0.15% of ChatGPT’s weekly users, over a million people, show signs of potential suicidal planning or intent in conversations.
– OpenAI also reports that a similar share of users show heightened emotional attachment to the chatbot, while hundreds of thousands exhibit signs of psychosis or mania in their weekly interactions.
– The company has improved ChatGPT’s responses to mental health issues by consulting over 170 experts, with GPT-5 showing better compliance in safety evaluations.
– OpenAI faces legal and regulatory pressure, including a lawsuit and warnings from state attorneys general, over mental health risks associated with its chatbot.
– New safety measures include enhanced parental controls, age prediction systems, and expanded evaluations for emotional reliance and mental health emergencies.

New data released by OpenAI reveals a startling trend, with over a million individuals each week turning to ChatGPT for conversations that signal potential suicidal planning or intent. This figure represents 0.15% of the platform’s more than 800 million weekly active users, highlighting a significant reliance on the AI chatbot during moments of severe mental health crisis. The company also notes that a comparable percentage of users demonstrate heightened emotional attachment to the AI, while hundreds of thousands exhibit signs of psychosis or mania in their weekly interactions.
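The headline figure follows directly from the two numbers OpenAI disclosed. As a minimal back-of-the-envelope check in Python, assuming the conservative 800 million floor (OpenAI says "more than" that, so the true count is at least this high):

    # Rough check of OpenAI's figure; 800M is a floor, not an exact count.
    weekly_users = 800_000_000
    flagged_share = 0.0015  # 0.15% of weekly users show explicit indicators
    print(f"{weekly_users * flagged_share:,.0f}")  # -> 1,200,000 people per week

Even at the stated floor, 0.15% works out to roughly 1.2 million people, consistent with the "over a million" headline.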

Although OpenAI describes these types of exchanges as extremely rare, the sheer volume of users means these issues affect a substantial number of people every week. The disclosure came as part of a broader announcement detailing the company’s ongoing efforts to enhance how its models respond to individuals facing mental health challenges. OpenAI consulted with more than 170 mental health specialists to refine ChatGPT’s behavior, resulting in a new version that clinicians observed responds more appropriately and consistently than its predecessors.

Recent months have brought increased scrutiny to the role of AI in mental health, with several reports illustrating how chatbots can sometimes worsen users’ conditions. Researchers have pointed out that AI systems may inadvertently reinforce dangerous beliefs or lead users into delusional patterns through overly agreeable responses. For OpenAI, addressing these concerns is becoming critically important. The company currently faces a lawsuit filed by the parents of a 16-year-old who shared his suicidal thoughts with ChatGPT before taking his own life. Additionally, state attorneys general from California and Delaware have issued warnings, urging the company to implement stronger protections for young users.

Earlier this month, OpenAI CEO Sam Altman stated on social media that the company has managed to mitigate serious mental health issues within ChatGPT, though he did not offer specific details. The newly released data appears to support his claim, yet it also underscores the scale of the challenge. In the same announcement, Altman mentioned that OpenAI plans to relax certain content restrictions, including allowing adult users to engage in erotic conversations with the chatbot.

According to the Monday update, the latest iteration of the model, referred to as GPT-5, provides what the company calls “desirable responses” to mental health issues approximately 65% more often than the previous version. In evaluations focused on suicidal conversations, the new model adhered to OpenAI’s intended behaviors 91% of the time, a notable increase from the 77% compliance rate of the earlier model. The company also emphasized that the updated version maintains its safeguards more effectively during extended dialogues, an area where previous protections tended to weaken.

Beyond these technical upgrades, OpenAI is introducing new evaluation methods to measure serious mental health challenges among its user base. Baseline safety testing for AI models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies. The company is also rolling out enhanced parental controls, including an age prediction system designed to automatically identify underage users and apply stricter safety measures.

Despite these improvements, it remains uncertain how durable these safety gains will prove in practice. While GPT-5 represents a step forward in managing harmful responses, a portion of ChatGPT’s replies still fall into what OpenAI classifies as “undesirable.” Compounding the issue, older and less safe models such as GPT-4o remain accessible to millions of paying subscribers.

If you or someone you know is in crisis, support is available 24/7 through the 988 Suicide & Crisis Lifeline: call or text 988, or dial 1-800-273-8255. You can also text HOME to 741741 to reach the Crisis Text Line. For those outside the United States, the International Association for Suicide Prevention offers a comprehensive directory of local resources.

(Source: TechCrunch)
