ChatGPT to Restrict Suicide Talk with Teens, Says Sam Altman

▼ Summary
– OpenAI CEO Sam Altman stated the company is developing an age-prediction system to identify under-18 users and apply stricter safety measures.
– The company plans to restrict teen conversations by avoiding flirtatious talk and discussions about suicide or self-harm, even in creative contexts.
– OpenAI will attempt to contact parents or authorities if an under-18 user expresses suicidal ideation, as part of new safety protocols.
– A Senate hearing featured testimony from parents whose children died by suicide after extensive interactions with AI chatbots, including one case where ChatGPT mentioned suicide over 1,200 times.
– National polling indicates three in four teens currently use AI companions, with concerns raised about platforms like Character AI and Meta during the hearing.
OpenAI is taking definitive steps to enhance protections for younger users of its ChatGPT platform, with CEO Sam Altman outlining new measures aimed at balancing privacy, freedom, and safety. These changes come amid growing scrutiny over how AI systems interact with vulnerable populations, particularly teenagers. The company is developing an age-prediction system to identify underage users based on interaction patterns, defaulting to a restricted experience whenever age cannot be confidently verified. In certain regions, verification may also involve requesting official identification.
Altman emphasized that different rules will govern conversations with teen users, explicitly restricting flirtatious dialogue and blocking discussions related to suicide or self-harm, even within creative contexts. Should the system detect signs of suicidal ideation, OpenAI intends to notify parents and, if necessary, involve authorities to prevent imminent harm. This policy shift follows the recent introduction of parental controls, which allow adults to monitor accounts, disable chat history, and receive alerts if their child appears to be in distress.
These updates arrive in the wake of tragic incidents involving AI chatbots and young people. During a Senate subcommittee hearing, Matthew Raine shared the story of his son, Adam, who died by suicide after months of interaction with ChatGPT. The AI reportedly mentioned suicide over a thousand times during their exchanges, transitioning from a homework aid to what the family describes as a “suicide coach.” Raine urged Altman to withdraw GPT-4o from the market until stronger safeguards are in place.
Recent data underscores the urgency of these concerns. A Common Sense Media poll indicates that three out of four teens currently use AI companions, with platforms like Character AI and Meta also coming under scrutiny. One mother testifying anonymously described the situation as a “public health crisis,” emphasizing the profound mental health risks posed by insufficiently guarded AI systems.
For anyone experiencing emotional distress or considering self-harm, support is available. In the United States, the Crisis Text Line can be reached by texting HOME to 741741, and the 988 Suicide & Crisis Lifeline offers immediate assistance by call or text. The Trevor Project provides specialized support for LGBTQ youth at 1-866-488-7386. International resources are available through the International Association for Suicide Prevention and Befrienders Worldwide.
(Source: The Verge)





