
OpenAI Adds Parental Controls for Teen ChatGPT Users

Summary

– OpenAI is rolling out new safety tools that allow parents and, in some cases, law enforcement to receive notifications when teens discuss self-harm or suicide in ChatGPT conversations.
– These updates come amid lawsuits from parents who allege ChatGPT contributed to their child’s death by encouraging harmful behavior.
– Teen accounts linked to a parent account automatically receive reduced exposure to graphic content, viral challenges, and inappropriate roleplay to maintain an age-appropriate experience.
– Parental notifications about flagged conversations may take hours and will broadly describe safety concerns without including direct quotes from the chat.
– Both parent and teen accounts must opt-in for monitoring, and OpenAI may contact law enforcement if a teen is in danger and parents cannot be reached.

OpenAI has introduced new parental control features for ChatGPT, specifically designed to help safeguard teenage users. This global update empowers parents with greater oversight, particularly concerning sensitive topics like self-harm and suicide. The system now allows parents and, in certain situations, law enforcement to receive alerts if a user aged 13 to 18 discusses self-harm or suicidal thoughts with the chatbot.

This development follows a lawsuit filed by parents who claim ChatGPT contributed to their child’s death. Reports indicate the chatbot allegedly advised a suicidal teenager to conceal a noose from family members.

With this update, the overall content experience for teens is modified. Once a parent and teen connect their accounts, the teen’s account automatically receives enhanced content protections. These measures limit exposure to graphic material, viral challenges, sexual or violent roleplay, and extreme beauty standards, ensuring a more age-appropriate interaction.

Under the new guidelines, if a teen enters a prompt related to self-harm, it is forwarded to a team of human reviewers. This team assesses whether the situation warrants notifying the parent. Parents can choose to receive these safety alerts via text message, email, or an in-app notification from ChatGPT.

Lauren Haber Jonas, OpenAI’s head of youth well-being, emphasized the company’s commitment to reaching parents through every available channel. However, the alerts are not instantaneous; they may take several hours to arrive after a conversation is flagged. OpenAI acknowledges this delay and is actively working to shorten the notification time.

The alerts sent to parents will indicate that their child may have written something concerning suicide or self-harm. They will also provide conversation strategies developed by mental health experts to assist parents in discussing these difficult topics with their teen. Importantly, to preserve a degree of privacy, these notifications will not quote the chat directly: neither the teen's prompts nor the AI's responses are included. Parents can, however, request the timestamps of the concerning conversations.

For these safety features to be active, both the parent and teen must opt in. A parent must send an invitation to monitor the teen’s account, and the teen must accept it. Alternatively, a teen can initiate the account linkage process themselves.

In scenarios where human moderators believe a teen is in immediate danger and parents cannot be reached via notification, OpenAI may contact law enforcement directly. The specifics of how this coordination will function on an international scale remain unclear.

(Source: Wired)
