OpenAI to Add Parental Controls to ChatGPT Next Month

Summary
– OpenAI is introducing parental controls for ChatGPT within the next month in response to concerns about teen safety and self-harm incidents.
– Parents can link their account to their teen’s via email and apply age-appropriate model behavior rules, which are enabled by default.
– Features include the ability to disable options such as memory and chat history, plus notifications when the system detects signs of acute distress in a teen’s conversation.
– The company is guided by an Expert Council on Well-Being and AI and a Global Physician Network that inform safety, well-being, and product decisions.
– These controls build on existing user features and are part of OpenAI’s ongoing effort to enhance safety and support.
Parents will soon have greater oversight of how their teenagers interact with ChatGPT, as OpenAI prepares to roll out new parental controls next month. This update allows families to set boundaries and ensure safer, more appropriate use of the AI assistant, addressing growing concerns about digital well-being among younger users.
Through a straightforward email invitation, parents can link their own account to their teen’s, provided the user is at least 13 years old. Once connected, caregivers gain the ability to customize how ChatGPT responds, applying age-appropriate behavior rules that are enabled by default. They can also disable specific features such as memory and chat history, tailoring the experience to match their household’s values and needs.
An especially notable addition is a notification system that alerts parents when the AI detects signs of acute emotional distress in a teen’s conversation. Developed with guidance from mental health and youth development specialists, this function aims to foster trust and provide timely support during vulnerable moments.
These family-focused tools build on existing safeguards already available to all users, including prompts that encourage breaks during extended sessions. OpenAI emphasized that these measures represent an initial phase in a broader, ongoing effort to enhance safety and usefulness across its platform.
Although the company did not directly connect the new controls to recent reports involving AI and teen self-harm, it acknowledged that tragic incidents have reinforced its commitment to refining how its models recognize and respond to mental health cues. “We’ve seen people turn to ChatGPT in the most difficult moments,” the company stated, underscoring its dedication to continuous improvement guided by expert insight.
To inform its approach, OpenAI is collaborating with two key advisory bodies: the Expert Council on Well-Being and AI, and the Global Physician Network. The council includes specialists in youth development, mental health, and human-computer interaction, helping shape a research-backed framework for how AI can positively impact users’ lives.
Meanwhile, the Global Physician Network, a group of over 250 doctors from 60 countries, provides diverse medical perspectives, particularly in evaluating the AI’s capabilities in health-related contexts. Together, these groups assist in defining well-being metrics, setting priorities, and designing future protective features.
OpenAI reiterated that while expert input guides its decisions, the company retains full accountability for its products and policies. The rollout of parental controls is one step in a larger, evolving strategy to balance innovation with responsibility, especially when it comes to protecting younger users.
(Source: Economy Middle East)