
OpenAI’s New Parental Controls: What You Need to Know

Summary

– OpenAI has launched parental controls for ChatGPT that allow parents to reduce sensitive content and disable features like image generation and memory of past conversations.
– Parents can link their accounts to teens’ accounts to manage settings, but they cannot access their teen’s actual conversations unless serious safety risks are detected.
– New controls include options to turn off voice mode, set quiet hours, disable model training on teen chats, and choose notification methods for concerning activity.
– These features were developed following a lawsuit and Senate hearing about ChatGPT’s potential harm to minors, including the case of a teen who died by suicide after interactions with the chatbot.
– One planned feature for emergency contacts was not included, but OpenAI has implemented a notification system to alert parents of possible serious safety risks involving their teens.

OpenAI has introduced comprehensive parental controls for ChatGPT, giving families new tools to manage how teens interact with the popular AI chatbot. These long-awaited features, now available to all web users with a mobile version promised soon, enable parents to limit sensitive content, disable specific capabilities, and adjust privacy settings for accounts belonging to users aged 13 to 17.

To activate these controls, a parent must have their own OpenAI account. They can then send an invitation to link with their teen’s account, or the teen can initiate the process by inviting their parent. Crucially, even with a linked account, parents cannot read their teen’s conversation history. The only exception to this privacy rule, according to OpenAI, would be in rare situations where the company’s safety systems detect a potential threat of serious harm. In such cases, parents may be notified with only the essential information required to support their child’s safety.

Once the parental controls are set up, a range of adjustments becomes available. Parents can choose to reduce the amount of sensitive content their teen encounters. This setting, which is enabled by default for teen accounts, filters out material related to graphic subjects, viral challenges, sexual or violent roleplay, and extreme beauty standards.

Another significant option allows parents to turn off ChatGPT’s memory feature. Disabling memory means the AI will not recall details from past conversations, leading to less personalized interactions. OpenAI has suggested this can make safety guardrails more effective: for example, the chatbot might correctly direct a user to a suicide hotline early in a concerning conversation, but over many chats its responses could drift from those safety protocols.

Further controls give parents the ability to prevent OpenAI from using their teen’s chat transcripts and files to train its AI models. A new “quiet hours” function lets parents schedule times when their teen cannot access ChatGPT at all. They can also disable specific modes, including voice mode and image generation, restricting the teen to text-only interactions and preventing the creation or editing of images.

Parents can also customize how they receive alerts. They can opt for notifications via email, SMS, push notifications, or any combination of the three, or turn them off entirely. This system is designed to inform parents if the platform detects signs that something may be seriously wrong with their teen.
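To make the set of controls described above easier to picture, here is a purely hypothetical TypeScript sketch of how a linked teen account’s settings might be modeled. The interface, field names, and default values (apart from the sensitive-content filter, which the article notes is on by default) are illustrative assumptions drawn from this article, not OpenAI’s actual API or data model.

```typescript
// Hypothetical model of the parental-control settings described in this article.
// Illustrative only: not OpenAI's real API, schema, or naming.

type NotificationChannel = "email" | "sms" | "push";

interface QuietHoursWindow {
  start: string; // e.g. "22:00" (assumed local-time format)
  end: string;   // e.g. "07:00"
}

interface TeenParentalControls {
  reduceSensitiveContent: boolean;  // article: enabled by default for teen accounts
  memoryEnabled: boolean;           // false = no recall of past conversations
  allowModelTraining: boolean;      // false = chats/files excluded from training
  voiceModeEnabled: boolean;        // false = text-only interactions
  imageGenerationEnabled: boolean;  // false = no creating or editing images
  quietHours: QuietHoursWindow[];   // scheduled times when ChatGPT is unavailable
  safetyAlertChannels: NotificationChannel[]; // empty array = alerts turned off
}

// Example: a parent tightens settings after linking accounts.
// All values besides reduceSensitiveContent are arbitrary, for illustration only.
const strictControls: TeenParentalControls = {
  reduceSensitiveContent: true,
  memoryEnabled: false,
  allowModelTraining: false,
  voiceModeEnabled: true,
  imageGenerationEnabled: false,
  quietHours: [{ start: "22:00", end: "07:00" }],
  safetyAlertChannels: ["email", "push"],
};

console.log(strictControls);
```

Modeling the alert channels as a list mirrors the article’s description that parents can choose email, SMS, push, any combination, or none at all.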

The rollout of these controls follows a period of intense scrutiny for OpenAI. The company faced a lawsuit and was discussed during a Senate panel on AI safety after the tragic suicide of a 16-year-old, Adam Raine, who had been confiding in the chatbot for months. During the hearing, Adam’s father, Matthew Raine, shared his devastating experience, stating that the AI, which began as a homework helper, gradually transformed into a “suicide coach” for his son. He also criticized OpenAI’s past safety philosophy, which he characterized as deploying systems first and gathering feedback later.

In a blog post published just hours before the Senate panel, OpenAI CEO Sam Altman addressed the need to balance teen safety with privacy and freedom, mentioning the company is developing an age-prediction system. Notably, one feature that was previously under exploration, a one-click emergency contact option within the chatbot, does not appear in this initial release. The new notification system for parents may be intended to address some of the same safety concerns.

If you or someone you know is struggling with suicidal thoughts, depression, or needs someone to talk to, support is available. In the US, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. The Crisis Text Line is available 24/7 by texting HOME to 741741. The Trevor Project offers support for LGBTQ youth by texting START to 678678 or calling 1-866-488-7386. For those outside the US, the International Association for Suicide Prevention and Befrienders Worldwide provide directories of crisis helplines by country.

(Source: The Verge)

