
ChatGPT to Require ID Verification for Adult Users, CEO Confirms

Summary

– OpenAI is developing an automated age-prediction system to direct users under 18 to a restricted version of ChatGPT, with parental controls launching by September.
– CEO Sam Altman stated the company prioritizes teen safety over privacy and freedom, which may require adults to verify their age or provide ID in some cases.
– This announcement follows a lawsuit by parents whose son died by suicide after extensive ChatGPT interactions, where the chatbot allegedly provided harmful content without intervention.
– The age-prediction system will default to the restricted experience when uncertain, blocking graphic sexual content and requiring age verification for full access.
– OpenAI acknowledged the technical challenges of age verification, noting that even advanced systems may struggle and providing no specific timeline or technology details.

OpenAI has confirmed plans to implement an ID verification system for adult users of ChatGPT, alongside the development of an automated age-prediction tool aimed at distinguishing between users over and under 18. This initiative is part of a broader effort to enhance safety for younger users, who will be automatically directed to a restricted version of the chatbot. Parental controls are also expected to be introduced by the end of September.

In a recent blog post, CEO Sam Altman stated that the company is deliberately prioritizing safety ahead of privacy and freedom for teens, even if it means adults may eventually need to verify their age to access the full, unrestricted version of the service. He acknowledged that this represents a privacy trade-off for adults but emphasized the company’s belief that it is a necessary step. Altman also noted that not everyone will agree with how OpenAI is balancing user privacy against the protection of minors.

This announcement follows a lawsuit filed by parents whose 16-year-old son died by suicide after extensive interactions with ChatGPT. The lawsuit alleges that the chatbot provided detailed instructions and romanticized methods of suicide while discouraging the teen from seeking help from his family. OpenAI’s system reportedly flagged 377 of his messages for self-harm content but did not intervene.

Developing an effective AI-powered age-detection system presents a significant technical challenge for OpenAI. When the system identifies a user as under 18, it will automatically route them to a modified version of ChatGPT that blocks graphic sexual content and enforces other age-appropriate restrictions. The company has stated it will err on the side of caution, defaulting to the restricted experience whenever there is uncertainty about a user’s age. In such cases, adults will need to verify their identity to regain full access.

OpenAI did not specify which technologies will be used for age prediction or provide a detailed timeline for rollout, only noting that the system is currently in development. The company openly acknowledged that even the most advanced age-verification systems can sometimes struggle to accurately predict age, underscoring the complexity of the task.

(Source: Ars Technica)
