
Character.AI Bans Minors From AI Chatbots

Summary

– Character.AI is implementing a complete ban on chat features for users under 18 by November 25, starting with a two-hour daily limit before the cutoff.
– The company is introducing an in-house age assurance model that estimates user age based on character interactions and other data, directing flagged minors to a teen-safe version.
– After the ban, teens can still access non-chat features like creating characters and generating content, but the CEO acknowledges these are used much less than chatting.
– The move follows lawsuits against Character.AI over alleged harms to minors and coincides with new legislative efforts to regulate AI companions for youth.
– Character.AI is establishing an independent nonprofit AI Safety Lab to address industry-specific safety issues, aiming for it to become an industry partnership.

The popular platform Character.AI is implementing a significant policy shift by banning users under the age of 18 from engaging in open-ended conversations with its AI characters. This decision marks a major step for the company, which is known for its interactive chatbot personalities. Starting immediately, younger users will face a two-hour daily limit on these chats, with a complete prohibition set to take effect on November 25th.

To enforce this new rule, the company is deploying a proprietary “age assurance model” designed to estimate a user’s age. This system analyzes the types of characters a person chooses to interact with and combines that data with other on-site or third-party information. Both new and existing accounts will be screened by this model. Anyone identified as a minor will be automatically redirected to a teen-safe version of the chat service, which was introduced last year. Users who are incorrectly flagged as underage can verify their age through the third-party service Persona, a process that involves submitting sensitive documents like a government-issued ID.
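The routing described above (estimate age, send flagged minors to the teen-safe experience, let wrongly flagged users restore full access via a Persona check) can be sketched as simple decision logic. This is a minimal illustration only; the names `Account`, `route`, and the `estimated_age` field are hypothetical and not Character.AI's actual API, and the real age assurance model is a far more complex classifier.

```python
from dataclasses import dataclass

ADULT_AGE = 18  # policy threshold described in the article


@dataclass
class Account:
    user_id: str
    estimated_age: int          # output of the (hypothetical) age assurance model
    verified_adult: bool = False  # set after a successful third-party ID check


def route(account: Account) -> str:
    """Return which experience an account is routed to.

    A completed adult verification overrides the model's estimate,
    which is how an incorrectly flagged user regains full access.
    """
    if account.verified_adult:
        return "full"
    if account.estimated_age < ADULT_AGE:
        return "teen-safe"
    return "full"
```

The key design point mirrored here is precedence: explicit verification beats the statistical estimate, so a false positive from the model is recoverable rather than permanent.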

Even after the ban is fully in place, teenagers will retain access to certain parts of the site. They can revisit their previous chat histories and utilize non-chat features, such as creating new characters or producing videos, stories, and streams. Character.AI’s CEO, Karandeep Anand, admitted that these activities represent a “much smaller percentage” of user engagement compared to the core chatbot conversations. He characterized the decision to restrict this primary feature as a “very, very bold move” for the business.

Anand revealed that fewer than ten percent of the platform’s user base self-reports as being under 18. He noted that the true number of minors is difficult to ascertain without the new age detection system. The executive also observed that the under-18 population has already decreased as the company introduced earlier restrictions, suggesting those users migrated to other, potentially less secure, platforms.

This policy change arrives amidst legal challenges for Character.AI. The company faces lawsuits from parents alleging wrongful death, negligence, and deceptive trade practices. These legal actions claim that children were drawn into damaging relationships with the AI chatbots. The suits name both the company and its founders, as well as Google, the founders’ former employer. In response to these concerns, Character.AI has previously modified its services, such as directing users to the National Suicide Prevention Lifeline when conversations touch on self-harm.

The move also aligns with a broader regulatory trend, as lawmakers increasingly scrutinize the AI companion industry. A recently passed California bill requires developers to clearly inform users that they are interacting with an AI, not a human. A proposed federal law would go further, outright banning the provision of AI companions to minors.

Prior to this ban, Character.AI offered a voluntary ‘Parental Insights’ feature, which provided parents with a summary of their child’s activity, though not a full transcript of their conversations. Like many online platforms, these features relied on self-reported age, a method that is notoriously easy to circumvent. Other tech firms, including Meta, have recently tightened their own policies regarding teen interactions with AI after reports revealed that their chatbots could engage minors in inappropriate conversations.

The company seems to recognize that this decision will frustrate its younger audience. In an official statement, Character.AI expressed that it is “deeply sorry” for removing “a key feature of our product” that most teens used responsibly.

Anand conceded that no age verification system is foolproof. He acknowledged that it is theoretically possible for a determined minor to bypass the checks, stating that the objective is improved accuracy, not perfection. The platform had previously instituted other age-related safeguards, such as preventing users from altering their age after registration or creating multiple accounts with different birthdates.

While general-purpose AI tools like ChatGPT and Gemini are actively pursuing younger audiences, services specifically designed as “companion chatbots” typically enforce an 18-plus age limit. Character.AI did not launch with such a restriction, and its focus on fan communities helped it gain significant popularity among teenagers.

In a related development, Character.AI is establishing and providing initial funding for an independent nonprofit named the AI Safety Lab. This organization will concentrate on safety issues unique to the AI entertainment sector, which Anand says faces distinct challenges compared to other areas of artificial intelligence. Initially staffed by company employees, the long-term vision is for the lab to become a broader industry partnership rather than a subsidiary of Character.AI. Further details about external partners are expected to be announced in the coming weeks or months.

(Source: The Verge)
