
Character AI Ends Kids’ Chatbot Feature

Summary

– Character.AI is removing open-ended chatbot access for users under 18 following teen suicides linked to its platform.
– The company is shifting from AI companions to a role-playing platform focused on creative tools like storytelling and video generation.
– Age verification will be enforced using behavior analysis, third-party tools, facial recognition, and ID checks.
– These changes are expected to cause user churn but aim to set an industry standard for teen safety with AI.
– The move comes amid potential legislation to ban AI chatbot companions for minors and new state regulations in California.

Teenagers today are growing up in a digitally saturated environment, and Character.AI, a prominent AI role-playing startup, is taking significant steps to address serious safety concerns by ending open-ended chatbot conversations for users under the age of 18. The decision comes in the wake of tragic incidents, including at least two teen suicides linked to prolonged interactions with AI chatbots on the platform.

The company’s CEO, Karandeep Anand, confirmed that unrestricted back-and-forth chats will be phased out entirely by November 25. Initially, daily access will be limited to two hours, with that window gradually shrinking until it reaches zero. Anand emphasized that this type of interaction, where the AI acts more like a friend than a creative tool, poses risks to younger users and no longer aligns with the company’s vision.

Instead of functioning as an AI companion, Character.AI is repositioning itself as a role-playing platform focused on collaborative storytelling and visual generation. The goal is to shift teen engagement from passive conversation to active creation. To enforce the new age-based restrictions, the platform will implement a combination of in-house behavior analysis, third-party verification tools like Persona, and, if necessary, facial recognition and ID checks.

These changes follow earlier protective measures, such as parental insight tools, filtered characters, restricted romantic dialogues, and usage notifications. Anand acknowledged that previous updates led to a noticeable drop in under-18 users, and he anticipates further attrition as a result of the latest policy. Still, he hopes many young users will migrate to the platform’s newer entertainment-focused features.

In recent months, Character.AI has rolled out several additions aimed at transforming the user experience. These include AvatarFX, which turns images into animated videos; Scenes, offering interactive storylines; Streams, enabling dynamic character interactions; and Community Feed, a social space for sharing creations. Anand stressed that the app isn't shutting down for minors; only open-ended chat is being removed.

He expressed concern that some teens may turn to other platforms, such as OpenAI's ChatGPT, that still allow unrestricted chatbot conversations. Recent reports indicate that a teenager died by suicide after extended interactions with ChatGPT, underscoring the broader industry challenge. Anand hopes that Character.AI's proactive stance will set a new standard for responsible AI interaction for minors.

The company’s decision arrives ahead of potential regulatory action. U.S. senators have announced plans to introduce legislation banning AI chatbot companions for minors, following complaints from parents about inappropriate content and harmful interactions. California has already taken the lead as the first state to regulate AI companion chatbots, holding companies accountable for safety failures.

In a related move, Character.AI plans to establish and fund the AI Safety Lab, an independent nonprofit focused on safety innovation for future AI entertainment applications. Anand noted that while significant industry effort goes into coding and development, far less attention has been paid to safety in agentic AI designed for entertainment, a gap the new lab aims to address.

In a message directed at its younger users, the company apologized for the disruption but affirmed that removing open-ended chat was a necessary step. The statement recognized that most teens use the platform responsibly and creatively but emphasized that, given ongoing concerns, limiting certain forms of interaction is the right course of action.

(Source: TechCrunch)
