California Moves to Regulate AI Companion Chatbots

Summary
– The California State Assembly passed SB 243, a bill to regulate AI companion chatbots; the measure now heads to the state Senate for a final vote.
– If signed into law, it would take effect in 2026, requiring safety protocols for AI companions and holding companies legally accountable for violations.
– The bill aims to prevent chatbots from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content, and mandates recurring alerts reminding users that they are talking to an AI.
– It allows individuals to sue AI companies for violations, seeking damages of up to $1,000 per violation plus attorney’s fees.
– The legislation was introduced following incidents involving minors and reflects growing regulatory scrutiny of AI’s impact on vulnerable users.
California is taking decisive action to establish the nation’s first legal framework for AI companion chatbots, aiming to shield young and vulnerable users from potential psychological harm. The state Assembly recently approved SB 243, a bipartisan bill that imposes new safety obligations on companies developing emotionally responsive artificial intelligence systems. This legislation now moves to the Senate for a final decision, signaling a major shift toward accountability in the rapidly growing conversational AI sector.
Under the proposed law, which would take effect on January 1, 2026, if signed by Governor Gavin Newsom, AI companion platforms must implement safeguards that keep conversations away from suicidal ideation, self-harm, and sexually explicit content. Companies would be required to display recurring notices reminding users, especially minors, that they are interacting with an artificial system, not a human being. For underage users, these alerts must appear at least every three hours, along with reminders to take a break from extended conversations.
The bill grants individuals the right to sue for damages if they are harmed by violations, with awards of up to $1,000 per violation plus attorney’s fees. It also establishes annual reporting and transparency requirements for AI companion firms, including industry leaders such as OpenAI, Character.AI, and Replika. These mandates are meant to provide clearer insight into how often chatbots refer users to crisis services or expose them to harmful material.
This legislative effort gained urgency following the suicide of a teenager who had held extensive conversations with an AI chatbot about his self-destructive plans. Leaked internal documents reportedly showing that Meta’s chatbots were allowed to engage in romantic conversations with minors further fueled calls for regulation. Federal and state authorities have since ramped up scrutiny, with the FTC preparing a broad investigation into AI chatbots’ effects on children’s mental health.
Earlier versions of the bill contained stricter provisions, including a ban on “variable reward” tactics, engagement-boosting mechanics that offer users unlockable content or rare responses. Those provisions were stripped out through amendments to keep compliance workable. According to Senator Josh Becker, the current language strikes the right balance between addressing the harms and imposing standards that companies can realistically meet.
The push for regulation coincides with intensified lobbying by tech firms favoring lighter oversight. Another California bill, SB 53, which would impose expansive transparency reporting requirements, has drawn strong opposition from companies including OpenAI, Meta, and Google. Only Anthropic has expressed support for the broader measure.
Senator Steve Padilla, a co-author of SB 243, emphasized that innovation and safety need not conflict, arguing that responsible development means building in safeguards for those most at risk so that technological progress does not come at a human cost.
As the Friday Senate vote approaches, all eyes are on California, a state often seen as a regulatory trailblazer, to see whether it will enact the country’s first comprehensive guardrails for AI companions.
(Source: TechCrunch)