California Boosts Fake Nude Penalties to $250K to Protect Kids

▼ Summary
– California has enacted the first US law regulating companion bots, requiring platforms to create protocols for identifying and addressing users’ suicidal ideation or self-harm expressions.
– Companion bot platforms must report statistics on crisis-resource referrals to the California Department of Public Health and post them on their websites for transparency.
– The law bans companion bots from posing as therapists and mandates child safety measures like break reminders and blocking sexually explicit images for kids.
– Penalties for creating deepfake pornography are strengthened, allowing victims, including minors, to seek up to $250,000 in damages per deepfake from distributors.
– Both laws take effect on January 1, 2026, with the companion bot law intended to serve as a foundation for future AI regulation.

California is taking decisive action against two emerging threats to child safety: AI companion bots and the proliferation of deepfake pornography. The state has enacted groundbreaking legislation that imposes stricter regulations on companion bot platforms and sharply increases financial penalties for creating and distributing nonconsensual AI-generated explicit content. These measures aim to protect young people from psychological harm and online exploitation, setting a new standard for digital safety.
Governor Gavin Newsom recently signed the nation’s first law specifically targeting companion bots, a move prompted by tragic incidents involving teen suicides linked to such platforms. Under the new regulations, companies offering companion bot services, including widely used systems like ChatGPT, Grok, and Character.AI, must develop and publicly share detailed protocols for identifying and responding to users who express suicidal thoughts or self-harm intentions. They must also report to the California Department of Public Health how often they direct users to crisis prevention resources, and make that data accessible on their websites. This transparency is intended to help parents and policymakers monitor concerning usage patterns and intervene when necessary.
The law also prohibits companion bots from presenting themselves as licensed therapists and mandates additional safeguards for younger users. Platforms must incorporate features such as regular break reminders for children and block access to sexually explicit imagery. These steps are designed to create a safer digital environment and reduce risks associated with prolonged or inappropriate interactions with AI systems.
In a parallel effort, California has substantially raised the stakes for those involved in producing or sharing deepfake pornography. Victims, including minors, can now seek damages of up to $250,000 per violation from any individual or entity that knowingly distributes AI-generated sexually explicit material without consent. This marks a dramatic increase from the previous caps, which allowed statutory damages of $1,500 to $30,000, or up to $150,000 in cases involving malicious intent. The enhanced penalty structure reflects growing concern over the use of AI tools to target young people with fabricated nude images, a form of cyberbullying with severe emotional consequences.
Both laws are scheduled to take effect on January 1, 2026. According to Democratic Senator Steve Padilla, who championed the companion bot legislation, these regulations establish essential protections for families navigating the challenges posed by advancing technology. He emphasized that the law provides a foundational framework for future oversight as AI continues to evolve, acknowledging that American families are engaged in an ongoing struggle to safeguard their children in an increasingly digital world.
(Source: Ars Technica)