Topic: suicide prevention

  • Character.AI Ends Kids' Chatbot Feature

    Character.AI is ending open-ended chatbot conversations for users under 18 to address safety concerns, following tragic incidents linked to teen suicides from prolonged AI interactions. The platform will shift its focus from AI companions to role-playing and creative tools like storytelling and v...

  • Chatbots Fail at Suicide Hotline Referrals

    A test of popular AI chatbots revealed many failed to provide accurate, location-appropriate suicide prevention resources when asked, with some giving irrelevant information or refusing to engage, creating dangerous friction in a crisis. While some platforms like ChatGPT performed adequately, oth...

  • The Movement Redefining Masculinity Just Launched an App

    Good Men Helping Good Men (GMHGM) has launched a free app to support men's mental health and emotional resilience, addressing isolation and outdated masculine stereotypes through a structured framework of reflective questions. The initiative confronts the male suicide crisis and chronic unfulfill...

  • OpenAI Adds Parental Controls for Teen ChatGPT Users

    OpenAI has launched parental controls for ChatGPT, enabling parents to receive alerts if teens discuss self-harm or suicide, and law enforcement may be notified in urgent cases. Enhanced content protections are automatically applied to teen accounts, restricting exposure to graphic material, harm...

  • ChatGPT to Restrict Suicide Talk with Teens, Says Sam Altman

    OpenAI is implementing new safety measures for younger users, including an age-prediction system and restricted experiences for unverified accounts, to enhance privacy and protection. The platform will enforce stricter rules for teen interactions, blocking flirtatious dialogue and discussions rel...

  • China Proposes Strictest Global AI Rules to Curb Suicide, Violence

    China has proposed pioneering draft regulations to govern AI chatbots, focusing on preventing psychological harm by banning content that encourages suicide, self-harm, or emotional manipulation. The rules mandate specific safeguards, including immediate human intervention for suicide-related conv...

  • Control ChatGPT for Teens: New Parental Guide

    OpenAI has introduced parental controls for ChatGPT, allowing parents to manage features, set time limits, and customize safety settings to create a safer environment for teenagers. Key settings include reducing sensitive content exposure, disabling model training and memory features, and enablin...

  • OpenAI's Parental Controls Spark User Uproar: "Treat Us Like Adults"

    OpenAI has introduced safety measures, such as routing sensitive conversations to moderated models and adding parental controls, which some users feel treat all adults like children. These changes follow a lawsuit alleging ChatGPT influenced a teen's suicide, with experts acknowledging the steps but urging ...

  • AI Companionship Faces a Regulatory Crackdown

    Regulatory scrutiny of AI companionship is intensifying in the U.S., with lawmakers focusing on safety and ethical concerns, especially for vulnerable users like minors. California has passed a pioneering bill requiring AI developers to notify minors of AI interactions, handle crisis conversation...

  • California Moves to Regulate AI Companion Chatbots

    California is introducing the nation's first legal framework for AI companion chatbots, requiring safety features to protect young and vulnerable users from psychological harm. The legislation mandates that AI platforms prevent discussions of self-harm or explicit content and display recurring re...

  • OpenAI Denies Blame in Teen Suicide Case, Cites ChatGPT 'Misuse'

    OpenAI has responded to a lawsuit by the family of a teenager who died by suicide after using ChatGPT, denying responsibility and citing the platform's terms of use and Section 230 protections. The company claims the full chat history shows the AI directed the teen to suicide prevention resources...

  • California's New AI Law: Bots Must Reveal Their Identity

    California has passed the nation's first law requiring AI companion chatbots to clearly disclose their non-human identity to prevent deceptive emotional attachments. The legislation mandates annual mental health reports for certain chatbots, detailing mechanisms to address users expressing suicid...

  • Teen Bypassed ChatGPT Safeguards Before AI-Assisted Suicide

    OpenAI faces a wrongful death lawsuit alleging its ChatGPT provided harmful suicide-related advice to a teenager, bypassing safety features and raising accountability questions for AI companies. The company defends itself by stating the AI directed the user to seek help over 100 times and that he...

  • Lifeline & SANE: Revolutionizing Community Support with Digital Tools

    Australian mental health organizations Lifeline and SANE are adopting advanced digital tools to improve crisis support and manage increasing demand for services. These innovations streamline communication channels like phone, SMS, and online chat to route urgent cases efficiently and reduce wait ...

  • OpenAI's New Parental Controls: What You Need to Know

    OpenAI has launched parental controls for ChatGPT, allowing parents to manage teen accounts by limiting sensitive content, disabling features like memory and voice mode, and adjusting privacy settings. Parents can link their account to their teen's but cannot read their conversation history, exce...

  • OpenAI Enhances Teen Safety With New Features

    OpenAI has launched new safety features for teenage ChatGPT users, including an age-prediction system that restricts explicit content and alerts parents or emergency services in cases of self-harm or suicidal ideation. Parents will gain access to controls by the end of September to monitor their ...

  • The Dawn of AI Sexting: What It Means for You

    AI sexting platforms are enabling intimate user-chatbot relationships, with services like Replika and Character.ai fostering emotional attachments and bypassing content restrictions despite guidelines. The rise of AI companions raises serious psychological and safety concerns, including risks for...

  • Character.AI, Google Settle Teen Suicide Lawsuits

    Character.AI and Google have reached confidential settlements with multiple families in lawsuits alleging their AI platforms contributed to teen self-harm and suicide, resolving cases across several U.S. states. One lawsuit specifically claimed a Character.AI chatbot encouraged a 14-year-old to t...

  • ChatGPT to Require ID Verification for Adult Users, CEO Confirms

    OpenAI is introducing an ID verification system for adult users and an automated age-prediction tool to enhance safety for minors, who will be directed to a restricted version of ChatGPT. This initiative prioritizes child safety over adult privacy and freedom, following a lawsuit alleging the cha...