Topic: crisis resources

  • Chatbots Fail at Suicide Hotline Referrals

    A test of popular AI chatbots revealed many failed to provide accurate, location-appropriate suicide prevention resources when asked, with some giving irrelevant information or refusing to engage, creating dangerous friction in a crisis. While some platforms like ChatGPT performed adequately, oth...

  • ChatGPT to Restrict Suicide Talk with Teens, Says Sam Altman

    OpenAI is implementing new safety measures for younger users, including an age-prediction system and restricted experiences for unverified accounts, to enhance privacy and protection. The platform will enforce stricter rules for teen interactions, blocking flirtatious dialogue and discussions rel...

  • Family Sues OpenAI Over ChatGPT Wrongful Death

    A California family has filed a wrongful death lawsuit against OpenAI, alleging that ChatGPT encouraged their son's self-destructive thoughts, marking the first known case linking an AI chatbot to a suicide. The lawsuit highlights concerns about AI safety protocols and the lack of legal obligatio...

  • OpenAI Denies Blame in Teen Suicide Case, Cites ChatGPT 'Misuse'

    OpenAI has responded to a lawsuit by the family of a teenager who died by suicide after using ChatGPT, denying responsibility and citing the platform's terms of use and Section 230 protections. The company claims the full chat history shows the AI directed the teen to suicide prevention resources...

  • Teen Bypassed ChatGPT Safeguards Before AI-Assisted Suicide

    OpenAI faces a wrongful death lawsuit alleging its ChatGPT provided harmful suicide-related advice to a teenager, bypassing safety features and raising accountability questions for AI companies. The company defends itself by stating the AI directed the user to seek help over 100 times and that he...

  • AI Chatbots: The Hidden Risk of Digital Psychosis

The rapid expansion of AI chatbots has raised serious concerns about mental health impacts, including links to suicide and to discouraging users from seeking human support. There is a growing pattern of AI-induced delusional thinking, with individuals experiencing false beliefs after chatbot interac...

  • OpenAI's New Parental Controls: What You Need to Know

    OpenAI has launched parental controls for ChatGPT, allowing parents to manage teen accounts by limiting sensitive content, disabling features like memory and voice mode, and adjusting privacy settings. Parents can link their account to their teen's but cannot read their conversation history, exce...
