ChatGPT’s Trusted Contact alerts loved ones to safety concerns

▼ Summary
– OpenAI is introducing a “Trusted Contact” safety feature for ChatGPT that allows adult users to assign an emergency contact for mental health concerns.
– The feature will notify the designated contact if OpenAI detects that the user has discussed self-harm or suicide with the chatbot.
– The feature is optional and designed for friends, family members, or caregivers to provide an additional layer of support.
– OpenAI states the feature is based on the expert-validated premise that connecting with a trusted person during a crisis can make a meaningful difference.
– The Trusted Contact feature complements existing localized helplines already available in ChatGPT.

OpenAI has introduced a new safety feature for ChatGPT: Trusted Contact. This optional setting lets adult users designate a friend, family member, or caregiver who will be alerted if the AI detects discussions involving self-harm or suicide. The goal is to provide a direct, personal safety net during moments of crisis.
“Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference,” OpenAI stated in its announcement. The company emphasizes that this feature adds another layer of support, complementing the localized helplines already available in the platform. Instead of relying solely on automated resources, users now have the option to loop in a trusted person who can offer immediate, real-world connection.
(Source: The Verge)
