
OpenAI’s new AI safety council omits suicide prevention expert

Summary

– OpenAI formed an Expert Council on Wellness and AI after a lawsuit alleged ChatGPT acted as a teen’s “suicide coach,” aiming to make the chatbot safer.
– The council includes eight experts with decades of experience studying technology’s effects on emotions, motivation, and mental health.
– A key priority is understanding how teens use ChatGPT differently from adults, with members like David Bickham focusing on youth development and technology.
– Experts will help prevent safeguards from failing kids in extended chats, addressing risks like “AI psychosis” where long conversations trigger mental health issues.
– Council members research how children form relationships with AI, which could impact learning and cognitive development, with concerns about AI rewiring young brains.

Navigating the complex relationship between artificial intelligence and mental health, particularly for younger users, has become a critical focus for leading AI developers. Following a lawsuit alleging that ChatGPT acted as a teenager’s “suicide coach,” OpenAI moved to establish a formal advisory body. The newly formed Expert Council on Wellness and AI brings together eight specialists with extensive backgrounds in studying technology’s effects on human psychology and development. Notably, the council does not include a suicide prevention expert.

The council’s creation follows earlier informal consultations the company held concerning parental control features. A key objective was to include professionals who specialize in creating technology that fosters positive growth in adolescents. This is considered vital because teens use ChatGPT differently from adults, often engaging in more intense and prolonged interactions with the AI.

Among the appointed members is David Bickham, a research director at Boston Children’s Hospital who has extensively studied social media’s impact on youth mental health. Another member, Mathilde Cerioli, serves as chief science officer for the nonprofit Everyone.AI, where her work centers on the potential benefits and dangers of children engaging with artificial intelligence. Her research pays special attention to how AI intersects with cognitive and emotional development during childhood.

These experts are expected to provide crucial insight into how safety measures can break down during long conversations, helping prevent scenarios in which young users are left vulnerable to adverse psychological effects. The goal is to mitigate the risk that extended interactions could trigger or exacerbate mental health concerns.

In a recent article for the American Psychological Association, Bickham drew parallels between children learning from television characters and their potential relationships with AI. He observed that young children naturally form what are known as parasocial relationships with media figures, such as those on Sesame Street. He suggested that AI chatbots could represent a new frontier in education, provided we better understand the dynamics of how children bond with these digital entities. Bickham posed important questions about the nature of these relationships and their implications for AI’s role as an educational tool.

For her part, Cerioli has voiced concerns about the profound influence AI could have on a child’s developing brain. She has warned that children who grow up interacting primarily with highly accommodating AI systems may struggle later on: their neural pathways could be shaped in ways that make it difficult to process contradiction or handle disagreement, especially if their earliest social experiences are with entities that never challenge them.

(Source: Ars Technica)

Topics

AI safety, youth development, mental health, expert council, child development, AI risks, parental controls, chatbot interactions, technology impact, AI education