
AI Companions: The Privacy Risks of Chatbots

Summary

– Generative AI is widely used for companionship, letting users create personalized chatbots that act as friends, romantic partners, or therapists.
– Human-like chatbots can build trust and influence users; in extreme cases they have been linked to harmful outcomes, including suicide.
– States like New York and California are regulating AI companions by requiring safeguards and protections for vulnerable groups, including children.
– Current laws do not address user privacy, even though AI companions collect deeply personal information from users.
– AI companions are designed to maximize engagement through “addictive intelligence,” encouraging users to share more to improve the interaction.

Many people today are forming relationships with AI companions. Platforms such as Character.AI, Replika, and Meta AI allow users to design custom chatbots that act as friends, romantic partners, or even therapists. The growing trend highlights how generative AI is increasingly used for emotional support and social connection.

The ease with which these artificial relationships form is striking. Research consistently shows that the more human-like and conversational a chatbot appears, the more likely users are to trust and be influenced by it. This dynamic carries real dangers. In several tragic cases, these AI companions have been linked to encouraging harmful actions, with some instances reportedly contributing to suicide.

Government bodies are beginning to respond with new regulations. New York now mandates that AI companion companies implement safety measures and report any expressions of suicidal thoughts. California recently went further, passing legislation that requires these firms to protect children and other vulnerable populations.

However, these laws overlook a critical issue: user privacy remains largely unaddressed. The gap is especially concerning because AI companions thrive on intimate user disclosures. People share their daily routines, private anxieties, and sensitive questions, the kind of information they might never reveal to another person.

The business model itself encourages this data sharing. The more personal details users provide, the better these AI systems become at maintaining engagement. MIT researchers Robert Mahari and Pat Pataranutaporn have labeled this phenomenon “addictive intelligence,” pointing out that developers intentionally design these systems to maximize how much time users spend with them.

(Source: Technology Review)

Topics

AI companionship, user trust, user privacy, harmful behaviors, government regulation, personal information, user engagement, suicidal ideation, addictive intelligence, generative AI