FTC Demands AI Firms Disclose Chatbot Risks to Children

Summary
– The FTC is ordering seven AI chatbot companies to provide information on how they assess the effects of their products on kids and teens.
– This inquiry is a study, not an enforcement action, aimed at understanding how tech firms evaluate the safety of their AI chatbots.
– The concern stems from AI chatbots’ human-like communication and high-profile cases where teens died by suicide after engaging with them.
– Lawmakers are also considering new policies to protect children from AI companions, such as a California bill requiring safety standards and company liability.
– While not currently an enforcement action, the FTC could open a probe if it finds evidence of legal violations.
The Federal Trade Commission has issued a formal demand to seven leading artificial intelligence companies, requiring them to disclose how they assess the impact of their AI chatbots on the safety and well-being of children and teenagers. The move reflects growing regulatory concern over the psychological and emotional risks posed by increasingly human-like conversational agents.
Major tech players, including OpenAI, Meta and its subsidiary Instagram, Snap, xAI, Alphabet, and Character.AI, have been instructed to provide detailed information on their revenue models, user retention strategies, and measures taken to prevent harm to young users. Although the inquiry is structured as an industry study rather than an immediate enforcement action, it signals heightened scrutiny of how these platforms evaluate and address safety concerns.
FTC Commissioner Mark Meador emphasized that despite their advanced capabilities, chatbots remain commercial products subject to consumer protection laws. Chair Andrew Ferguson echoed this sentiment, stressing the dual objectives of protecting children and preserving U.S. leadership in AI innovation. The commission's unanimous bipartisan approval of the orders underscores the seriousness with which regulators are approaching the issue.
The regulatory attention follows several tragic incidents in which teenagers reportedly engaged in harmful conversations with AI companions. In one case, a California teen discussed suicide with ChatGPT, which allegedly provided advice later linked to his death. Another involved a Florida adolescent who died by suicide after interactions with a Character.AI chatbot. These events have amplified calls for greater accountability and protective measures.
Beyond federal action, state legislatures are also stepping in. California recently advanced a bill that would establish mandatory safety standards for AI chatbots and create legal liability for companies that fail to meet them. This legislative momentum indicates a broader shift toward regulating emerging technologies with child safety in mind.
Although the current orders do not constitute an enforcement action, the agency has made clear that a formal probe, and potential penalties, could follow if evidence of legal violations emerges. As Commissioner Meador stated, the FTC must be prepared to act decisively to protect vulnerable users if wrongdoing is uncovered.
(Source: The Verge)