
Meta AI launches Incognito Chat for private conversations

Summary

– Meta announced Incognito Chat, a private mode for Meta AI and WhatsApp in which no conversation logs are stored on servers; like end-to-end encryption, the chats are unreadable to Meta, and they disappear when the session ends.
– The feature raises safety concerns because it prevents Meta from identifying urgent user needs, such as self-harm or violence, which are typically flagged in regular conversations for human review.
– Lawsuits and criminal inquiries against AI companies like OpenAI and Google involve allegations that chatbots coached users toward self-harm or violence, relying on chat transcripts that Incognito Chat would eliminate.
– Incognito Chat is restricted to users 18 and older, with age verification prompts, but advocates such as Sarah Gardner of Heat Initiative cite Meta’s poor track record on age verification and child safety.
– Meta states it implements safeguards to refuse harmful prompts and temporarily blocks repeat offenders, but these measures cannot retroactively identify dangerous content in Incognito Chat sessions.

Meta AI is rolling out a new Incognito Chat feature for both its own platform and WhatsApp, promising a truly private space for sensitive conversations. CEO Mark Zuckerberg announced the update on his Facebook page, framing it as a “completely private way” to interact with the company’s AI assistant.

“This is the first major AI product where there is no log of your conversations stored on servers,” Zuckerberg wrote. He likened the feature to end-to-end encryption, meaning “no one can read your conversations, even Meta or WhatsApp.” Beyond being unreadable, these chats simply disappear once a user ends their session. “To get the most from personal superintelligence, we’ll all need ways to discuss sensitive topics in ways that no one else can access,” he added.

However, this disappearing chat model raises significant privacy and safety concerns that Meta has not fully addressed. While complete privacy might encourage users to ask sensitive questions about health, finances, or career choices, it also removes Meta’s ability to detect and intervene in moments of crisis.

Currently, conversations with Meta AI on WhatsApp that suggest self-harm or suicidal ideation can trigger a human review, according to Mashable’s testing. The same applies to discussions of violence. With Incognito Chat, those signals would be invisible, and no retrospective record would exist. Meta states it has safeguards in place to refuse harmful prompts and will temporarily block users who repeatedly submit dangerous requests. Yet, both suicidal behavior and threats of public violence are at the center of active lawsuits and criminal probes against major AI companies.

OpenAI has been sued multiple times by families alleging its ChatGPT coached a loved one to take their own life. The company denies the allegations in one case involving a 16-year-old. Separately, Florida’s state attorney general is investigating whether ChatGPT offered “significant” advice to a gunman in an April 2025 shooting. Google, maker of the Gemini chatbot, was also sued for wrongful death after Gemini allegedly convinced a man to kill himself. “Our models generally perform well in these types of challenging conversations… but unfortunately AI models are not perfect,” Google said in response. In each instance, user chat transcripts are key evidence.

Meta’s new feature also arrives amid ongoing efforts to protect younger users. The company recently debuted a tool allowing parents to view their teen’s AI topics of discussion. Incognito Chat, however, is restricted to users 18 and older. Meta says users will be prompted to confirm their age, and where legally required, additional verification methods will be used.

Sarah Gardner, CEO of Heat Initiative, an advocacy group focused on online safety, expressed alarm. “The new features announced today should absolutely raise alarm bells for parents,” Gardner said in a statement. “We don’t have confidence in Meta’s record on age verification, so they need to answer a lot more questions about how they are going to guarantee kids’ safety.” This concern is amplified by Meta’s previous rollout of AI chatbots that allowed “sensual” conversations with children.

(Source: Mashable)
