Signal Founder Moves to Bring Encryption to Meta AI

▼ Summary
– Moxie Marlinspike’s new privacy-focused AI platform, Confer, will integrate its encryption technology into Meta’s AI systems.
– Unlike encrypted messaging apps, current AI chatbots lack end-to-end encryption, allowing companies to easily access user conversations.
– The collaboration aims to deliver the full power of AI with the confidentiality of an encrypted conversation, similar to secure messaging.
– This integration is part of a broader push for more private AI systems, as current models often use unencrypted user data for training.
– While Confer is a promising step, the technology for encrypted AI is still emerging and the specific details of the Meta collaboration remain unclear.
The founder of the secure messaging app Signal is now turning his attention to artificial intelligence, aiming to bring the same privacy protections to our conversations with chatbots. Moxie Marlinspike, the creator of the widely adopted Signal Protocol for encryption, announced this week that his new privacy-focused AI platform, Confer, will begin integrating its technology into Meta’s AI systems. This move seeks to address a growing concern: while billions of text messages are shielded by end-to-end encryption, our daily interactions with AI assistants currently lack that fundamental protection, leaving sensitive dialogues exposed.
Every day, people exchange countless messages with AI chatbots from major tech companies. Unlike the encrypted chats we have with friends, these AI conversations are typically accessible to the companies that build the models. This data is often used to further train and refine the AI systems, a practice that is usually difficult to opt out of. As AI becomes more capable and integrated into our lives, the volume of deeply personal information shared with these systems is exploding, creating significant privacy risks.
Marlinspike highlighted this urgent problem in a recent blog post. “As LLMs continue to be able to do more, we should expect even more data to flow into them,” he wrote. “Right now, none of that data is private. It is shared with AI companies, their employees, hackers, subpoenas, and governments. As is always the case with unencrypted data, it will inevitably end up in the wrong hands.” His goal with Confer is to develop a technology that delivers the full power of advanced AI while ensuring the complete confidentiality of an encrypted conversation.
This collaboration has a notable precedent. In 2016, Marlinspike worked directly with WhatsApp, which is owned by Meta, to implement end-to-end encryption for over a billion users in a single rollout. Today, WhatsApp includes a Meta AI chatbot that does not enjoy the same privacy safeguards as person-to-person chats. Will Cathcart, the head of WhatsApp, endorsed the new partnership on social media, stating, “People use AI in ways that are deeply personal and require access to confidential information. It’s important that we build that technology in a way that gives people the power to do that privately.”
The technical challenge is substantial. The cryptographic methods that secure traditional messaging are not easily adapted to the complex server-side processing that generative AI requires. Confer itself is a very new project, and specific details about how the integration with Meta will function, and what the precise technical milestones are, remain unclear. Neither Marlinspike nor Meta provided additional commentary on the immediate plans.
Despite the early stage, privacy advocates see this as a critical step forward. Cryptography researcher Mallory Knodel from New York University noted that such a development would be tremendously beneficial for users. “It would be great for people using chatbots that use Meta AI to have confidentiality and privacy within that exchange,” she said. A crucial outcome would be preventing Meta from using that AI chat data for model training. Knodel, who has studied encryption and AI, added, “I really hope more AI chatbots adopt this approach.” Her initial review of Confer suggests the platform is not yet flawless but represents an important proof of concept for building a truly private AI assistant.
(Source: Wired)