Signal Founder Tackles AI’s Privacy Challenge

Summary
– Signal’s founder is developing Confer, a fully encrypted, open-source AI chatbot designed to keep user conversations private.
– He criticizes current AI platforms like ChatGPT for feeling private while allowing companies to access and potentially use conversation data.
– The core argument is that a chatbot’s interface should match its actual privacy, as these models invite users to share intimate personal details.
– Confer encrypts prompts and responses on the user’s device and uses secure hardware environments to process data, preventing external access.
– The technology, similar to Signal’s, could potentially be adopted by major tech companies like Meta in the future.

The founder behind the popular encrypted messaging app Signal is now turning his attention to artificial intelligence, developing a new chatbot built from the ground up to protect user privacy. Moxie Marlinspike has introduced Confer, an open-source AI assistant that employs end-to-end encryption to ensure conversations remain completely confidential. This initiative addresses a growing concern that while current chatbots feel private, they often collect and potentially misuse sensitive user data.
Marlinspike has expressed strong reservations about the privacy standards of mainstream AI platforms. He draws a direct parallel to Signal’s core philosophy: an application’s interface should truthfully represent its underlying operations. Just as Signal provides a genuinely private channel for communication, an AI chatbot that presents itself as a safe space for personal thought should actually function as one. He points out that services like ChatGPT or Claude can give a false sense of security, as user inputs are frequently accessible to the companies behind them and may be used to train future models.
This issue is particularly pressing because large language models uniquely encourage personal disclosure. People naturally share their thought processes, uncertainties, and intimate details with these systems, creating a rich trove of psychological data. Marlinspike warns that this information could be exploited, potentially by advertisers seeking to manipulate behavior or by other entities in ways that harm the user. The very act of exploring ideas with an AI could, without proper safeguards, lead to those thoughts being used against the individual.
The proposed answer is Confer, designed to break this cycle. As Marlinspike describes it, the service aims to be a place where users can learn and brainstorm “without your own thoughts potentially conspiring against you someday.” The goal is to prevent a feedback loop where personal reflections become fodder for targeted advertising, which then shapes future thoughts. Instead, Confer seeks to let people engage with AI without simultaneously becoming a data point for brokers or training algorithms.
The technical approach mirrors Signal’s successful strategy. In Confer, all user conversations are encrypted on the individual’s own device before any data is transmitted. Prompts are sent to the service’s servers in this encrypted state and are only decrypted within a highly secure, isolated computing environment to generate a response. Rather than relying on vulnerable traditional passwords, the process uses passkeys, unlocked with Face ID or a device PIN, to derive the encryption keys.
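A minimal sketch of this on-device step, assuming an AES-GCM envelope and a key derived from the secret a passkey unwraps (the function names, HKDF parameters, and message format here are illustrative, not Confer’s actual protocol):

```python
# Illustrative client-side encryption: the prompt is sealed on-device,
# and only the ciphertext ever leaves the device.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(passkey_secret: bytes, salt: bytes) -> bytes:
    """Derive a 256-bit AES key from the secret a passkey unwraps."""
    hkdf = HKDF(algorithm=hashes.SHA256(), length=32, salt=salt,
                info=b"prompt-encryption")  # label is hypothetical
    return hkdf.derive(passkey_secret)

def encrypt_prompt(key: bytes, prompt: str) -> bytes:
    """Encrypt on-device; the nonce travels alongside the ciphertext."""
    nonce = os.urandom(12)  # must be unique per message
    return nonce + AESGCM(key).encrypt(nonce, prompt.encode(), None)

def decrypt_prompt(key: bytes, blob: bytes) -> str:
    """Inverse operation, usable only where the key is available."""
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, None).decode()
```

The point of the design is visible in the types: the server only ever handles the opaque `blob`, while the key material stays behind the device’s biometric or PIN gate.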
A key component is the use of confidential computing. Here, the AI model runs inside a hardware-enforced Trusted Execution Environment (TEE). This specialized environment ensures that the host server providing the computational power cannot access the model’s memory or its active processes. The AI’s “inference” or reasoning happens in this confidential virtual machine. The resulting response is then encrypted again before being sent back to the user’s device.
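The round trip through the enclave can be sketched as follows. The TEE boundary is simulated by a class and the model call is stubbed; real deployments rely on hardware such as AMD SEV-SNP or Intel TDX, and nothing here reflects Confer’s actual server code:

```python
# Illustrative confidential-inference round trip: encrypted in,
# decrypted only inside the (simulated) enclave, encrypted out.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class EnclaveSession:
    """Holds a per-session key; in a real TEE the host OS is assumed
    unable to read this object's memory or observe the plaintext."""
    def __init__(self, session_key: bytes):
        self._aead = AESGCM(session_key)

    def handle(self, blob: bytes) -> bytes:
        nonce, ct = blob[:12], blob[12:]
        # Plaintext exists only inside enclave memory.
        prompt = self._aead.decrypt(nonce, ct, None).decode()
        reply = self._run_model(prompt)  # inference happens in the TEE
        out_nonce = os.urandom(12)
        return out_nonce + self._aead.encrypt(out_nonce, reply.encode(), None)

    def _run_model(self, prompt: str) -> str:
        return f"echo: {prompt}"  # stand-in for real LLM inference

def client_round_trip(session_key: bytes, prompt: str) -> str:
    """Client side: everything crossing the wire is ciphertext."""
    aead = AESGCM(session_key)
    nonce = os.urandom(12)
    blob = nonce + aead.encrypt(nonce, prompt.encode(), None)
    resp = EnclaveSession(session_key).handle(blob)
    r_nonce, r_ct = resp[:12], resp[12:]
    return aead.decrypt(r_nonce, r_ct, None).decode()
```

The host machine in this picture is reduced to a courier: it moves encrypted blobs and supplies compute, but never sees a prompt or a response in the clear.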
To provide verifiable trust, the hardware generates a cryptographic proof called an attestation. This allows a user’s device to independently confirm that the system is operating correctly within the secure TEE, with no unauthorized access or tampering. The entire architecture is engineered to keep user data strictly private, ensuring it is never deposited into what Marlinspike critically refers to as “a data lake specifically designed for extracting meaning and context.”
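The shape of that check can be illustrated with a toy attestation verifier. Real attestation uses hardware-vendor certificate chains (for example, signed AMD SEV-SNP reports); the HMAC below is only a stand-in for that hardware signature, and the measurement label is invented for the example:

```python
# Toy attestation check: the device verifies a signed "measurement"
# of the enclave before trusting it with any key material.
import hashlib
import hmac

# Hash of the enclave image the client expects (label is hypothetical).
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-image-v1").hexdigest()

def sign_attestation(hw_key: bytes, measurement: str, device_nonce: bytes) -> bytes:
    """What the TEE hardware would produce over a fresh device nonce."""
    return hmac.new(hw_key, measurement.encode() + device_nonce,
                    hashlib.sha256).digest()

def verify_attestation(hw_key: bytes, measurement: str,
                       device_nonce: bytes, sig: bytes) -> bool:
    """Device side: the signature must verify AND the measurement must
    match the build the client expects; either failure means no trust."""
    expected = hmac.new(hw_key, measurement.encode() + device_nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig) and measurement == EXPECTED_MEASUREMENT
```

The fresh `device_nonce` prevents replaying an old attestation, and a mismatched measurement, say from a tampered enclave image, causes verification to fail before any conversation key is released.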
Given that Signal’s encrypted protocol was widely adopted, including by Meta’s WhatsApp, there is precedent for Marlinspike’s privacy-focused innovations to influence the broader tech industry. It remains a possibility that major platforms could eventually integrate similar confidential AI technology to address growing user demand for genuine digital privacy.
(Source: Gizmodo)