Sam Altman Advocates ‘AI Privilege’ Amid OpenAI Court Order

Summary
– OpenAI has been preserving deleted and temporary ChatGPT chat logs since mid-May 2025 due to a federal court order, despite previously indicating they would be deleted.
– The court order stems from The New York Times v. OpenAI and Microsoft, where plaintiffs argue deleted chat logs may contain copyrighted content relevant to the lawsuit.
– OpenAI did not notify users about the data retention for over three weeks, later clarifying that Free, Plus, Pro, and Team users, along with non-ZDR API customers, are affected.
– OpenAI CEO Sam Altman proposed the concept of “AI privilege,” comparing AI-user conversations to confidential discussions with lawyers or doctors, and criticized the court order.
– The situation raises concerns for enterprises about data governance, legal discovery risks, and the need to verify OpenAI’s data retention practices align with internal policies.

ChatGPT users recently discovered their deleted conversations weren’t actually erased, sparking concerns about privacy and transparency. The controversy stems from a federal court order requiring OpenAI to preserve chat logs, including those users intentionally removed, as part of an ongoing copyright lawsuit filed by The New York Times.
While OpenAI offers features like temporary chats and manual deletion options, the company quietly began retaining all logs in mid-May 2025 under the court’s directive. Users weren’t notified until weeks later, prompting backlash. Social media erupted with frustration, including from developers questioning whether to switch providers over retention policies.
The legal battle centers on allegations that OpenAI’s models reproduce copyrighted content, and preserved logs, even deleted ones, could contain evidence of such outputs. OpenAI complied with the order but called the demand “baseless,” arguing that it conflicts with the company’s privacy commitments to users. In a blog post, COO Brad Lightcap clarified that Free, Plus, Pro, and Team subscribers, along with non-Zero Data Retention (ZDR) API customers, are affected; Enterprise, Education, and ZDR users remain exempt.
Sam Altman, OpenAI’s CEO, took the debate further by proposing “AI privilege”, a concept mirroring attorney-client confidentiality. He argued conversations with AI should carry legal protections similar to those with doctors or lawyers. Whether courts or policymakers embrace this idea is unclear, but it signals OpenAI’s push to redefine data rights in the AI era.
For businesses, the situation underscores the need to audit AI deployments. Even organizations using ZDR endpoints must verify that downstream systems don’t inadvertently retain data presumed ephemeral. Security teams also face added complexity: legal discovery has become a risk vector in its own right, requiring updates to threat models and vendor assessments.
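A minimal sketch of one such audit control follows, assuming the official openai Python SDK; the wrapper function, logger name, and logging policy are illustrative choices, not OpenAI features. It shows how an application can keep prompt and completion text out of its own logs so that local logging does not quietly retain data the organization assumes is ephemeral. Client-side hygiene like this does not change what OpenAI itself retains server-side under the court order.

```python
import logging

from openai import OpenAI  # official OpenAI Python SDK (v1.x)

# Hypothetical audit logger; in practice this would feed the team's SIEM.
logger = logging.getLogger("ai_audit")

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_model(prompt: str, model: str = "gpt-4o") -> str:
    """Send a prompt and log only non-content metadata.

    Illustrative wrapper: it deliberately never writes prompt or completion
    text to local logs, so application-side logging does not become an
    unexpected retention point during legal discovery.
    """
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # Log request metadata only -- never the message content itself.
    logger.info(
        "model=%s prompt_tokens=%s completion_tokens=%s",
        response.model,
        response.usage.prompt_tokens,
        response.usage.completion_tokens,
    )
    return response.choices[0].message.content
```

A control like this complements, rather than replaces, contractual ZDR terms: it addresses what the organization’s own systems persist, while the vendor’s retention obligations remain a matter of contract and, as this case shows, of court orders.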
The dispute highlights broader tensions between privacy, compliance, and innovation. As OpenAI challenges the court order, the outcome could set precedents for how AI systems handle sensitive data—and who controls it when users hit “delete.” For now, the case serves as a wake-up call: assumptions about data deletion may no longer hold under legal scrutiny.
(Source: VentureBeat)