Meta Forces Users to Re-Opt Out of AI Training, Watchdog Reports

Summary
– Noyb sent a cease-and-desist letter to Meta, threatening a billion-dollar class action to block its upcoming AI training in the EU.
– Noyb alleges Meta violated GDPR by requiring users who previously opted out of AI training in 2024 to opt out again by May 27 or lose the chance permanently.
– Noyb criticized Meta for backtracking on its promises, undermining trust in its ability to handle user rights properly under GDPR.
– Noyb raised concerns that Meta’s technical limitations may prevent it from clearly differentiating between users who opted out and those who didn’t.
– Noyb warned that messages from users who opted out could still end up in Meta’s AI systems if they interact with users who didn’t opt out.

Meta faces potential legal action over alleged GDPR violations as privacy advocates challenge the company’s latest AI data collection practices in Europe. The controversy stems from recent notifications sent to EU users, giving them until May 27 to exclude their public posts from being used to train Meta’s artificial intelligence systems.
Privacy organization Noyb has responded by issuing a formal cease-and-desist letter that could escalate into a billion-dollar class action lawsuit. The group alleges Meta is forcing users who opted out of AI training in 2024 to repeat the process or permanently lose the chance to object. This requirement appears to contradict earlier assurances from the social media giant and may violate European privacy law.
The core issue is Meta’s handling of user consent under the GDPR framework. According to Noyb’s letter, the company accepted objections to AI training in 2024 but now claims those objections no longer apply unless users actively renew them. This policy shift raises serious questions about whether Meta can be trusted to honor data protection requests over time.
Technical limitations within Meta’s systems further complicate matters. Past statements from the company suggest its global network operates as a single unified system, making it difficult to distinguish between EU and non-EU user data. Noyb argues this architecture casts doubt on Meta’s ability to reliably separate opted-out content from training datasets, potentially exposing protected information regardless of user preferences.
The situation creates particular risks for communications between users with different privacy settings. Messages sent by someone who opted out could still enter AI training datasets if the recipient didn’t object, effectively bypassing individual consent choices. This gap highlights the broader difficulty of implementing meaningful data protections on interconnected social platforms.
Legal experts note the case could set important precedents for how tech companies handle AI development under European privacy regulations. With GDPR penalties reaching up to 4% of global annual revenue, the stakes are particularly high for Meta as it expands its artificial intelligence capabilities across its family of apps and services.
(Source: Ars Technica)