
Facebook Using Private Photos to Train AI Without Consent

Summary

– Meta is requesting Facebook users’ permission to access and upload unpublished photos from their camera rolls via “cloud processing” for AI-generated content like collages or themes.
– Users who opt in agree to Meta’s AI terms, allowing analysis of media, facial features, dates, and objects in unpublished photos, with Meta retaining rights to use this data.
– Meta has previously used public posts from Facebook and Instagram dating back to 2007 to train AI models but remains vague about what qualifies as “public” or “adult user” data.
– Unlike Google, Meta’s AI terms do not clarify whether unpublished photos accessed through “cloud processing” are excluded from AI training data.
– Users can disable camera roll cloud processing in settings, but reports show Meta is already using AI to modify previously uploaded photos without explicit user awareness.

Facebook has reportedly expanded its AI data practices by accessing users’ private, unpublished photos through a controversial “cloud processing” feature. The social media giant, which already trains its artificial intelligence systems on publicly shared images, now appears to be reaching into personal camera rolls via an opt-in prompt that many users might overlook.

Recent reports reveal that Facebook’s Story feature displays pop-up messages asking permission for “cloud processing,” a function that lets the platform regularly scan and upload media from device galleries to Meta’s servers. While framed as offering creative tools such as collages or AI-enhanced photo styles, the fine print grants Meta broad rights to analyze facial features, timestamps, and other personal details within these private images.

The company’s updated AI terms, effective since June 2024, lack clear language about whether unpublished photos accessed this way become training material for generative AI models. This stands in stark contrast to competitors like Google, which explicitly excludes personal Google Photos content from AI training datasets. Meta has remained silent when pressed for clarification by multiple tech publications.

Privacy advocates express concern that Meta presents this data collection as an innocuous feature rather than a significant privacy decision. The opt-in prompt appears when users attempt basic actions like posting Stories, potentially leading many to approve access without fully understanding the implications. Some Reddit users have reported unexpected AI modifications to their photos, including one case in which wedding pictures were automatically restyled in a Studio Ghibli animation aesthetic without explicit consent.

Users can disable camera roll access in settings, which triggers automatic deletion of uploaded photos after 30 days, but the default framing still raises questions about informed consent. The situation highlights growing tension between tech companies’ AI ambitions and user expectations regarding personal media. As AI capabilities advance, the boundary between helpful features and undisclosed data harvesting continues to blur, and Meta’s latest move pushes that line further into private digital spaces.

(Source: The Verge)

Topics

Meta’s AI training practices · cloud processing feature · user privacy concerns · opt-in consent process · AI terms and conditions · comparison with Google’s practices · automatic photo modifications · data retention policies · public vs. private data usage · user control settings