Meta Defends AI Lawsuit, Claims Downloaded Porn Was ‘Personal Use’

Summary
– Meta asked a US court to dismiss a lawsuit alleging it illegally torrented pornography to train AI, arguing the claims are baseless.
– Strike 3 Holdings accused Meta of downloading its adult films using corporate IP addresses and a hidden network, seeking over $350 million in damages.
– Meta claimed there is no evidence it directed or was aware of the downloads and labeled Strike 3 as a “copyright troll” relying on guesswork.
– The company stated that the downloads occurred years before its AI research began and that its terms of service prohibit generating adult content, making AI training implausible.
– Meta argued the small, intermittent download pattern suggests personal use by individuals, not a coordinated effort for AI training.
Meta is urging a US district court to dismiss a lawsuit accusing the company of illegally downloading pornography to advance its artificial intelligence projects. The legal action was initiated by Strike 3 Holdings, which says it traced unauthorized downloads of its adult films to Meta’s corporate IP addresses. According to reports, the plaintiff also alleged that Meta employed a “stealth network” of 2,500 hidden IP addresses to conceal additional downloads. Strike 3 contended that Meta was secretly training an adult-oriented version of its AI model, known as Movie Gen, and sought damages potentially exceeding $350 million.
In its motion to dismiss, Meta sharply criticized the allegations, describing them as built on “guesswork and innuendo.” The company pointed out that Strike 3 has frequently been labeled a “copyright troll” known for filing what Meta called extortive lawsuits. Meta insisted there is no proof it directed or was even aware of the illegal downloading of roughly 2,400 adult movies owned by Strike 3. Furthermore, Meta stated that Strike 3 provided “no facts to suggest that Meta has ever trained an AI model on adult images or video, much less intentionally so.” A company spokesperson went further, telling reporters, “These claims are bogus.”
The alleged downloads reportedly took place over a seven-year period starting in 2018. Meta emphasized that its formal AI research into Multimodal Models and Generative Video did not begin until about 2022, making it highly unlikely the downloads were intended for AI training. The company also highlighted what it called a “glaring” defect in the plaintiff’s argument: Meta’s own terms of service explicitly prohibit generating adult content. This policy, Meta argued, contradicts the idea that such material would be useful for its AI development.
Instead, Meta argued that the evidence points to the flagged content being downloaded for “private personal use.” The activity linked to its corporate IP addresses and employees amounted to only a few dozen titles per year, downloaded intermittently and one file at a time. According to Meta’s legal filing, “The far more plausible inference to be drawn from such meager, uncoordinated activity is that disparate individuals downloaded adult videos for personal use.”
To put the scale in perspective, Meta contrasted this case with lawsuits brought by book authors whose works were included in the enormous datasets used for AI training. The activity on Meta’s corporate network allegedly amounted to only about 22 downloads annually, far from what the company described as the “concerted effort to collect the massive datasets Plaintiffs allege are necessary for effective AI training.”
(Source: Wired)
