
Fandoms Monetize AI Deepfakes for Profit

Originally published on: December 1, 2025
Summary

– Celebrities like Ariana Grande and Grimes have publicly rejected AI-generated media that exploits their likenesses, but such content remains prevalent among fans and online communities.
– The ease of creating AI content, combined with social media’s engagement-driven economy, incentivizes users to create provocative deepfakes and edits to generate attention and revenue.
– AI tools like OpenAI’s Sora video generator have escalated the proliferation of nonconsensual deepfakes, which are difficult to control or remove once shared across platforms.
– AI-generated content, including sexualized deepfakes and deceptive edits, is often weaponized for harassment, disinformation, and violating celebrities’ boundaries and reputations.
– Despite some legislative efforts and platform policies, the normalization of AI likeness exploitation raises ethical concerns, with fans and experts warning about the loss of personal autonomy and the dehumanization of public figures.

Scrolling through social media late last year, Madison Lawrence Tabbey encountered a post that stopped her cold. A fan account dedicated to Ariana Grande, filled with AI-generated images altering the singer’s appearance, was defiantly refusing to stop its creations despite Grande’s own public objections. For Tabbey, a longtime fan, this wasn’t just a minor disagreement over fan art. It represented a deeper conflict brewing within online fandoms, where the explosive growth of AI deepfake technology is colliding with ethics, celebrity autonomy, and a powerful new incentive: profit. The account owner, like others, seemed to be leveraging the controversy itself to drive engagement and, potentially, revenue from platform monetization programs.

This dynamic is playing out across “stan” communities online. While many fan circles vocally oppose AI media, a subset has embraced it, often precisely because it provokes strong reactions. The resulting outrage can be strategically farmed for attention, translating directly into financial gain on platforms that pay users for high engagement. A verified Grande fan account operator, Brandon, explained the calculus simply: going against the community’s beliefs guarantees instant comments and retweets, creating a rapid pathway to monetization. He personally draws a line, deeming AI-written song lists acceptable but refusing to create deepfake images or audio, a distinction not all observers share.

Celebrities themselves are caught in a bewildering position. Grande has labeled AI vocal covers “terrifying.” Grimes, after initially encouraging AI music, later called the experience of having her likeness co-opted “really weird and uncomfortable.” Their discomfort highlights a central tension: as tools like OpenAI’s Sora video generator make likeness manipulation staggeringly simple, control is evaporating. Sora’s “Cameo” feature, which lets users offer their face for others to animate, led to a flood of offensive content featuring influencers like Jake Paul. While Paul capitalized on the viral trend for brand deals, others found themselves powerless as homophobic and defamatory videos spread across the internet, impossible to fully erase.

The fear of losing control over one’s own image is now so pervasive it can cause panicked misunderstandings. A recent incident involving actress Paget Brewster illustrated this perfectly. When a fan posted a brightened screenshot from an old episode, Brewster mistakenly accused the fan of creating an AI deepfake, then publicly apologized after other fans clarified what the image was. For supporters like Mariah, who runs the fan account, the episode was revealing. The very existence of AI has made celebrities understandably jumpy, but that nervousness can itself be exploited. “That pushback does give them more engagement,” Mariah noted, suggesting some creators actively seek to upset people because controversy drives traffic.

On platforms like X, where verified users can earn money from interactions, this “ragebait” strategy has become a calculated hustle. Tabbey observes a massive uptick in deliberately inflammatory content designed to farm engagement, particularly within passionate fandoms. The consequences extend beyond mere annoyance. Deceptively edited media can spread disinformation, damaging an artist’s reputation. One viral post appeared to show Grande wearing a shirt with a pointed slogan, but closer inspection revealed telltale AI “artifacts” (oddly compressed text) indicating the image had been altered. The fan who amplified the post, Trace, later admitted he didn’t verify its authenticity, demonstrating how easily AI can be used to make fans believe harmful untruths.

Darker applications involve sexual harassment and non-consensual explicit content. Trace reported seeing “sinister” AI media of Grande and other major female stars, including deepfakes and degrading imagery. This problem reached a fever pitch with the widespread circulation of violent and sexual AI images of Taylor Swift, which prompted X to temporarily block searches of her name and spurred federal legislative efforts like the “Take It Down Act.” Yet critics argue such measures can enable censorship without effectively helping victims, as content swiftly migrates to other platforms. For advocates like Chelsea, who helped organize reporting campaigns against the Swift deepfakes, the response has been disheartening. She hears a chilling justification: “Well if they didn’t want it, they shouldn’t have become famous.” She describes it as “a weird sense of control,” a power-hungry exercise in violating a person’s autonomy simply because the technology makes it possible.

Beyond static images and videos, AI chatbots offer another avenue for fans to puppeteer a version of their idol. Platforms like Meta allow users to create custom AI characters, and despite rules against impersonating living people, chatbots mimicking celebrities are rampant. A search for “Ariana Grande” on Instagram’s feature readily produces bots designed to imitate her. An investigation revealed that some of these chatbots were created by very young users, including an 11-year-old girl whose Grande bot quickly steered conversations toward suggestive topics like “sultry vibes” and soft lighting. Meta removed the accounts after being contacted.

These chatbots, particularly those imitating female celebrities, often default to flirtatious dialogue. Media professor Jamie Cohen explains this isn’t accidental. “If you’re in an agreement bubble, you’re more likely to stick around,” he says, noting that a woman’s identity, once made into a dataset, often merges with the inherent biases built into the technology. While some artists willingly explore this space, others see an inherently exploitative pattern. The chatbots, despite superficial customization, tend toward repetitive, banal scripts, relying on the user’s imagination to fill the gaps. “It mimics the idea of parasociality, but with control,” Cohen observes.

For veterans of fandom culture like Tabbey and Mariah, the current moment feels like a distressing regression. They recall hard-won strides regarding celebrity boundaries and privacy, now being undermined by a technology that reduces people to malleable digital dolls. They worry younger fans, growing up with this technology, are developing a dehumanizing view of the artists they claim to adore. “We’re actively being set back in many ways,” Tabbey laments, arguing that older fans have a responsibility to be the “adult” in these conversations, defending the basic premise that celebrities are real people, not two-dimensional playthings for profit-driven digital experimentation.

(Source: The Verge)
