
Deepfake Influencers Are Selling Supplements Online

Originally published on: March 17, 2026
Summary

– An AI-generated persona named “Melanskia” is used to sell the Modern Antidote wellness supplement, representing a trend of synthetic influencers in marketing without clear disclosure.
– Researchers warn that realistic AI personalities blur marketing and authenticity, with studies showing people overestimate their ability to detect AI, increasing deception risks.
– The wellness market is particularly vulnerable to this AI-driven deception, as authenticity heavily influences sales and AI allows cheap, efficient testing of countless digital spokespeople.
– Regulators are responding with laws requiring AI content disclosure, and data shows brand partnerships with AI influencers have recently declined.
– The owner of Modern Antidote predicts AI-generated content will soon become so commonplace that distinguishing real from AI won’t matter.

The rise of AI-generated personalities selling health supplements online represents a significant shift in digital marketing, raising serious questions about consumer trust and regulation. These synthetic influencers, often portrayed as relatable wellness gurus, are deployed without clear disclosure, making it difficult for audiences to distinguish authentic recommendations from sophisticated advertising. One prominent example is “Melanskia,” an entirely fictional persona presented as an Amish woman who criticizes processed foods while promoting a $50 detox powder. The avatar is part of a campaign for Modern Antidote, a wellness brand that relies heavily on fabricated personas across Instagram, TikTok, and Facebook. The brand’s other characters include a Tibetan monk and a series of near-identical muscular middle-aged men.

The owner of Modern Antidote, Josemaria Silvestrini, describes this approach as a “game changer,” emphasizing that artificial intelligence is transforming every aspect of the business. This trend highlights a broader movement within the wellness industry, where perceived authenticity and personal identity are crucial for driving sales. By using AI, companies can inexpensively test countless digital spokespeople until they find one that connects with a target audience, a process hailed for its tremendous efficiency. However, this efficiency comes with considerable risk. Researchers caution that as AI-generated faces and personalities become more realistic, the line between marketing and genuine content blurs dangerously.

A study published in the British Journal of Psychology in February reveals a troubling confidence gap: individuals often overestimate their ability to spot AI-generated faces, leaving them susceptible to deception as the technology advances. Timothy Caulfield, research director at the University of Alberta’s Health Law Institute, notes that this vulnerability is particularly acute in the wellness market. The potential for misuse is clear, as synthetic influencers can be crafted to exploit specific cultural or community aesthetics, like the Amish or Tibetan monk archetypes, to build false trust.

In response to these developments, regulators are starting to act. Several states have enacted laws requiring the disclosure of AI-generated content, aiming to provide consumers with necessary transparency. Meanwhile, the commercial appeal of artificial influencers may already be encountering some limits. Data from the influencer-marketing platform Collabstr indicates that brand partnerships with AI social accounts fell by approximately 30% in the first eight months of 2025 compared to the same period the previous year. This decline suggests that brands or audiences might be experiencing a degree of skepticism or fatigue.

Despite these regulatory and market signals, proponents like Silvestrini believe resistance is temporary. He predicts that distinguishing real from AI-generated content will soon cease to be a primary concern for consumers, as synthetic media becomes overwhelmingly commonplace. This normalization poses a fundamental challenge for consumer protection, suggesting that the future of online influence may be dominated by convincing digital entities whose primary purpose is to sell products under the guise of authentic human experience.

(Source: Newser)

Topics

AI influencers · wellness marketing · digital deception · synthetic media · consumer vulnerability · marketing efficiency · regulatory disclosure · AI regulation · authenticity perception · social media marketing