TikTok’s AI ad policy fails to stop misleading content

Summary
– The author suspects many TikTok ads use AI-generated content but often lack the required disclosure labels, making verification difficult.
– Samsung ran a TikTok ad campaign for the Galaxy S26 Ultra without AI labels, despite labeling the same AI-generated videos on YouTube.
– Both TikTok and Samsung are members of the Content Authenticity Initiative, which promotes transparency, yet their ad practices appear inconsistent with these ideals.
– TikTok’s advertising policy requires clear labeling for significantly AI-modified content, but enforcement seems inconsistent, as seen with Samsung and later with a Cazoo ad that was belatedly labeled.
– The lack of reliable, scalable technology to identify AI content, combined with poor platform-advertiser accountability, undermines transparency in a regulated industry.
Distinguishing between authentic and AI-generated advertisements on social media has become a daily challenge. I spend considerable time analyzing visual content for the subtle hallmarks of synthetic media, yet I still find myself merely suspicious of promotions in my TikTok feed. For weeks I saw no ad bearing the AI disclosure that the platform's own advertising policies mandate, leaving me without confirmation either way. The core issue is straightforward: someone knows whether this content is AI-generated and is choosing not to inform the audience. For companies that publicly endorse AI labeling initiatives, meaningful action is necessary to match their stated ideals.
Consider the case of Samsung. After the company disseminated clearly AI-generated videos across its social channels, I began seeing TikTok ads for the Galaxy S26 Ultra’s privacy display. Videos from what appeared to be the same campaign were published on YouTube with disclosures in the description noting AI tools were used. The TikTok versions, however, provided no such indication. Even non-promoted videos on Samsung’s TikTok account lacked labels, despite identical content being tagged as AI-generated on YouTube. This discrepancy is notable because both Samsung and TikTok are members of the Content Authenticity Initiative, a coalition advocating for scalable transparency through standards like C2PA. Their shared membership suggests a common commitment to labeling. If Samsung knowingly used AI, it should have disclosed this to TikTok during ad submission. If TikTok was informed, its policies obligated it to ensure user awareness.
TikTok’s business advertising policy explicitly states that advertisers may only use content “significantly” edited or created by AI if they disclose it. Acceptable methods include applying TikTok’s own AI label or adding a chosen disclaimer, caption, watermark, or sticker. The policy defines significant modification broadly, covering completely synthetic imagery, showing a subject doing something they did not do, or using AI voice-cloning to make them say something they never said. Given these clear rules, the absence of labels on prominent campaigns raises questions about enforcement.
When asked for comment, Samsung did not respond. TikTok pointed me to its published AI labeling requirements and its C2PA partnership but declined to say why Samsung's ads appeared without disclosure, so where exactly the transparency process broke down remains unclear. A more recent development involves UK-based retailer Cazoo: ads I had previously seen without any label now display a small "advertiser labeled as AI-generated" tag beside the "Ad" identifier. I had already suspected these ads were synthetic because of irrational visual distortions, such as a dentist's drill morphing shapes erratically. Whether Samsung's ads have received similar retroactive labeling is uncertain, as they have not appeared in my feed recently. Meanwhile, the state of AI transparency across Samsung's TikTok presence remains inconsistent: some videos carry TikTok's official label, others a manual disclosure in fine print, and many none at all.
No perfect technological solution exists today for reliably identifying AI-generated content at scale. Provenance systems like C2PA Content Credentials or SynthID require universal adoption to be effective, a scenario far from reality. This creates a significant problem as people grapple with discerning truth in a complex information environment. However, advertising operates under a different premise; it is a regulated industry with established rules designed to protect consumers from deception. Historical regulations, like those preventing cosmetics brands from using false lashes in mascara ads, set a precedent for honesty. Influencers have learned that audiences react poorly to dishonest promotion, and platforms face growing legal pressure. The EU, China, and South Korea have all introduced AI labeling requirements for advertisements, meaning companies risk substantial fines for non-compliance regardless of their voluntary pledges.
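To make the adoption problem above concrete, here is a minimal, hedged sketch of why provenance checks are brittle in practice. C2PA Content Credentials are embedded in the file itself (for JPEG, in JUMBF boxes carried in APP11 segments whose label includes the ASCII marker "c2pa"), so a naive byte scan can detect that a manifest is *present*. This is only an illustrative heuristic of my own, not a real validator: full verification requires parsing the manifest and checking its cryptographic signatures with a C2PA SDK, and, crucially, the marker disappears entirely when a file is re-encoded, screenshotted, or stripped on upload, which is exactly why absence of a manifest proves nothing.

```python
def has_c2pa_marker(data: bytes) -> bool:
    """Illustrative heuristic only: report whether a file's raw bytes
    contain the ASCII label "c2pa" used by C2PA JUMBF manifest boxes.
    Presence suggests a manifest was embedded; it does NOT validate it.
    Absence does NOT mean the content is authentic -- manifests are
    routinely lost when platforms re-encode uploaded media.
    """
    return b"c2pa" in data


# Hypothetical byte strings standing in for two versions of the same image:
labeled_upload = b"\xff\xd8\xff\xeb...jumb...c2pa...\xff\xd9"   # manifest marker survives
reencoded_copy = b"\xff\xd8\xff\xe0...plain jpeg...\xff\xd9"    # marker stripped in transit

print(has_c2pa_marker(labeled_upload))   # True
print(has_c2pa_marker(reencoded_copy))   # False
```

The gap between these two results is the adoption problem in miniature: a provenance label only helps if every platform in the delivery chain preserves and surfaces it.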
If major platforms like TikTok and advertisers like Samsung cannot uphold basic honesty about AI in this regulated context, it sets a dangerous precedent: it suggests anyone can advertise synthetic content without consequence. While it is encouraging to see some ad-specific labels finally appearing after direct scrutiny, disclosure should not depend on users policing their own feeds. A simple two-way process, in which advertisers declare AI use at ad submission and the platform applies the label, should be implemented robustly and enforced proactively. Trust in digital advertising depends on it.
(Source: The Verge)