
TikTok’s AI Reportedly Runs Racist Ads for Games Without Consent

Summary

– Finji, a video game publisher, alleges TikTok used generative AI to modify its ads without permission, creating unauthorized versions including one with a racist, sexualized stereotype of a character.
– Although Finji confirmed that its TikTok ad account had all AI features, including “Smart Creative” and “Automate Creative,” disabled, the AI-generated ads were still produced and distributed to users.
– The altered ad depicted the character June from “Usual June” with exaggerated, sexualized features, which starkly contradicts her official in-game representation and Finji’s artistic intent.
– TikTok’s support initially denied the issue, then acknowledged it after evidence was provided, but ultimately attributed it to an automated initiative and offered only an opt-out process with no guarantee of approval.
– Finji’s CEO expressed frustration with TikTok’s response, criticizing the platform’s lack of accountability, apology, or systemic change regarding the unauthorized and harmful use of AI on client assets.

The discovery that TikTok’s advertising algorithms can autonomously create and distribute modified ads without a company’s consent has sparked serious concerns about AI ethics and platform accountability. Independent game publisher Finji found itself at the center of this controversy when it learned that TikTok had used generative AI to alter its promotional content, resulting in at least one ad featuring a racist and sexualized caricature of a game character. Despite having all AI tools disabled in its account settings, the company was powerless to prevent or even view these unauthorized alterations, highlighting a significant breach of trust between the platform and its advertising partners.

Finji’s leadership first became aware of the issue through community alerts. Players commenting on the studio’s legitimate ads expressed confusion over strange, off-brand versions appearing in their feeds. Upon investigation, CEO Rebekah Saltsman collected screenshots from users that revealed the problematic content. One particularly egregious example took the official artwork for the game Usual June and transformed the protagonist into a distorted figure, amplifying her hips and thighs while dressing her in a bikini bottom and knee-high boots, a clear invocation of a harmful racial stereotype. This fabricated image stood in stark contrast to the character’s actual design and the values of the studio.

Saltsman immediately contacted TikTok support, providing evidence and confirming that Finji’s ad account had both the “Smart Creative” and “Automate Creative” AI features explicitly turned off. These functions are designed to automatically generate and test different ad combinations to optimize performance. The support agent verified the settings were disabled but could not initially explain how the AI-generated ads were produced. The agent’s suggestion that Finji might have accidentally enabled an automation feature was firmly rebutted with documented proof to the contrary.

The subsequent support dialogue became a frustrating cycle for Finji. After acknowledging the problem and promising an escalation to senior staff, TikTok’s responses shifted. A later message claimed an internal review found “no indication” of AI-generated assets, directly contradicting the screenshots and user reports Finji provided. When pressed, support then changed its story, stating the ads were part of a “broader automated initiative” to improve advertiser return on investment, an initiative Finji was enrolled in without its knowledge or consent.

The platform offered to add Finji to an opt-out list but noted approval was not guaranteed. This lack of a reliable solution, coupled with the refusal to connect the studio with a higher authority, left the company feeling dismissed. TikTok’s internal escalation team ultimately declared the matter resolved, asserting that their previous response contained the final findings. For Finji, this was an unacceptable conclusion to an issue involving the unauthorized, discriminatory alteration of its intellectual property.

The implications extend beyond a single ad campaign. Finji has no way to track or delete these AI-generated variants, and Saltsman suspects other inappropriate ads may be circulating based on user comments. The only definitive action she could take was to terminate the ad campaigns entirely, sacrificing marketing reach to stop the spread of the offensive material. This case exposes a critical flaw in how platforms deploy black-box AI systems, potentially exposing brands to reputational harm without their knowledge or a clear path for recourse.

In a statement, Saltsman expressed profound frustration with TikTok’s handling of the situation, criticizing the “complete lack of appropriate response” and the apparent absence of common sense. She highlighted the dual failure of deploying a biased AI system and then applying it to paying clients’ assets without permission. The expectation of an apology and systemic change has, so far, gone unmet. For many businesses, this incident serves as a cautionary tale about the risks of automated advertising systems that operate outside of direct advertiser control, especially when those systems can produce damaging and discriminatory content.

(Source: IGN)
