IAB Unveils New AI Transparency Framework

Summary
– The IAB has released a new AI Transparency and Disclosure Framework to guide when AI use in advertising should be disclosed to consumers.
– The framework uses a risk-based approach, requiring disclosure only when AI materially affects authenticity or representation in a way that could mislead.
– Disclosure is expected for content like AI-generated depictions of real events, synthetic voices of real people, or digital twins in fabricated scenarios.
– Routine AI uses, such as editing or optimization in background workflows, do not automatically require consumer-facing disclosure.
– The framework proposes a two-layer model with consumer-facing labels (like badges) and machine-readable metadata to ensure practical implementation across platforms.
The Interactive Advertising Bureau (IAB) has introduced a new framework designed to clarify when marketers must reveal their use of artificial intelligence, addressing growing industry uncertainty. This practical guide helps brands, agencies, and publishers manage generative AI by focusing on consumer impact and transparency, rather than enforcing universal disclosure rules. It aims to build trust by ensuring audiences are not misled about the authenticity of the content they encounter.
David Cohen, IAB’s CEO, emphasized the timing of this initiative. He stated that generative AI represents a pivotal moment, transforming workflows from concept to measurement. However, he warned that failing to establish proper transparency could erode the fundamental trust necessary for effective marketing. The framework centers on a straightforward principle: disclosure is required only when AI meaningfully alters a consumer’s perception of what they are seeing, hearing, or engaging with.
Specific scenarios demanding disclosure include AI-generated images or videos of real-world events, synthetic voices of real people saying things they never actually said, digital twins shown in fabricated situations, and conversational avatars in advertisements that mimic human interaction. The goal is to prevent deception regarding identity, authenticity, or representation.
Critically, the framework does not mandate disclosure for every AI application. Common practices like AI-assisted editing, optimization tools, or backend workflow automation do not automatically require labeling. This risk-based approach seeks to prevent consumer fatigue from excessive notifications while still safeguarding against material misinformation.
To ensure the system functions across various channels, the IAB proposes a two-layer model. The first layer is consumer-facing, using standardized labels, badges, icons, or watermarks placed near the ad content. The second is a machine-readable layer, employing metadata standards such as C2PA to facilitate technical compliance and enable transparency throughout the advertising supply chain.
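To make the two-layer idea concrete, the logic described above can be sketched in a few lines of code. This is an illustrative sketch only: the class, field names, and decision rule are hypothetical simplifications of the framework's risk-based principle, and the metadata dictionary is a toy record, not the actual C2PA manifest format or any IAB-specified schema.

```python
# Hypothetical sketch of a risk-based disclosure check and a minimal
# machine-readable metadata record. All names and fields are illustrative,
# not part of the IAB framework or the C2PA specification.

from dataclasses import dataclass
from typing import Optional


@dataclass
class AdCreative:
    uses_ai: bool
    # e.g. synthetic voice of a real person, digital twin, depiction of a real event
    depicts_real_person_or_event: bool
    # e.g. AI-assisted editing or optimization in backend workflows
    background_only: bool


def requires_disclosure(ad: AdCreative) -> bool:
    """Disclose only when AI materially affects authenticity or representation."""
    if not ad.uses_ai or ad.background_only:
        return False
    return ad.depicts_real_person_or_event


def build_metadata(ad: AdCreative) -> dict:
    """Machine-readable layer: a toy provenance record for the supply chain."""
    disclose = requires_disclosure(ad)
    label: Optional[str] = "AI-generated" if disclose else None
    return {
        "ai_generated": ad.uses_ai,
        "consumer_disclosure_required": disclose,
        "consumer_label": label,  # consumer-facing layer: badge/label text
    }


# A synthetic voice of a real person: disclosure expected.
voice_ad = AdCreative(uses_ai=True, depicts_real_person_or_event=True,
                      background_only=False)
print(build_metadata(voice_ad)["consumer_disclosure_required"])  # True

# AI-assisted retouching in a backend workflow: no automatic label.
retouch_ad = AdCreative(uses_ai=True, depicts_real_person_or_event=False,
                        background_only=True)
print(build_metadata(retouch_ad)["consumer_disclosure_required"])  # False
```

The design point mirrors the framework: the decision turns on consumer impact (authenticity and representation), not on whether AI was used at all, and the same decision feeds both the human-facing label and the machine-readable record.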
For marketing teams, this framework is more than a simple compliance checklist. It serves as a strategic tool for responsibly scaling AI adoption. As scrutiny from regulators, platforms, and the public intensifies, a shared industry standard provides clearer guidance. It allows professionals to navigate the balance between innovative speed, creative expression, and ethical responsibility without having to guess where the line should be drawn.
(Source: MarTech)