New Standards to Regulate the AI Wild West

Summary
– Technology standardization struggles to keep pace with rapid advancements, but mature systems eventually achieve interoperability, as seen with email and developer tools.
– AI presents unique challenges for standardization due to its fast evolution, abstract nature, and societal impacts like deepfakes, bias, and hallucinations.
– The AI and Multimedia Authenticity Standards Collaboration (AMAS) aims to develop standards for AI-generated content, focusing on trust, authenticity, and human rights.
– Standards organizations are evolving to include non-technical experts like ethicists and legal professionals, shifting from engineering-focused approaches to broader societal considerations.
– Recent AI standards include JPEG Trust for image authenticity and Content Credentials for traceability, with ongoing efforts in digital watermarking and multimedia authentication.
The rapid evolution of artificial intelligence has outpaced traditional regulatory frameworks, leaving a pressing need for standardized guidelines to ensure ethical and secure AI development. Unlike past technologies, AI presents unique challenges—deepfakes, biased algorithms, and misinformation—that extend beyond technical hurdles into societal concerns. Recognizing this, global standards organizations are stepping up efforts to establish comprehensive AI governance.
A coalition of leading standards bodies, including the International Electrotechnical Commission (IEC), International Organization for Standardization (ISO), and International Telecommunication Union (ITU), has formed the AI and Multimedia Authenticity Standards Collaboration (AMAS). This initiative, unveiled at the recent “AI for Good” Global Summit in Geneva, aims to address AI-generated content misuse by developing protocols for transparency, trust, and accountability.
The stakes are high. Without clear standards, AI risks becoming a tool for manipulation rather than innovation. AMAS focuses on key areas such as content provenance, digital watermarking, and rights declarations, so that users can verify the authenticity of AI-generated media. For instance, the newly released JPEG Trust Part 1 standard embeds provenance metadata directly into image files, allowing an image's origin, and any subsequent alterations, to be verified.
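To make the underlying idea concrete, here is a minimal Python sketch of the pattern such standards rely on: binding a metadata record to an exact byte sequence with a cryptographic hash, so that any alteration to the content breaks verification. JPEG Trust itself defines its own embedded, signed format; the field names and helper functions below are illustrative assumptions, not the standard's actual schema.

```python
import hashlib
import json

def make_provenance_manifest(image_bytes: bytes, creator: str, tool: str) -> dict:
    """Build a minimal provenance record bound to the exact image bytes."""
    return {
        "creator": creator,
        "generator": tool,
        # Fingerprint of the content; changes if even one byte is altered.
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash: a mismatch means the image was modified."""
    return hashlib.sha256(image_bytes).hexdigest() == manifest["sha256"]

if __name__ == "__main__":
    original = b"\xff\xd8\xff\xe0...pretend JPEG bytes..."
    manifest = make_provenance_manifest(original, creator="alice", tool="gen-model-x")
    print(json.dumps(manifest, indent=2))
    print(verify(original, manifest))            # True: untouched
    print(verify(original + b"edit", manifest))  # False: altered
```

In the real standard the record travels inside the file and is cryptographically signed, so a verifier can check both integrity and who asserted the provenance; the sidecar dictionary above only demonstrates the integrity half.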
But will tech giants and enterprises embrace these standards if they potentially slow down innovation? According to Gilles Thonet, IEC’s deputy secretary-general, market access will be the driving force. “Standards aren’t just technical—they define systems,” he explains. “If AI is to be trusted, we need frameworks that span ethics, legality, and engineering.”
Historically, standards development was dominated by engineers, but today’s committees include ethicists, social scientists, and legal experts—a shift reflecting AI’s broader societal impact. Recent standards like Content Credentials and CAWG Metadata provide structured ways to document ownership and authorship, while upcoming proposals like Digital Watermarking and Trust.txt aim to fortify digital content against tampering.
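As a toy illustration of how a watermark can live inside pixel data, the sketch below uses the classic least-significant-bit technique on raw pixel bytes. The schemes under standardization are far more robust, designed to survive compression, resizing, and cropping, and none of the function names here come from any published standard; this is only a conceptual sketch.

```python
def embed_watermark(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide payload bits in the least-significant bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for this image")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels: bytearray, n_bytes: int) -> bytes:
    """Read the payload back out of the low bits."""
    bits = [pixels[i] & 1 for i in range(n_bytes * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

if __name__ == "__main__":
    image = bytearray(range(256)) * 4  # stand-in for raw grayscale pixels
    marked = embed_watermark(image, b"ai-generated")
    print(extract_watermark(marked, len(b"ai-generated")))  # b'ai-generated'
```

The visual change is imperceptible because only the lowest bit of each byte moves, but that same fragility is why production watermarks spread the signal redundantly across frequency-domain features rather than individual bits.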
The challenge lies in balancing innovation with accountability. While AI’s rapid advancements make standardization difficult, the absence of guidelines could erode public trust. As Thonet notes, “Human rights are now part of the conversation—engineers alone can’t decide what’s ethical.”
With deepfake detection, media authentication, and opt-out frameworks in development, the push for AI standards signals a critical step toward responsible technology. The question remains: Can these efforts keep pace with AI’s relentless evolution? For now, the collaboration between technologists and policymakers offers a promising path forward.
(Source: ZDNET)