AI Experts Demand More Supply Chain Transparency

Summary
– Experts advocate for greater transparency in AI supply chains to address security and privacy challenges as GenAI adoption grows.
– The AI Bill of Materials (AIBOM) is proposed to document AI system components, data sources, and training methods for risk mitigation and accountability.
– SBOM adoption is increasing, with 22% of organizations currently using them, though challenges like tool diversity hinder broader implementation.
– The G7 Cybersecurity Working Group plans to develop a joint vision on AI security, including AIBOMs, by 2025, highlighting global interest in standardization.
– Organizations like the Linux Foundation, CISA, and OWASP are working on AIBOM standardization, with efforts ranging from integrating AI into SBOMs to creating dedicated guides.

The push for greater transparency in AI supply chains is intensifying as businesses worldwide adopt generative AI at unprecedented rates. With this rapid expansion come mounting concerns about security vulnerabilities and compliance risks, prompting experts to advocate for standardized documentation frameworks. Among the proposed solutions, the AI Bill of Materials (AIBOM) has emerged as a leading contender: a structured approach to cataloging the building blocks of AI systems, from training data to algorithms.
This concept borrows from the Software Bill of Materials (SBOM), a well-established cybersecurity practice that inventories software components to enhance transparency. At a recent Software Supply Chain Security Summit, industry leaders highlighted the growing adoption of SBOMs, with 22% of organizations already using them and another 4% planning to implement them soon. Despite this progress, challenges remain: nearly 80% of businesses struggle to generate SBOMs due to fragmented tools and processes.
The momentum behind AIBOMs has already reached global policymakers. Allan Friedman, a former CISA strategist, revealed that the G7 Cybersecurity Working Group has prioritized AI security, including AIBOM development, as part of its 2025 agenda. While diplomatic efforts are underway, Friedman stressed that cybersecurity professionals must take the lead in shaping practical standards. “Transparency must be meaningful,” he cautioned, warning against premature implementation without clear guidelines.
Standardization efforts are gaining traction across the industry. The Linux Foundation has integrated AIBOM support into its latest SPDX 3.0 framework, while CISA launched an AI SBOM working group to help organizations adapt existing practices. Meanwhile, OWASP is developing an operational guide for AIBOM best practices, slated for release in late 2025.
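To make the idea concrete, the sketch below shows what a minimal AIBOM record might look like in practice. It is a loose illustration modeled on SBOM-style component inventories, not the actual SPDX 3.0 AI profile or any official schema; the class and field names (`AIComponent`, `component_type`, `missing_licenses`, and so on) are hypothetical.

```python
# Illustrative AIBOM sketch -- hypothetical field names, loosely inspired
# by SBOM component inventories, NOT an official SPDX or CycloneDX schema.
import json
from dataclasses import dataclass, field, asdict


@dataclass
class AIComponent:
    name: str
    component_type: str  # e.g. "model", "dataset", "library"
    version: str
    supplier: str
    license: str


@dataclass
class AIBOM:
    system_name: str
    components: list = field(default_factory=list)

    def add(self, component: AIComponent) -> None:
        self.components.append(component)

    def to_json(self) -> str:
        # Serialize the full inventory for audit or exchange.
        return json.dumps(asdict(self), indent=2)

    def missing_licenses(self) -> list:
        # Flag components with no documented license -- the kind of
        # gap an AIBOM is meant to surface before deployment.
        return [c.name for c in self.components if not c.license]


bom = AIBOM(system_name="support-chatbot")
bom.add(AIComponent("base-llm", "model", "1.2", "ExampleLab", "Apache-2.0"))
bom.add(AIComponent("fine-tune-corpus", "dataset", "2024-05", "internal", ""))
print(bom.missing_licenses())  # → ['fine-tune-corpus']
```

Even this toy inventory shows the value proposition: once components are enumerated with provenance and licensing metadata, gaps become queryable rather than invisible.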
Debates continue over whether AI dependencies should be tracked separately or folded into existing SBOMs. Sajeeb Lohani of Bugcrowd argues for consolidation, emphasizing that “AI is fundamentally software” and should follow similar transparency protocols. As the conversation evolves, one thing is clear: without standardized visibility into AI supply chains, businesses risk exposure to unchecked vulnerabilities and regulatory pitfalls. The race to define these frameworks will shape the future of secure, accountable AI deployment.
(Source: Info Security)