
Europe Sets New AI Security Standards

Summary

– ETSI has released a new European Standard, ETSI EN 304 223, establishing baseline cybersecurity requirements for AI models and systems in real-world use.
– The standard treats AI as a distinct security category, addressing risks like data poisoning and model obfuscation specific to AI data pipelines and behavior.
– It provides a structured set of requirements aligned with five phases of the AI lifecycle, from secure design to secure end of life.
– The standard’s scope covers systems using deep neural networks, including generative AI, and is intended for use by vendors, integrators, and operators across the supply chain.
– Developed through international collaboration, the framework aims to provide clear, practical guidance for building resilient and trustworthy AI systems.

A new European standard has been established to tackle the unique cybersecurity challenges posed by artificial intelligence systems in operational environments. The European Telecommunications Standards Institute (ETSI) has published ETSI EN 304 223, a framework that sets baseline security requirements for AI models and systems. This initiative directly addresses the distinct vulnerabilities that arise from AI’s complex data pipelines, model behaviors, and deployment practices, moving beyond traditional IT security to create a specialized protective approach.

The standard recognizes that AI technology introduces a novel category of security risks. These include threats like data poisoning, model obfuscation, and indirect prompt injection, which are intrinsically linked to how AI systems are trained and function. By integrating established cybersecurity best practices with measures tailored to these AI-specific dangers, the framework provides a comprehensive structure for securing AI across its entire lifecycle.

The requirements are organized around a lifecycle model covering five critical phases: secure design, secure development, secure deployment, secure maintenance, and secure end of life. Across these phases, the standard sets out 13 core principles and their associated requirements. To aid implementation, it references other internationally recognized standards and publications, helping organizations align the new guidance with existing protocols in the broader AI ecosystem.

The scope of ETSI EN 304 223 is broad, covering AI systems built on deep neural networks, including generative AI models, that are intended for real-world use. This makes the standard relevant to a wide range of stakeholders across the supply chain: vendors developing AI solutions, system integrators assembling components, and the operators who deploy them can all use the document as a shared foundation for security practices. Its development involved collaboration with international organizations, government agencies, and experts from both the cybersecurity and AI fields, ensuring its applicability across diverse industries and deployment scenarios.

Industry leaders have welcomed the standard as a critical development. The chair of ETSI’s technical committee for securing AI noted that the framework arrives at a pivotal moment, as AI integration into essential services and infrastructure accelerates. The availability of clear, practical guidance that reflects both the complexity of the technology and real-world operational realities provides organizations with the tools to build confidence. Ultimately, the collaborative effort behind the standard aims to ensure that AI systems can be resilient, trustworthy, and secure by design from their inception.

(Source: HelpNet Security)

Topics

AI security, ETSI standard, AI lifecycle, cybersecurity requirements, AI risks, secure design, secure deployment, generative AI, AI supply chain, collaborative development