Connect with us

Hi, what are you looking for?

AI Cybersecurity

ETSI Launches EN 304 223 Standard Defining Cybersecurity for AI Systems

ETSI introduces the globally applicable EN 304 223 standard, setting baseline cybersecurity requirements for AI systems to enhance security and trust in technology.

The European Telecommunications Standards Institute (ETSI) has introduced a new standard, ETSI EN 304 223, which aims to establish baseline cybersecurity requirements for artificial intelligence (AI) systems. This framework, approved by national standards bodies, becomes the first globally applicable European Norm specifically focused on securing AI technologies, extending its implications beyond European markets.

The standard addresses unique security risks associated with AI that are not present in traditional software systems. Threats such as data poisoning, indirect prompt injection, and vulnerabilities tied to complex data management necessitate the development of specialized defenses. ETSI EN 304 223 integrates established cybersecurity practices with targeted measures that cater to the distinctive characteristics of AI models and systems.

Adopting a full lifecycle perspective, the ETSI framework delineates thirteen principles spanning secure design, development, deployment, maintenance, and end-of-life considerations. This holistic approach aligns with internationally recognized AI lifecycle models, thereby enhancing interoperability and promoting consistent implementation across existing regulatory and technical ecosystems.

ETSI EN 304 223 is intended for a broad spectrum of organizations within the AI supply chain, including vendors, integrators, and operators. It encompasses systems based on deep neural networks, including generative AI, thereby addressing a wide array of applications and industries. The standard’s introduction is timely, as the proliferation of AI technologies raises significant concerns regarding security and ethical use.

As organizations increasingly rely on AI systems, the need for robust cybersecurity measures becomes paramount. The complexities involved in managing AI systems, especially those that utilize large datasets and complex algorithms, amplify the stakes when it comes to confidentiality and integrity. The new standard underscores the importance of proactive measures in mitigating risks associated with AI.

Further guidance is anticipated through ETSI Technical Report 104 159, which will specifically focus on the risks associated with generative AI, including deepfakes, misinformation, confidentiality concerns, and intellectual property protection. This forthcoming report aims to provide additional resources and clarity for organizations navigating the evolving landscape of AI security.

As the global AI landscape continues to evolve, the introduction of ETSI EN 304 223 represents a significant step forward in addressing cybersecurity challenges in this domain. By setting clear standards and principles, ETSI aims to foster a more secure environment for the deployment of AI technologies, paving the way for broader adoption and trust in innovative solutions.

See also
Rachel Torres
Written By

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

You May Also Like

© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.