The European Union Agency for Cybersecurity (ENISA) publishes an assessment of standards for the cybersecurity of AI and issues recommendations to support the implementation of upcoming EU policies on Artificial Intelligence (AI).
This report provides an overview of standards – published, under development and planned – and an assessment of their coverage, with a view to identifying potential gaps.
EU Agency for Cybersecurity Executive Director, Juhan Lepassaar, declared: “Advanced chatbot platforms powered by AI systems are currently used by consumers and businesses alike. The questions raised by AI come down to our capacity to assess its impact, to monitor and control it, with a view to making AI cyber secure and robust for its full potential to unfold. Using adequate standards will help ensure the protection of AI systems and of the data those systems need to process in order to operate. I trust this is the approach we need to take if we want to maximise the benefits for all of us to securely enjoy the services of AI systems to the full.”
This report focuses on the cybersecurity aspects of AI, which are integral to the European legal framework regulating AI proposed by the European Commission last year and dubbed the “AI Act”.
What is Artificial Intelligence?
The draft AI Act defines an AI system as “software developed with one or more (…) techniques (…) for a given set of human-defined objectives, that generates outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” In a nutshell, these techniques mainly include machine learning (resorting to methods such as deep learning), as well as logic- and knowledge-based approaches and statistical approaches.
Agreeing on what falls within the definition of an ‘AI system’ is essential for the allocation of legal responsibilities under a future AI framework.
However, the exact scope of an AI system is constantly evolving, both in the legislative debate on the draft AI Act and in the scientific and standardisation communities.
Although broad in scope, this report focuses on machine learning (ML) due to its extensive use across AI deployments. ML has come under scrutiny for vulnerabilities that particularly impact the cybersecurity of an AI implementation.
AI cybersecurity standards: what’s the state of play?
As standards help mitigate risks, this study identifies existing general-purpose standards for information security and quality management that are readily applicable in the context of AI. To mitigate some of the cybersecurity risks affecting AI systems, further guidance could be developed to help the user community benefit from the existing standards on AI.
This suggestion is based on the observation that AI is, in essence, software: what applies to software security can therefore largely be applied to AI. However, the work does not end there. Other aspects still need to be considered, such as:
- a system-specific analysis to cater for security requirements deriving from the domain of application;
- standards to cover aspects specific to AI, such as the traceability of data and testing procedures.
Further observations concern the extent to which compliance with security requirements can be assessed against AI-specific horizontal standards, and the extent to which it can be assessed against vertical, sector-specific standards – a point that calls for attention.
Key recommendations include:
- Resorting to a standardised AI terminology for cybersecurity;
- Developing technical guidance on how existing standards related to the cybersecurity of software should be applied to AI;
- Reflecting the inherent features of ML in AI risk mitigation – in particular through the hardware/software components associated with AI, reliable metrics and testing procedures;
- Promoting the cooperation and coordination across standards organisations’ technical committees on cybersecurity and AI so that potential cybersecurity concerns (e.g., on trustworthiness characteristics and data quality) can be addressed in a coherent manner.
Regulating AI: what is needed?
As with many other pieces of EU legislation, compliance with the draft AI Act will be supported by standards. When it comes to compliance with the cybersecurity requirements set by the draft AI Act, additional aspects have been identified. For example, standards for conformity assessment, in particular those relating to tools and competences, may need to be further developed. The interplay between different legislative initiatives also needs to be better reflected in standardisation activities – an example being the proposal for a regulation on horizontal cybersecurity requirements for products with digital elements, referred to as the “Cyber Resilience Act”.
Building on this report, other desk research and input received from experts, ENISA is currently examining the need for, and feasibility of, an EU cybersecurity certification scheme for AI. ENISA is therefore engaging with a broad range of stakeholders, including industry, ESOs and Member States, to collect data on AI cybersecurity requirements, data security in relation to AI, AI risk management and conformity assessment.
AI and cybersecurity will be discussed in two dedicated panels:
- in the ENISA Certification Conference, on 25 May, in Athens, Greece
- in the ENISA AI Conference, on 7 June, in Brussels, Belgium.
ENISA also advocated the importance of standardisation in cybersecurity today at the RSA Conference in San Francisco, in the ‘Standards on the Horizon: What Matters Most?’ panel, alongside the National Institute of Standards and Technology (NIST).
Further information
Cybersecurity of AI and standardisation – 2023 ENISA report
Securing Machine Learning Algorithms – 2021 ENISA report
Proposal for the Cyber Resilience Act
Contact
For press questions and interviews, please contact press (at) enisa.europa.eu