The AI Act: Key Takeaways for Cybersecurity Compliance
The EU Artificial Intelligence Regulation requires providers of AI systems to have risk-appropriate cybersecurity compliance measures in place to deal with AI threats. Our DEG Group take a closer look at the detail.
What Does the AI Act Mean for Cybersecurity Compliance?
The EU Artificial Intelligence Regulation (the “AI Act”) requires providers of AI systems to have risk-appropriate cybersecurity compliance measures in place to deal with AI threats. The requirements of the AI Act build on existing security obligations in (i) the General Data Protection Regulation (Regulation (EU) 2016/679) (“GDPR”), (ii) the Data Protection Act 2018 (“DPA 2018”) and (iii) other sector-specific legislation.
The Network and Information Security Directive (NIS2), the Cyber Resilience Act, the Digital Operational Resilience Act (DORA) and the EU Cybersecurity Act are each aimed at strengthening the EU’s cybersecurity regulatory framework. The AI Act is the latest in a suite of legislation generated at EU level, designed to harmonise cybersecurity compliance across the 27 EU member states.
Some legislation, such as DORA, imposes targeted cybersecurity obligations on specific sectors. Financial entities regulated by the Central Bank of Ireland and in-scope information and communication technology (ICT) service providers are subject to DORA. Regulated entities are likely to recognise similarities between a number of key DORA requirements, the AI Act and existing Central Bank guidance on IT and cybersecurity risks, outsourcing and operational resilience. The good news is that in-scope entities can leverage existing compliance measures to meet the new ones.
The practical implication of the AI Act from a cybersecurity viewpoint is that actions previously considered prudent from a corporate governance perspective will shortly be a legal requirement.
Failure to implement the measures will expose in-scope businesses to significant enforcement action and penalties, as well as reputational harm. Examples of compliance measures that will now be a legal obligation include:
- the establishment of a written cyber incident response plan or policy;
- the conduct of periodic cyber risk assessments, including (importantly) for third party vendors; and
- the performance of targeted cybersecurity vulnerability assessments.
What Stage of the Legislative Process is the AI Act at?
On 21 May 2024, the EU Council approved the AI Act. This marks the final step in the legislative process, following the European Parliament’s approval of the landmark law on 13 March 2024 after extensive negotiations with EU Member States. The final text of the AI Act will be published in the coming weeks in the Official Journal of the EU. The AI Act is expected to come into effect before the end of June 2024.
How Does the AI Act Categorise Risk and What are the Penalties in Relation to Each Category?
The AI Act takes a risk-based approach, categorising AI systems into four tiers: unacceptable risk (prohibited), high risk, limited risk and minimal risk. Penalties scale with the gravity of the infringement, rising to €35 million or 7% of total worldwide annual turnover (whichever is higher) for the most serious breaches. High-risk AI systems will generate significant compliance challenges, including cybersecurity compliance challenges. There are seven overarching compliance requirements for high-risk AI systems provided for in the AI Act, as follows:
- Risk management system
- Accuracy, robustness and cybersecurity
- Data and data governance
- Human oversight
- Transparency and provision of information to users
- Record keeping
- Technical documentation
5 Top Tips for Business
1. Map and document the AI systems used in your business;
2. Establish whether your business is one or more of the following - a provider, deployer, importer or distributor of AI systems;
3. Establish an AI/cybersecurity/digital economy implementation team with responsibility for relevant legal obligations;
4. Risk assess the business’s AI and cyber risk maturity – identify the gaps and plan mitigation steps; and
5. Assess third party risk.
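For businesses starting the mapping and gap-analysis work in tips 1 and 4, a simple internal inventory can make the exercise concrete. The sketch below is purely illustrative: the record fields, risk-tier labels and control names are our assumptions for the example, not terms or checklists defined by the AI Act, and the control set is drawn loosely from the compliance measures discussed above.

```python
from dataclasses import dataclass, field

# Illustrative risk tiers loosely mirroring the AI Act's risk-based approach.
# These labels are assumptions for the sketch, not legal definitions.
RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (tip 1)."""
    name: str
    role: str                 # e.g. "provider", "deployer", "importer", "distributor"
    risk_tier: str            # one of RISK_TIERS
    controls_in_place: set = field(default_factory=set)

# Hypothetical control names based on the measures discussed in this article.
REQUIRED_HIGH_RISK_CONTROLS = {
    "incident_response_plan",
    "periodic_risk_assessment",
    "third_party_vendor_assessment",
    "vulnerability_assessment",
}

def control_gaps(record: AISystemRecord) -> set:
    """Return the required controls not yet in place for a high-risk system (tip 4)."""
    if record.risk_tier != "high":
        return set()
    return REQUIRED_HIGH_RISK_CONTROLS - record.controls_in_place

# Example: a deployed system with only an incident response plan in place.
chatbot = AISystemRecord(
    name="customer-support-chatbot",
    role="deployer",
    risk_tier="high",
    controls_in_place={"incident_response_plan"},
)
print(sorted(control_gaps(chatbot)))
```

Keeping the inventory and gap list in a structured form like this also gives the implementation team (tip 3) a single artefact to update as mitigation steps are completed.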