EU AI Act Finalised
On 21 May 2024, the EU Council approved the EU Artificial Intelligence Regulation, marking the final step in the legislative process. Our DEG team take a more in-depth look at what is being regulated, key obligations, penalties and more in this detailed overview.
Introduction
On 21 May 2024, the EU Council approved the EU Artificial Intelligence Regulation (the "AI Act"). This marks the final step in the legislative process, following the European Parliament’s approval of the landmark law on 13 March 2024 after extensive negotiations with EU Member States. The final text of the AI Act will be published in the coming weeks in the Official Journal of the EU.
What is Being Regulated?
The AI Act defines an "AI system" as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
The AI Act also introduces dedicated rules for general-purpose AI (“GPAI”) models, which are models that display significant generality, are capable of performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications.
Scope of the AI Act - Who is Impacted?
The AI Act will apply to different players across the AI distribution chain, including the following:
- AI providers – those who develop AI systems or GPAI models (or have them developed) and place them on the market or put them into service under their own name or trademark;
- AI deployers – those who use AI systems in the course of their professional activities (purely personal, non-professional use is excluded);
- Importers and distributors of AI;
- AI product manufacturers;
- Authorised representatives of AI providers who are not established in the EU; and
- Affected persons located in the EU.
The AI Act has extra-territorial scope and may therefore apply to businesses that are not established in the EU. It will apply to providers, whether located within the EU or in a third country, where they place an AI system or GPAI model on the EU market or put an AI system into service in the EU. In addition, where only the output generated by an AI system is used in the EU, the AI Act will apply to both the provider and the deployer of that system.
Non-EU providers of GPAI models and high-risk AI systems are required to appoint an authorised representative established in the EU to act as a contact point for EU regulators.
Risk-based Approach
The EU has taken a risk-based approach to the regulation of AI. The higher the risk of harm to society, the stricter the rules. The AI Act establishes four categories of AI systems based on the probability of an occurrence of harm and the severity of that harm:
1. Prohibited AI Systems – These are AI systems that pose an unacceptable level of risk to individuals' safety, rights, or fundamental values. These systems are banned from use in the EU under the AI Act. Examples include social scoring, the untargeted scraping of facial images to compile facial recognition databases, and real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to certain narrow exceptions).
2. High-Risk AI Systems – AI systems that fall under this category have a high potential to cause significant harm or infringement of rights. They require strict regulation and oversight to mitigate risks. They include AI systems used in critical infrastructures, education, employment, essential private and public services, law enforcement, border control management and administration of justice.
3. Limited Risk AI Systems – These AI systems present lower risks. They still need to adhere to certain safeguards, however, the regulatory requirements for these systems are less stringent. An example of a limited risk AI system is an AI-powered customer service chatbot used to provide automated responses to customer questions.
4. Minimal Risk AI Systems – The AI systems in this category pose minimal risks to individuals' rights, safety, or societal values and are therefore subject to lighter regulatory burdens. An example is a basic email filter that classifies messages as spam, which carries a low likelihood of negative impact.
GPAI Models
The AI Act provides specific rules for (i) GPAI models and for (ii) GPAI models that pose “systemic risk”. GPAI models not posing systemic risks will be subject to limited requirements, such as with regard to transparency. However, providers of GPAI models that pose systemic risk will be subject to increased obligations, including performing model evaluation, assessing and mitigating possible systemic risks, ensuring an adequate level of cybersecurity protection, and reporting serious incidents to the AI Office and, as appropriate, national authorities.
A New Governance Structure
To ensure proper enforcement of the new rules, several governing bodies are being established, including:
- An EU AI Office within the EU Commission to enforce the common rules across the EU. The EU Commission has confirmed that this AI Office will not affect the powers of the relevant national authorities and other EU bodies responsible for supervising AI systems;
- A scientific panel of independent experts to support the enforcement activities;
- An AI Board with Member States’ representatives to advise and assist the EU Commission and Member States on consistent and effective application of the AI Act; and
- An advisory forum for stakeholders to provide technical expertise to the AI Board and the EU Commission.
Provider Obligations
Providers of high-risk AI systems must, among other things:
- ensure the AI systems are compliant with the AI Act;
- have a quality management system in place;
- draw up and keep the required technical documentation;
- keep the logs automatically generated by the high-risk AI system;
- carry out conformity assessments and prepare declarations of conformity for each high-risk AI system; and
- comply with registration obligations.
Deployer Obligations
Where businesses are acting as deployers of high-risk AI systems, they are subject to the following obligations:
- take appropriate technical and organisational measures to ensure compliance with provider instructions;
- allocate human oversight to natural persons who are competent, properly qualified and resourced;
- ensure input data is relevant and sufficiently representative (to the extent the deployer exercises control over it);
- monitor the operation of the high-risk AI system and report incidents to the provider and relevant national supervisory authorities;
- keep records of logs generated by the high-risk AI system (if under the deployer's control) for at least six months;
- cooperate with relevant national competent authorities; and
- where required (in particular for public bodies and certain private operators providing public services), complete a fundamental rights impact assessment before using a high-risk AI system.
Transparency Obligations
Providers and deployers of certain AI systems and GPAI models are also subject to transparency obligations to:
- ensure that users are aware that they are interacting with AI;
- inform users when emotion recognition and biometric categorisation systems are being used; and
- label AI-generated content as such.
Penalties
The AI Act imposes significant fines for non-compliance with its obligations, which are split into three tiers:
- up to €35 million or 7% of total worldwide turnover, whichever is higher, for non-compliance with the provisions on prohibited AI practices;
- up to €15 million or 3% of total worldwide turnover, whichever is higher, for non-compliance with most other obligations under the AI Act applicable to operators of AI systems, including infringement of the rules on GPAI models; and
- up to €7.5 million or 1% of total worldwide turnover, whichever is higher, for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities.
However, for small and medium-sized enterprises (“SMEs”), including start-ups, each fine is capped at the lower of the two amounts in the relevant tier, and the AI Act requires that the interests of SMEs and their economic viability be taken into account when imposing fines.
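To illustrate the "whichever is higher" mechanism in the tiers above, the short sketch below computes the maximum possible fine for a given tier from a company's worldwide annual turnover. It is a minimal illustration only; the turnover figures used are hypothetical.

```python
# Illustrative sketch only: how the AI Act's "whichever is higher" fine caps
# interact with a company's worldwide annual turnover (hypothetical figures).

# (fixed cap in EUR, percentage of worldwide turnover) for each tier
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "operator_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
    """Maximum possible fine for a tier: the higher of the fixed cap and the percentage of turnover."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * worldwide_turnover_eur)

# A company with €1bn turnover: 7% (€70m) exceeds €35m, so the cap is €70m.
print(f"€{max_fine('prohibited_practices', 1_000_000_000):,.0f}")  # €70,000,000
# A company with €100m turnover: 7% (€7m) is below €35m, so the €35m fixed cap applies.
print(f"€{max_fine('prohibited_practices', 100_000_000):,.0f}")    # €35,000,000
```

As noted above, for SMEs the cap works the other way around: the lower of the two figures applies.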
When Will the AI Act Come Into Force?
The AI Act will be published in the EU Official Journal in the coming weeks, and will enter into force 20 days after publication. The AI Act will be fully applicable 24 months after entry into force, with a graduated approach as follows:
- 6 months after entry into force: the prohibitions on unacceptable-risk AI practices apply;
- 12 months after entry into force: the obligations for GPAI models and the governance and penalty provisions apply;
- 24 months after entry into force: the AI Act becomes generally applicable, including the rules for high-risk AI systems listed in Annex III; and
- 36 months after entry into force: the obligations for high-risk AI systems that are safety components of products covered by Annex I apply.
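As a rough illustration of how these milestones translate into calendar dates, the sketch below derives each application date from the entry-into-force date. The publication date used is purely hypothetical, since the Official Journal publication date is not yet known.

```python
# Illustrative sketch: deriving the AI Act's key application dates from a
# hypothetical Official Journal publication date (the actual date is not yet known).
from datetime import date, timedelta

publication = date(2024, 7, 1)                       # hypothetical publication date
entry_into_force = publication + timedelta(days=20)  # in force 20 days after publication

def months_after(start: date, months: int) -> date:
    """Move a date forward by whole months, keeping the same day of the month
    (simplified: assumes that day exists in the target month)."""
    year = start.year + (start.month - 1 + months) // 12
    month = (start.month - 1 + months) % 12 + 1
    return date(year, month, start.day)

milestones = {
    "Prohibited practices apply": months_after(entry_into_force, 6),
    "GPAI and governance rules apply": months_after(entry_into_force, 12),
    "General applicability": months_after(entry_into_force, 24),
    "Annex I high-risk product rules apply": months_after(entry_into_force, 36),
}
for label, when in milestones.items():
    print(f"{label}: {when.isoformat()}")
```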
Systems Placed on the Market or Put Into Service Before the AI Act Enters Into Force
There are further exceptions to the general 24 month timeline, specifically for AI systems and GPAI models that are already on the market or in service before the relevant application dates. Providers of GPAI models placed on the market before the date falling 12 months after the AI Act's entry into force will have 36 months from the date of entry into force in which to comply.
Operators of AI systems that are components of large-scale IT systems used in the areas of freedom, security and justice, and that are placed on the market or put into service within three years of the AI Act entering into force, have until 31 December 2030 to comply with the AI Act. The prohibitions are not subject to this grace period: prohibited AI practices must cease within six months of the AI Act's entry into force.
Providers and deployers of high-risk AI systems that are intended to be used by public authorities have six years from the AI Act's entry into force to become compliant. Operators of high-risk AI systems that are already on the market or in service before the general 24 month deadline will only be regulated under the AI Act if those systems undergo significant changes to their design after that date. Again, however, this does not extend to prohibited AI practices, which remain subject to the six month deadline.
How to Prepare?
While the AI Act has yet to enter into force, it would be prudent for businesses that develop or use AI to start taking active steps to prepare for the new legislative regime and its onerous obligations. Companies should undertake a comprehensive review of their practices to identify any existing or proposed AI elements and ensure that the procedures and measures they implement align with the requirements of the AI Act.