Introduction to the EU AI Act
1. Introduction
Rapid developments in AI in recent years have been accompanied by growing public concern about the safety of unregulated AI systems. As a result, AI governance and regulation have grown in prominence, and their pace of development has accelerated accordingly.
While several EU laws (e.g., the General Data Protection Regulation (GDPR)) already apply to AI applications, the AI Act is the EU’s first comprehensive, horizontal, cross-sectoral regulation focused on AI. The AI Act addresses fundamental rights and safety risks stemming from the development, deployment and use of AI systems within the EU. Its primary goals are to ensure the responsible and ethical use of AI technologies while fostering innovation and competitiveness in the EU. A further objective is to avoid fragmentation of the EU single market by setting harmonised rules on the development and placing on the market of ‘lawful, safe and trustworthy AI systems’, thereby ensuring legal certainty for all actors in the AI supply chain.
2. Scope of the AI Act
In essence, the AI Act regulates entry to the EU single market. Companies and state authorities that provide or deploy AI systems in the EU must comply with the rules set out in the AI Act. The AI Act also has extraterritorial effect: it applies whenever an AI-based system is used in the EU, regardless of where the provider is based, and whenever an output of such a system is used within the EU, regardless of where the deployer or provider is based. However, the AI Act will not apply to AI systems which are used “exclusively for military, defence or national security purposes” or to "AI systems used for the sole purpose of research and development".
3. Risk-Based Approach of the AI Act
The AI Act adopts a risk-based approach, categorising AI systems into different risk levels based on their potential impact on fundamental rights, health and safety, and societal wellbeing. This classification comprises four categories of risk ("unacceptable", "high", "limited" and "minimal"), plus one additional category for general-purpose AI (“GPAI”).
3.1. Prohibited AI Systems
AI applications deemed to represent unacceptable risks are banned in the EU. These include:
- biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
- untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases;
- emotion recognition in the workplace and educational institutions;
- social scoring based on social behaviour or personal characteristics;
- manipulation of human behaviour to circumvent free will;
- exploiting the vulnerabilities of people (due to their age, disability, social or economic situation);
- carrying out risk assessments of individuals for predictive policing purposes;
- “real-time” remote biometric identification systems used in publicly accessible spaces by law enforcement, except for searching for specific missing persons or victims of abduction, human trafficking or sexual exploitation; preventing a terrorist threat; or locating or identifying a suspect of a criminal offence. Such use is subject to prior authorisation by a judicial or independent administrative authority, and to temporal and geographic limitations. However, Ireland has secured an exemption from this obligation for activities in the field of police co-operation and judicial co-operation in criminal matters.
3.2. High Risk AI Systems
There are two categories of high-risk AI systems, namely:
I) AI systems used in products already subject to EU product safety law, such as toys, motor vehicles, IoT devices, medical devices and in vitro diagnostic medical devices;
II) AI systems falling into the following specific areas:
- remote biometric identification systems, other than those used solely to confirm an asserted identity;
- biometric categorisation according to sensitive/protected characteristics;
- emotion recognition systems;
- critical infrastructure, including critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity;
- access to education, learning outcome evaluation, monitoring behaviour of students during exams;
- employee recruitment and worker management;
- access to public and private services (including healthcare, credit scoring, first response deployment and patient triaging, life and health insurance);
- law enforcement (including assessing risk of a person becoming a crime victim, evaluating evidence reliability, assessing risk of offending/reoffending based on profiling of a person);
- migration, asylum and border control management;
- administration of justice and democratic processes.
AI systems deemed to be high risk are required to undergo extensive evaluation before being introduced to the market and ongoing monitoring throughout their operational lifecycle. Specifically, high-risk AI systems must comply with comprehensive obligations regarding risk mitigation, data governance, detailed documentation, human oversight, transparency and provision of information to users, robustness, accuracy, and cybersecurity. Such AI systems may also be required to undergo fundamental rights impact assessments.
High-risk AI systems will also be subject to conformity assessments to evaluate their compliance with the Act. Conformity assessments may be carried out by self-assessment or by third parties (i.e. a notified body designated by an EU Member State under the AI Act). Notified bodies may also carry out audits to check whether a conformity assessment has been carried out properly.
3.3. Limited Risk AI Systems
AI applications classified as limited-risk, such as chatbots, certain emotion recognition and biometric categorisation systems, and systems for generating deep fakes, are subject only to transparency obligations. These include informing users that they are interacting with an AI system, and marking synthetic audio, video, text and image content, in a machine-readable format, as artificially generated or manipulated.
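As a simple illustration of what machine-readable marking could look like in practice, the sketch below embeds provenance metadata in a generated PNG image using the Pillow library. The field names (ai_generated, generator) are illustrative assumptions; the AI Act does not prescribe a specific format, and real deployments may rely on dedicated provenance or watermarking standards instead.

```python
# A minimal sketch of machine-readable marking of synthetic content,
# assuming a provider tags generated PNG images via tEXt metadata chunks.
# Field names are illustrative, not prescribed by the AI Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_marked_image(img: Image.Image, path: str, generator: str) -> None:
    """Save an image with metadata flagging it as AI-generated."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # machine-readable flag
    meta.add_text("generator", generator)   # which system produced it
    img.save(path, pnginfo=meta)

# Hypothetical usage with a placeholder image:
img = Image.new("RGB", (64, 64), "white")
save_marked_image(img, "output.png", "example-model-v1")
print(Image.open("output.png").text.get("ai_generated"))  # -> "true"
```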
3.4. Minimal/No Risk AI Systems
AI systems representing minimal risks are not regulated. Instead, stakeholders are encouraged to build codes of conduct for these systems.
3.5. General Purpose AI Systems
In recent trilogue negotiations, an amended tiered approach was agreed for the obligations applying to GPAI models. The first tier applies to all GPAI models. It requires providers to adhere to transparency requirements by drawing up technical documentation; to draw up detailed information for providers of AI systems who intend to integrate the GPAI models into their own AI systems; to comply with EU copyright law; and to provide detailed summaries of the content used for training.
The second tier applies to GPAI models posing systemic risk, which are subject to more stringent obligations. In addition to the first-tier obligations, providers of second-tier GPAI models must conduct model evaluations; assess and mitigate systemic risks; conduct adversarial testing; report serious incidents; and ensure cybersecurity. Second-tier GPAI models may comply with the AI Act by adhering to codes of practice until harmonised EU standards are published.
The threshold between the first and second tiers is set according to the cumulative amount of computing power used to train a model, specified as 10^25 floating-point operations (FLOPs); a back-of-the-envelope check is sketched below.
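For a rough sense of scale, the following sketch estimates a model's cumulative training compute and compares it with the 10^25 FLOP threshold. The C ≈ 6 × N × D approximation (compute ≈ 6 × parameters × training tokens) comes from the machine-learning scaling literature and is an assumption for illustration; the AI Act does not prescribe an estimation method, and the example model size is hypothetical.

```python
# A rough, illustrative estimate of cumulative training compute, compared
# against the AI Act's 10^25 FLOP systemic-risk threshold. The 6*N*D
# approximation (compute ~ 6 x parameters x training tokens) is taken from
# the scaling-law literature and is an assumption here, not a method
# prescribed by the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold stated in the Act

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Estimate training compute using the common 6*N*D heuristic."""
    return 6.0 * n_parameters * n_tokens

# Hypothetical model: 100 billion parameters trained on 10 trillion tokens.
flops = estimated_training_flops(100e9, 10e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed to pose systemic risk:", flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)
```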
4. Penalties for Non-Compliance
Fines for violations of the AI Act will depend on the type of AI system, the size of the company and the severity of the infringement, and will be tiered as follows (a worked example follows this list):
- 7.5 million euros or 1.5% of a company's total worldwide annual turnover for the preceding financial year (whichever is higher) for the supply of incorrect information to notified bodies/national competent authorities.
- 15 million euros or 3% of a company's total worldwide annual turnover for the preceding financial year (whichever is higher) for violations of the obligations of providers, importers, distributors and deployers of high risk AI systems; the obligations of providers of GPAI models; and the obligations of authorised representatives of AI system providers established outside the EU.
- 35 million euros or 7% of a company's total worldwide annual turnover for the preceding financial year (whichever is higher) for violations of the prohibitions on banned AI applications.
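The sketch below works through the "whichever is higher" calculation for a hypothetical company; the tier figures are taken from the list above, and the turnover figure is invented for illustration.

```python
# A minimal sketch of the "whichever is higher" calculation used for each
# fine tier above. Figures come from the article; this is simple arithmetic
# for illustration, not legal advice.

def maximum_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# Hypothetical company with EUR 2 billion worldwide annual turnover,
# in breach of the prohibited-practices tier (35M EUR or 7%):
print(maximum_fine(2e9, 35e6, 0.07))  # 7% of 2B = 140M > 35M -> 140000000.0
```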
5. Timeline for Compliance
The AI Act is in the final stages of its approval process and may come into force around the summer of 2024. Once the AI Act comes into force, it sets the clock ticking on several compliance deadlines for different categories of AI system, as discussed below (and illustrated in the short date calculation at the end of this section).
Prohibited AI Systems
The ban on unacceptable risk AI systems will apply from 6 months after the AI Act enters into force (i.e. end of 2024 or early 2025).
High Risk AI Systems in Products Already Regulated under Existing EU Product Safety Rules
These will have 3 years after the AI Act comes into force to be brought into compliance with the AI Act (i.e. mid-2027).
General Purpose AI Systems
General Purpose AI Systems will have 12 months after the AI Act comes into force to be brought into compliance with the AI Act (i.e. mid-2025).
Legacy High Risk AI Systems
Public authorities that are providers or deployers of high-risk AI systems (other than components of large-scale IT systems) already placed on the market or put into service will have 6 years after the AI Act comes into force to make their systems compliant (i.e. mid-2030).
Operators of other high risk AI systems already placed on the market or put into service within 2 years of the AI Act coming into force (i.e. by mid-2026) must comply with the requirements of the AI Act only if, from that date, those systems undergo significant changes in their design.
Legacy General Purpose AI Systems
General Purpose AI Systems already placed on the market within 12 months of the AI Act coming into force will have a further 2 years after that date (i.e. 3 years in total after the AI Act comes into force) to be brought into compliance with the AI Act (i.e. mid-2027).
Other AI Systems
Other AI Systems will have 2 years after the AI Act comes into force to be brought into compliance with the AI Act (i.e. mid-2026).
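To make the relative deadlines above concrete, the sketch below computes the corresponding calendar dates from an assumed entry-into-force date. The 1 August 2024 start date is an assumption for illustration only; the month offsets are those stated in this section.

```python
# A small sketch computing the compliance deadlines listed above from an
# assumed entry-into-force date. The 1 August 2024 start date is an
# assumption for illustration; only the month offsets come from the article.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date (safe here: day is the 1st)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

entry_into_force = date(2024, 8, 1)  # assumed, for illustration only

deadlines = {
    "Prohibited AI systems (6 months)": add_months(entry_into_force, 6),
    "General purpose AI systems (12 months)": add_months(entry_into_force, 12),
    "Other AI systems (24 months)": add_months(entry_into_force, 24),
    "High-risk AI in regulated products (36 months)": add_months(entry_into_force, 36),
    "Legacy public-authority high-risk systems (72 months)": add_months(entry_into_force, 72),
}
for label, deadline in deadlines.items():
    print(f"{label}: {deadline.isoformat()}")
```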
DISCLAIMER
This article is for informational purposes only. It is not intended to, and does not, provide legal advice or a legal opinion.
It should also be noted that this article is based on a provisionally approved final compromise text of the AI Act, which may undergo further amendment before being finally adopted by the European Parliament. Accordingly, specific aspects of this article may become out of date and no longer accurate depending on any further amendments to the AI Act prior to its final adoption.