The EU AI Act: An Overview
Key Takeaways
- The EU Artificial Intelligence Act is expected to become law in summer 2024. It will take effect in stages, with the majority of provisions affecting most businesses applying after two years.
- The AI Act will introduce a new regulatory layer of AI governance that will sit alongside existing legal frameworks that have a significant impact on AI, such as data privacy laws, intellectual property laws and anti-discrimination laws.
- Obligations under the AI Act are focused primarily on providers of AI, but businesses using AI (referred to in the AI Act as ‘deployers’) may also need to comply with certain requirements depending on how they are using AI.
- The regulatory requirements under the AI Act vary significantly depending on how the AI is used. The AI Act includes detailed requirements for general-purpose AI and ‘high-risk’ uses of AI, as well as outright prohibitions of certain uses of AI. Other uses of AI are largely unregulated by the AI Act.
- The first step for most businesses will be to evaluate how they use AI, and the types of AI they use or offer, in order to determine how the relevant AI systems are categorized under the AI Act and the relevant regulatory requirements.
- Providers of ‘high-risk’ AI systems and developers of AI systems that are considering ‘high-risk’ use cases should be developing their AI governance strategies at an early stage to ensure their products are able to comply with the AI Act without late-stage remediation.
- Maximum penalties under the AI Act vary based on the obligation breached. Overall, the maximum penalty for non-compliance with the AI Act is the greater of €35 million or 7% of a group’s total worldwide annual turnover for the preceding financial year, but for many obligations the maximum fines are the higher of €15 million or 3% of total worldwide annual turnover.
After a lengthy legislative journey, the EU Artificial Intelligence Act (“AI Act”) is set to become law this summer. Styled by the European Parliament as ‘the world’s first comprehensive AI law’, the requirements of the AI Act vary significantly depending on the ways in which an AI system is to be used. This OnPoint provides an overview of the AI Act and the requirements for different categories of AI.
Who does the AI Act apply to?
The AI Act affects a range of stakeholders within the AI ecosystem. It applies to providers, deployers, importers and distributors of AI systems or general-purpose AI models, as well as product manufacturers that offer AI as part of their product offering.
- Providers - The primary focus of the AI Act is on ‘providers’ of AI systems and models. Broadly, ‘providers’ are organisations supplying AI under their own brand. ‘Providers’ will be subject to the AI Act if: (a) they put their AI on the market in the EU, or (b) the output of their AI system is used in the EU.
- Deployers - ‘Deployers’ are, in essence, users of AI systems. Deployers are subject to the AI Act if: (a) they are located or established in the EU, or (b) the output of the AI system is used in the EU.
- Importers - ‘Importers’ are organisations located or established in the EU that offer AI systems in the EU under the brand of a non-EU organisation.
- Distributors - ‘Distributors’ are any other parties in the supply chain (other than providers and importers) that make an AI system available on the EU market.
What AI systems does the AI Act regulate?
The definition of ‘AI system’ in the AI Act aligns with the OECD’s internationally recognised definition of AI. Key aspects of the definition of ‘AI system’ are that the system must operate with some degree of autonomy and that it infers from the input received how to generate outputs.
The AI Act includes general exemptions from the requirements of the AI Act for (amongst other things):
- R&D - AI systems intended solely for scientific R&D are excluded.
- AI development/testing - R&D and testing activities related to AI systems/models outside real-world conditions and before they are marketed are out of scope.
- Military, defence and national security – AI systems used exclusively for these purposes are generally excluded.
In addition, AI systems released under free and open-source licences are generally exempt from the AI Act’s obligations unless they fall into the prohibited or high-risk categories.
Risk-Based Framework
The specific obligations in relation to AI systems vary depending on the type of AI system and, in particular, the purposes for which it is intended to be used. Many AI systems do not fit into any of the categories that are subject to specific requirements under the AI Act. Such systems are largely unregulated by the AI Act, but providers and deployers will still need to consider the impact of existing laws, such as the GDPR. In addition, providers and deployers of all AI systems are subject to a general obligation to ensure a sufficient level of AI literacy amongst members of their workforce who interact with AI.
The key categories of AI are:
- Prohibited AI - systems that are considered to pose unacceptable risks to individuals’ fundamental rights[1] and are banned.
- High-Risk AI - systems that are used for specified purposes that have the potential to create significant risks (for example, in the context of recruitment) are subject to prescriptive compliance requirements.
- Chatbots and Generative AI – the AI Act includes a relatively small number of transparency obligations targeting particular use cases, such as chatbots and certain uses of generative AI.
- General-Purpose AI - AI models that ‘display significant generality’, are ‘capable of competently performing a wide range of distinct tasks’ and ‘can be integrated into a variety of downstream systems or applications’ are subject to various specific requirements.
If an AI system falls within more than one of the above categories, the requirements of each category will apply. For example, a provider of an AI chatbot used to screen job applicants would need to comply with the obligations applicable both to ‘high-risk’ systems and to chatbots.
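As a rough illustration of this cumulative effect, the sketch below models the categories as a set of regimes that accumulate rather than a single classification. It is a simplification for illustration only: the flags, category names and function are hypothetical and do not reproduce the AI Act’s actual legal tests.

```python
# Illustrative sketch only: a simplified model of how the AI Act's categories
# accumulate. The flags and category names below are hypothetical and do not
# reproduce the AI Act's actual legal tests.
from dataclasses import dataclass


@dataclass
class AISystem:
    prohibited_use: bool = False               # e.g. social scoring
    high_risk_use: bool = False                # e.g. screening job applicants
    interacts_with_individuals: bool = False   # e.g. a chatbot
    general_purpose_model: bool = False        # e.g. a large foundation model


def applicable_regimes(system: AISystem) -> set[str]:
    """Return every regulatory category that applies, not just one."""
    regimes = set()
    if system.prohibited_use:
        regimes.add("prohibited")        # banned outright
    if system.high_risk_use:
        regimes.add("high-risk")         # prescriptive compliance obligations
    if system.interacts_with_individuals:
        regimes.add("transparency")      # chatbot-style disclosure obligations
    if system.general_purpose_model:
        regimes.add("general-purpose")   # general-purpose AI model obligations
    return regimes or {"largely unregulated by the AI Act"}


# The example from the text: a chatbot used to screen job applicants
print(applicable_regimes(AISystem(high_risk_use=True, interacts_with_individuals=True)))
# -> {'high-risk', 'transparency'} (set order may vary)
```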
1. Prohibited AI
A limited set of practices that are considered particularly harmful are prohibited entirely, including:
- Emotion recognition in the workplace/education - AI systems intended to be used to detect the emotional state of individuals in situations related to the workplace and education.
- Untargeted image scraping for facial recognition databases - AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Subliminal techniques and manipulation causing significant harm - AI systems that deploy subliminal, purposefully manipulative or deceptive techniques to materially distort the behaviour of a person or group in a way that causes, or is reasonably likely to cause, that person, another person or a group significant harm.
- Social scoring – AI systems that evaluate individuals based on their social behaviour or assumed personal characteristics, where the resulting score leads to detrimental or unfavourable treatment in an unrelated context, or treatment that is unjustified or disproportionate.
- Biometric categorisation – AI systems that use individuals’ biometric data to deduce or infer a person’s race, political opinions, trade union membership, religious beliefs, sex life or sexual orientation.
- Predictive policing – profiling individuals to predict the likelihood that they will commit a criminal offence.
- ‘Real time’ identification for law enforcement – using ‘real-time’ remote identification systems in public spaces for law enforcement (except in specified circumstances).
2. High-Risk AI
Certain AI systems are considered to pose significant risks to the health and safety or fundamental rights of individuals and are categorised as ‘high-risk’. Providers of ‘high-risk’ AI systems are subject to prescriptive compliance obligations, while importers, distributors and deployers have their own more limited obligations.
What is ‘high-risk’ AI?
AI systems used for specified purposes in the following fields are ‘high-risk’, unless the AI system does not in fact pose a significant risk to the health, safety or fundamental rights of individuals:
- Biometrics – AI systems used for emotion recognition, certain biometric identification systems and AI systems used for biometric categorisation based on sensitive attributes.
- Critical infrastructure – AI systems used as a safety component in the management and operation of critical infrastructure, such as water supply, electricity supply or road traffic.
- Education and vocational training – AI systems used in admissions to education/training institutions, assessments/evaluations, assessing access to education/training or detecting cheating in tests.
- Employment – AI systems for recruitment, evaluations or decisions relating to work-allocation, promotion or termination.
- Public services – AI systems used to assess access to state benefits or healthcare, or to classify emergency calls.
- Credit / Insurance – AI systems used for credit scoring or for risk assessment and pricing of life and health insurance.
- Law enforcement, migration and border control – AI systems with various specific functions in the fields of law enforcement, migration and border control.
- Administration of justice – AI systems used to assist courts and tribunals to determine cases.
- Democratic processes – AI systems used to influence the outcome of elections.
In addition, an AI system will be ‘high-risk’ if it is intended to be used as a safety component of a product (or is itself a product) that is subject to specified EU product safety legislation (such as regulations governing vehicles, machinery and toys).
Obligations on providers
The obligations in connection with high-risk AI systems are numerous and prescriptive. These obligations primarily fall on the providers of the AI system. Providers of high-risk AI systems will need to:
- establish risk management systems;
- implement appropriate data governance and management practices;
- maintain certain technical documentation;
- ensure the AI system technically allows for the automatic recording of logs over its lifetime;
- maintain sufficient levels of transparency;
- ensure human oversight measures are in place;
- achieve appropriate levels of accuracy, robustness and cybersecurity;
- affix a CE marking to indicate conformity with the AI Act;
- register the AI system in an EU database of high-risk AI systems;
- carry out ongoing monitoring of their AI system’s compliance; and
- report ‘serious incidents’[2] to regulators within prescribed timeframes.
Providers of high-risk AI systems must make a ‘declaration of conformity’ that their AI system complies with the requirements of the AI Act (as well as the requirements of the GDPR where personal data is processed). In general, providers can make a declaration of conformity on the basis of an internal self-assessment of their compliance. However, for certain AI systems in the field of biometrics, compliance must be assessed by an independent ‘notified body’.
If a provider of a high-risk AI system is established outside the EU, the provider must appoint an authorised representative in the EU.
Obligations on importers and distributors
Although the majority of obligations in relation to high-risk AI fall on the provider of the relevant AI system, importers and distributors of the AI system also have obligations. Importers and distributors are obliged to conduct diligence on the compliance of the AI system with the requirements of the AI Act and not put the AI system on the market in the EU if they have reason to consider that the AI system does not comply.
Significantly, an importer or distributor can become directly responsible for the AI system’s compliance (typically the responsibility of the provider) if the importer/distributor puts their own brand on the high-risk AI system or makes substantial changes to the AI system.
Obligations on deployers
Deployers of high-risk AI systems also have obligations relating to their use of high-risk AI systems. Deployers must:
- use the AI system in accordance with the provider’s instructions for use;
- assign human oversight to competent individuals;
- ensure that input data that the deployer supplies is relevant and sufficiently representative;
- monitor the operation of the AI system and report (a) risks to health and safety and fundamental rights ‘beyond that considered reasonably acceptable’, and (b) ‘serious incidents’ without undue delay to the provider and regulators;
- keep logs automatically generated by the AI system;
- comply with certain transparency obligations where high-risk AI systems are deployed in the workplace or where high-risk AI systems are used to make decisions about individuals; and
- on request from an affected individual, provide a reasoned explanation of decisions made using AI that have a significant effect on them.
Deployers that use high-risk AI systems for credit checks, for risk assessment or pricing of life and health insurance, or to provide public services, as well as deployers that are public bodies, must carry out an assessment of the impact on individuals’ fundamental rights.
Deployers that use high-risk AI systems for emotion recognition or biometric categorisation must inform individuals who are subject to the system (as well as comply with the GDPR).
Importantly for financial institutions, where a financial institution is subject to internal governance obligations under EU financial services law, they will be deemed to be in compliance with certain of the AI Act requirements if they comply with governance rules in the relevant financial services law.
As with importers/distributors, deployers are required to assume the obligations of a provider in relation to a high-risk AI system if they put their own brand on the high-risk AI system or make substantial changes to the AI system.
3. Chatbots and Generative AI
For AI that is not prohibited, high-risk or a general-purpose model, the AI Act’s provisions are relatively limited and focused around transparency.
- Providers of AI systems that interact directly with individuals (such as chatbots) must ensure that it is reasonably clear to individuals that they are interacting with AI.
- Providers of AI systems that generate content must ensure that the output is marked in a machine-readable manner to indicate that it is AI-generated.
- Deployers of AI systems that generate ‘deep fakes’ must disclose that the content is artificially generated or manipulated.
- Deployers of AI systems that generate or manipulate text for informing the public on matters of public interest (e.g. current affairs journalism) must disclose that the content is artificially generated or manipulated (unless there is sufficient human review/control).
4. General-Purpose AI Models
General-purpose AI models are characterised by their ‘significant generality’, ability to perform a wide range of distinct tasks and the possibility to integrate them into a variety of downstream systems and applications.
Providers of general-purpose AI models must:
- keep technical documentation of the AI model up to date, detailing the training and testing processes and evaluation outcomes;
- provide and keep up-to-date information and documentation for AI system providers planning to incorporate general-purpose AI models into their systems;
- implement a policy to comply with EU copyright law;
- publish a comprehensive summary of the training data used for the general-purpose AI model, following a template provided by the European AI Office (“AI Office”); and
- appoint an authorised representative in the EU (if the provider is established outside the EU).
Particularly powerful general-purpose AI models that pose ‘systemic risk’ face further obligations. Providers of such models are required to:
- implement an adequate level of cybersecurity protection for the general-purpose AI model and the physical infrastructure of the model;
- perform model evaluations in accordance with standardised protocols and tools to identify and mitigate systemic risk, and continuously assess and mitigate such risk;
- assess and mitigate potential systemic risks associated with the development, placing on the market, or use of the general-purpose AI model; and
- document and report any ‘serious incidents’ to the appropriate authorities.
Penalties and Enforcement
Under the AI Act, maximum penalties vary based on the obligation breached. Overall, the maximum penalty for non-compliance is the greater of €35 million or 7% of a group’s total worldwide annual turnover for the preceding financial year, but for many obligations the maximum fines are the higher of €15 million or 3% of total worldwide annual turnover.
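To make the penalty formula concrete, the short sketch below computes the cap as the higher of the fixed amount and the percentage of turnover, using the two tiers mentioned above. The turnover figure is invented purely for illustration.

```python
# Illustrative calculation of the maximum fine under the AI Act's
# "greater of a fixed amount or a percentage of worldwide turnover" approach.
# The turnover figure below is hypothetical.

def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_of_turnover: float) -> float:
    """Return the higher of the fixed cap and the percentage of turnover."""
    return max(fixed_cap_eur, pct_of_turnover * turnover_eur)

group_turnover = 2_000_000_000  # hypothetical €2bn worldwide annual turnover

# Highest tier: the greater of €35 million or 7% of turnover
print(max_fine(group_turnover, 35_000_000, 0.07))  # 140000000.0 -> €140 million

# Tier applicable to many obligations: the greater of €15 million or 3% of turnover
print(max_fine(group_turnover, 15_000_000, 0.03))  # 60000000.0 -> €60 million
```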
Individual EU member states are required to designate their national AI regulators. It remains to be seen whether EU member states will create new regulators focused on AI or bring enforcement of the AI Act within the remit of existing authorities, such as data protection regulators. The AI Act envisages that enforcement may be divided amongst different regulatory bodies. For example, for financial institutions, the AI Act envisages that enforcement may fall within the remit of existing financial services regulators. The European Commission will have exclusive enforcement powers in relation to general-purpose AI models.
Implementation Timeline
Once the AI Act becomes law, there will be a transition period before it is fully in force, with the majority of obligations for most businesses taking effect after two years. The AI Act’s various provisions will be phased in as follows (a simple date calculation illustrating these offsets appears after the list):
- 6 months: bans concerning prohibited AI systems will become applicable.
- 9 months: AI Office codes of practice should become available.
- 12 months: requirements concerning general-purpose AI will become applicable.
- 24 months: the majority of rules in the AI Act will take effect.
- 36 months: obligations relating to AI systems that are ‘high-risk’ because they are subject to specified EU product safety legislation (such as regulations governing vehicles, machinery and toys) will become applicable.
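For illustration only, the applicability dates can be computed as simple offsets from the date the AI Act enters into force. The entry-into-force date below is an assumption for the example, not the actual date (which depends on publication in the EU’s Official Journal).

```python
# Illustrative only: computing the phased applicability dates as month offsets
# from an assumed entry-into-force date. The date used here is hypothetical.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole months to a date, clamping the day to avoid invalid dates."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    return date(year, month, min(d.day, 28))

entry_into_force = date(2024, 8, 1)  # hypothetical assumption

milestones = [
    ("Prohibited-AI bans apply", 6),
    ("AI Office codes of practice expected", 9),
    ("General-purpose AI requirements apply", 12),
    ("Majority of the AI Act's rules apply", 24),
    ("Rules for high-risk AI under product safety legislation apply", 36),
]

for label, months in milestones:
    print(f"{label}: {add_months(entry_into_force, months)}")
```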
Businesses wanting to demonstrate their preparedness can join the AI Pact (to be administered by the AI Office), a voluntary initiative for industry leaders to proactively adopt the provisions of the AI Act before they become legally binding.
Preparatory Next Steps
The AI Act will introduce a new regulatory layer of AI governance that will sit alongside existing legal frameworks that have a significant impact on AI, such as data privacy laws, intellectual property laws and anti-discrimination laws. As the use of AI increases and regulators across industries consider how to govern it, businesses should be evaluating the legal risks associated with AI across both the existing and new legal frameworks. Compliance with the AI Act is likely to form a significant part of AI governance where AI is used for ‘high-risk’ purposes, but for many AI uses the AI Act itself may not impose a significant regulatory burden and other laws, such as the GDPR, might be more pertinent.
Contrary to the approach in many areas, where UK legislation remains aligned with that of the EU, the current UK government has opted against enacting any comprehensive, AI-specific legislation. Instead, the UK has been developing a non-binding, principles-based framework for regulators to apply within their existing fields and the current government is only considering limited and targeted legislative intervention. However, with a General Election likely to take place by the end of 2024, an incoming new government may well take a more forceful approach to regulating AI in the UK.
The enactment of the AI Act is both the end of a lengthy legislative process (the European Commission first published its proposal for the legislation in early 2021 – see our OnPoint at that time) and the start of a new process. The AI Act envisages an array of further material to supplement the legal text of the legislation, such as guidance, implementing acts, harmonised standards and codes of conduct, to be produced by the AI Office, the European Artificial Intelligence Board and various other bodies and panels established under the AI Act. Businesses should be developing their AI governance and a roadmap for compliance with the particular requirements of the AI Act that apply to them, but should also remain flexible and adapt their strategies as new material is released.
Footnotes
1. The EU Charter of Fundamental Rights enshrines key fundamental rights, including democracy, non-discrimination, the protection of personal data, the rule of law and environmental protection.
2. ‘Serious incidents’ are incidents involving an AI system that lead to (a) death or serious harm to health, (b) serious and irreversible disruption to critical infrastructure, (c) infringement of EU law obligations designed to protect fundamental rights, or (d) serious harm to property or the environment.