EU Reaches Historic Agreement on AI Act

On 8 December 2023 — following three days of lengthy and intensive negotiations — EU legislators reached political agreement on the world’s first stand-alone law regulating AI: the EU’s AI Act. The EU considers the AI Act one of its key pieces of legislation and fundamental to ensuring the EU becomes the world’s leading digital economy. The EU aims for the AI Act to have the same ‘Brussels effect’ as the GDPR — in other words, to have a significant impact on global markets and practices.

Whilst the text of the political agreement reached on the AI Act has not yet been made public, we set out below six key takeaways based on communications from the Council, EU Parliament, and EU Commission:

1. Who and What Does the AI Act Apply to?

The AI Act applies to providers, manufacturers, importers, distributors, and deployers of AI systems. Whilst the most onerous obligations under the AI Act are imposed on those directly involved in the commercial AI system life cycle — i.e., providers, manufacturers, importers, and distributors — the AI Act also imposes obligations on AI system users (deployers). Obligations imposed on providers, manufacturers, importers, and distributors for high-risk AI are similar to those under EU product safety laws — see details below. Obligations imposed on users include the implementation of human oversight and measures relating to data governance and transparency.

The AI Act will apply where the AI system or its output has touchpoints with the EU — meaning that it applies to those who sell, import, distribute, and deploy the AI system in the EU, or where the output is intended to be used in the EU, even where those companies are based outside the EU.

The definition of an ‘AI system’ was heavily debated throughout the legislative process but is now confirmed to align with the more future-proof definition of AI systems adopted by the OECD. In essence, an AI system is defined as a machine-based system that acts with a certain level of autonomy and generates output (e.g., decisions, recommendations, predictions) on the basis of input. The AI Act recognizes the need for regular review of its text as technology evolves. This applies in particular to the definition of AI, for which the EU Commission was granted the power to submit, at any point in time, a proposal to amend the AI Act to reflect the “developments in technology and the state of progress in information society.” Agreeing on a common definition of “AI” on a global scale is fundamental to achieving consistency, but it remains to be seen whether legal frameworks outside the EU will adopt aligned or diverging definitions of AI. The AI Act does not apply to areas outside the scope of EU law, such as where AI is used for national security or military/defence purposes.

2. Which Industries and Products Are Impacted?

The AI Act is a horizontal piece of legislation, meaning that it applies to all sectors and industries and thus differs from the approach taken in other jurisdictions, such as the UK, which intends to adopt an industry-specific approach to AI regulation.

Importantly, the AI Act introduces a 4-tiered risk-based approach to the regulation of AI.

  • Unacceptable Risk – AI systems considered a clear threat to the safety, livelihoods, and rights of people; these will be prohibited. The latest negotiations on the AI Act further extended the list of prohibited (i.e., unacceptable-risk) AI systems, which now includes systems used for behavioural manipulation, untargeted scraping of facial images from the internet or CCTV footage, emotion recognition, and social scoring (i.e., evaluating or classifying individuals based on their social behaviour, socio-economic status, or known or predicted personal(ity) characteristics). In addition, following extensive debate, the use of AI for remote biometric identification in public spaces is permitted for specific limited use cases by law enforcement — as opposed to the total ban that some had advocated for.
  • High Risk – This category includes the use of AI in relation to certain products, for example, machinery (as defined under the EU Machinery Regulation), radio equipment, medical devices, and in vitro diagnostic medical devices, as well as AI used in certain products in the civil aviation (security) and automotive industries. In addition, specific use cases have also been called out as “high risk” irrespective of the industry or product in which they are deployed, for example, the use of AI in biometric identification systems, critical infrastructure, creditworthiness evaluation, HR contexts, and law enforcement. Under the AI Act, high-risk AI must comply with various requirements, such as conformity assessments, post-market surveillance, data governance and quality measures, mandatory registration, incident reporting, and fundamental rights impact assessments. AI systems on the high-risk list that only perform narrow procedural tasks, improve the result of a previous human activity, do not influence human decisions, or perform purely preparatory tasks are not considered high-risk. However, an AI system that profiles individuals shall always be considered high-risk. In addition, high-risk AI systems deployed by public authorities or entities acting on their behalf will have to be registered in a public EU database, unless used for law enforcement or migration purposes.
  • Limited Risk – AI systems that qualify as neither high risk nor unacceptable risk under the AI Act, but which interact with individuals, are subject to limited transparency obligations (e.g., chatbots that interact with individuals).
  • Minimal or No Risk – AI systems that do not fall within one of the three risk categories above are considered minimal or no risk and, in turn, are not regulated under the AI Act (e.g., email spam filters); however, they may still be regulated by other legal frameworks, e.g., the GDPR.

3. How Are Foundation, Generative, and General-Purpose AI Models Regulated?

Foundation AI models, as well as their subcategory of generative AI models, are increasingly developed and deployed across all sectors and are subject to a specific set of rules under the Act, similar to the requirements imposed on high-risk AI set out above. In the run-up to the final negotiations, significant debate revolved around how foundation AI models should be regulated under the Act: the Parliament was strongly in favour of regulating all foundation AI models identically, rather than distinguishing regulatory requirements on the basis of how the foundation AI model is used in particular use cases. EU Member States with strong “home grown” foundation AI companies, such as France and Germany, were particularly opposed to this type of regulation, arguing that it would unduly hinder innovation, and were strong supporters of self-regulation of foundation AI.

General-purpose AI (“GPAI”) is defined under the AI Act as “an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.” GPAI is also subject to a dedicated set of rules designed to ensure transparency along the value chain.

The compromise text does not appear to distinguish foundation AI, generative AI, or GPAI regulation based on use cases. However, according to the EU Commission Q&As, “high impact” GPAI models (i.e., those trained using a total computing power of more than 10^25 floating-point operations, or “FLOPs”) will be subject to more onerous requirements due to the presumption that they carry systemic risk. Providers of GPAI models with systemic risks must assess and mitigate such risks, report serious incidents, conduct state-of-the-art testing and model evaluations, ensure cybersecurity, and provide information on the energy consumption of their models. These new requirements are to be operationalized through codes of practice developed by stakeholders from industry, civil society, the scientific community, and the EU Commission. The 10^25-FLOP threshold captures the currently most advanced GPAI models (e.g., GPT-4). The EU AI Office (to be set up within the EU Commission) is tasked with assessing whether this threshold should be adjusted in light of technological advancement.
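To make the compute threshold concrete, the short sketch below (our own illustration, not part of the AI Act or the Commission Q&As) estimates a model’s total training compute using the commonly cited rule of thumb of roughly six floating-point operations per parameter per training token, and checks it against the 10^25-FLOP systemic-risk presumption. The model sizes used are purely hypothetical.

```python
# Illustrative sketch only: checking hypothetical GPAI models against the
# AI Act's 10**25-FLOP systemic-risk threshold. The "6 * parameters * tokens"
# estimate is a widely used heuristic, not a formula from the Act itself.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 10**25  # total training compute, per the Commission Q&As

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough heuristic: total training compute ~ 6 * parameters * tokens."""
    return 6.0 * parameters * training_tokens

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if total training compute exceeds the Act's 10**25-FLOP threshold."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical frontier-scale model: 1e12 parameters, 1e13 training tokens
flops = estimate_training_flops(1e12, 1e13)   # 6e25 FLOPs
print(presumed_systemic_risk(flops))          # True: above 10**25

# Hypothetical smaller model: 7e9 parameters, 2e12 tokens (8.4e22 FLOPs)
print(presumed_systemic_risk(estimate_training_flops(7e9, 2e12)))  # False
```

Note that the threshold is a presumption of systemic risk, not a hard legal classification: the EU AI Office may adjust it over time, and qualitative criteria may also apply.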

4. How Will the AI Act be Enforced?

Non-compliance with the AI Act may result in: (i) regulatory fines by national EU Member State competent authorities of up to 7% of worldwide annual turnover, (ii) civil action initiated by individuals harmed by an AI system, and/or (iii) individual complaints. The AI Act is primarily enforced at a national EU Member State level which, much like the GDPR, could give rise to national fragmentation in interpretation and enforcement. However, to address this potential issue, the compromise text does provide that the most advanced AI models (i.e., GPAI and high-impact foundation models) will be subject to the regulatory supervision of a pan-EU body — a dedicated AI Office — that is to be set up within the EU Commission. A European Artificial Intelligence Board will also be established with a representative from each Member State to facilitate effective and harmonized implementation of the AI Act and with technical expertise provided by an advisory forum.

It should be noted that risk related to the development, commercialisation, and use of AI does not solely stem from the new AI Act. In particular, AI can also be regulated on the basis of, for example, the GDPR (which, amongst other things, restricts the use of automated decision-making and profiling) and other new EU digital data laws (such as the EU Digital Services Act). Most recently, the highest EU court (the CJEU) issued a judgment resulting in a de facto ban of automated credit-scoring systems in the EU.

We therefore anticipate that, whilst the AI Act may not be enforceable for another 24 months (see below), enforcement related to AI in the EU will still take place now on the basis of adjacent legal frameworks, such as the GDPR. Furthermore, outside the EU, AI systems may be subject to enforcement and regulatory supervision on the basis of new laws, standards, and guidance that are developing rapidly — such as the standards, regulations, and agency initiatives flowing from President Biden’s Executive Order on Safe, Secure, and Trustworthy AI.

5. When Will the AI Act Apply?

Following last week’s provisional agreement on the text of the AI Act, work will continue at the technical level to finalise a number of details. Once this is completed, the text will be subject to endorsement by the co-legislators (the EU Parliament and Council) and undergo legal-linguistic revision before formal adoption. Adoption is not expected to take place before Spring 2024. The AI Act foresees a transition period of 24 months for most requirements, meaning that enforcement can only take place once this transition period has expired. In addition, it is understood that the final text will also introduce shorter transitional periods of 12 months for requirements related to GPAI and 6 months for requirements related to unacceptable-risk (i.e., prohibited) AI.

6. What Should Companies Do?

Many companies today deploy some form of AI that could ultimately be subject to the AI Act — whether internally or in customer-facing products and services. The AI Act has a broad scope of application across all sectors and industries and applies to companies both in and outside the EU. Now that the AI Act has been agreed at a political level, businesses should start assessing whether they may be subject to the AI Act and consider the impact of its requirements on their products and services. Further, the EU Commission encourages industry and other representative organizations to adopt voluntary codes of conduct establishing how a given industry complies with the AI Act.

As indicated above, the EU may have been the first to agree on a stand-alone law regulating AI, but it certainly is not alone — many laws, standards, and frameworks are being developed rapidly to regulate AI globally, in addition to existing legal frameworks that may already regulate AI (e.g., the GDPR). As such, companies developing, distributing, and using AI systems should consider developing an AI governance program and assess the impact and risk of AI and these new AI legal frameworks on their business. In such an assessment, and when weighing risk, companies should also factor in the opportunities AI can bring to their business and internal organisation — the use of AI often yields benefits, e.g., efficiency gains — as well as customer trust in the company’s use and development of AI.

This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.