European Parliament Adopts AI Act Compromise Text Covering Foundation and Generative AI

On 14 June 2023, the European Parliament adopted – by a large majority – its compromise text for the EU’s Artificial Intelligence Act (“AI Act”), paving the way for the three key EU institutions (the European Parliament, the Council of the European Union and the European Commission) to start the ‘trilogue negotiations’. This is the last substantive step in the legislative process, and the AI Act is now expected to be adopted and become law around December 2023 / January 2024. The AI Act will be a first-of-its-kind piece of AI legislation with extraterritorial reach.

The European Parliament’s compromise text takes the same conceptual approach as the European Commission’s original proposal of April 2021: AI will be regulated in the EU on a risk-based, industry-agnostic and horizontal basis, with the most onerous regulatory obligations (or even a ban) imposed on AI systems deemed to involve an ‘unacceptable’ or ‘high’ degree of risk to fundamental rights and the other interests protected under the AI Act. The AI Act will apply directly in all EU Member States and could also apply outside the EU on the basis of its broad extraterritorial scope, which the European Parliament has further extended.

However, as compared to the previous iteration of the text issued by the EU Council in November 2022, the European Parliament has proposed significant amendments to account for the recent exponential growth of foundation and generative AI models and, in turn, the perceived risks associated with such models.

Another noteworthy amendment compared to the November 2022 iteration of the AI Act is the increase in the level of potential fines: from €30 million or 6% of a company’s annual worldwide turnover for the preceding financial year (whichever is higher) to €40 million or 7% of annual worldwide turnover (again, whichever is higher).
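To illustrate the ‘whichever is higher’ mechanic, below is a minimal sketch (in Python) of how the maximum potential fine under the Parliament’s text would be computed. Only the €40 million and 7% parameters come from the compromise text; the turnover figure in the example is purely hypothetical.

```python
# Minimal sketch of the "whichever is higher" fine cap under the
# European Parliament's compromise text. Only the EUR 40 million and
# 7% parameters come from the text; the example turnover is hypothetical.

FIXED_CAP_EUR = 40_000_000  # EUR 40 million
TURNOVER_RATE = 0.07        # 7% of annual worldwide turnover (preceding financial year)

def max_potential_fine(annual_worldwide_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and 7% of worldwide turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * annual_worldwide_turnover_eur)

# Example: a company with EUR 1 billion in annual worldwide turnover
# faces a maximum potential fine of EUR 70 million (the 7% figure applies).
print(f"EUR {max_potential_fine(1_000_000_000):,.0f}")
```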

We discuss below the obligations applicable to foundation and generative AI models – the latter a sub-set of the former – under the AI Act as proposed by the European Parliament.

Key Takeaways

  • The European Parliament’s compromise text imposes new requirements on providers of foundation AI models (which include, as a sub-set, generative AI models). Such requirements include an obligation to demonstrate ‘compliance-by-design’ throughout the development of the AI model to ensure “adequate performance, predictability, interpretability, corrigibility, safety and cybersecurity”, as well as a requirement to register the AI model in an EU database. Generative AI models will be subject to additional requirements in relation to user transparency, more extensive testing and, importantly, the documentation and publication of detailed summaries of the training data.
  • Importantly, neither foundation nor generative AI systems will be considered an ‘unacceptable-risk’ or ‘high-risk’ AI system under the AI Act by default – although they could fall within either risk category depending on how they are used.
  • The European Parliament’s compromise text increases the potential fines for non-compliance to €40 million or 7% of annual worldwide turnover (whichever is higher); final adoption of the AI Act is expected by January 2024.

What Are Foundation and Generative AI Models?

Key definitions under the AI Act have been the subject of much debate throughout the legislative process and in particular, the definition of an “AI system” has evolved over time. As noted in our last blog post, the definition is now more closely aligned with the Organization for Economic Cooperation and Development’s (“OECD”) definition, favouring global co-ordination in relation to AI standards.

Importantly, the European Parliament’s compromise text includes, for the first time, definitions for both “foundation” and “generative” AI models as follows:

  • A “foundation model” means: “an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks”; and
  • “Generative AI” means: “foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video”.

Foundation (including generative) AI models, such as Large Language Models (“LLMs”), are mainly distinguished from more traditional AI models by the fact that – through the processing of large unstructured (unlabelled) data sets and, often, unsupervised (or ‘self-supervised’) machine learning techniques – they can perform a wide range of different tasks. Unlike non-foundation models, which are typically trained to perform one specific task, foundation models can apply the learnings gained from performing task A to task B. For the avoidance of doubt, note that “generative” AI models constitute a particular sub-set of “foundation” models, and the AI Act would impose additional regulatory requirements on “generative” models.

What are the Key Obligations?

It is important to note that foundation (including generative) AI systems will not be considered ‘unacceptable-risk’ or ‘high-risk’ AI systems under the AI Act by default – and, in that regard, the AI Act’s most onerous regulatory obligations (including potentially a ban) do not apply to such systems merely because they are foundation and/or generative AI. However, the European Parliament did adopt a dedicated new legal provision – Article 28b of the AI Act – which sets out the legal requirements with which foundation AI must comply. As indicated above, an additional set of regulatory obligations applies to generative AI under Article 28b.

The regulatory requirements under Article 28b are imposed on providers of foundation AI models. Users of foundation AI models are not in scope of Article 28b; however, they may fall within the scope of the AI Act’s other regulatory obligations – for instance, where the foundation AI system qualifies as high-risk under the AI Act (Title III of the AI Act).

  • Who Is Considered a ‘Provider’ of Foundation AI: the AI Act defines a provider as an entity that develops (or commissions the development of) an AI system with the aim of placing it on the EU market or putting it into service under its own name or trademark (whether for payment or free of charge). Importantly, in the current text of the AI Act, the ‘provider’ concept has been specifically considered in the scenario where foundation AI is offered ‘as-a-service’: e.g., Company X grants API access to Company Y for Company Y to develop AI-powered applications. In such a case, both Company X and Company Y are likely to be considered ‘providers’ under the AI Act, meaning that, in relation to foundation AI models, the concept of a ‘provider’ may apply to a broader set of organizations than originally anticipated.
  • Continuous Risk Assessment, Risk Mitigation and ‘Compliance-by-Design’: providers will need to demonstrate, throughout the development of the foundation AI model, that:
    • they have identified, considered and mitigated any risks that the foundation AI model poses to individuals and society – including in relation to health and safety, fundamental rights, the environment, democracy and/or the rule of law; and
    • they have designed and developed the model to ensure “adequate performance, predictability, interpretability, corrigibility, safety and cybersecurity”.

To meet these requirements, Article 28b provides that providers may seek to rely on independent experts for advice on managing AI risk, and should independently consider and anticipate various risks during the design stage of the foundation AI model (‘compliance-by-design’). In addition, providers should ensure they adequately document any residual risks when they put an AI system into service. Providers should also perform extensive testing of the model at various stages of development and throughout the lifecycle of the model. Lastly, providers should set up a quality management system to keep track of the foundation AI model’s compliance with Article 28b.

  • Rely on Quality Datasets: providers must give due consideration to the data sets they feed into foundation AI models, as such models are more likely to pick up and act upon, for example, bias. Providers of foundation AI systems may only use data sets that are subject to appropriate data governance measures specific to such models, including, in particular, measures to examine possible bias in foundation models.
  • Document and Register the Foundation AI Model: providers are required to draw up extensive technical documentation and instructions for use – including so as to allow downstream foundation AI model providers to comply with the AI Act – and to register foundation models in an EU database.
  • Consider Environmental Impact: due to the extensive computing and processing power required by foundation AI models, providers of such models should consider the environmental impact of their systems and make use of relevant standards (which are to be developed under the AI Act) in order to improve energy efficiency.
  • Obligations Applicable to Generative AI: foundation AI models that qualify as ‘generative AI’ under the AI Act must comply with an additional set of obligations in relation to user transparency, more extensive testing and, importantly, the documentation and publication of detailed summaries of the data used to train the generative AI system where such training data is protected under copyright law.

These obligations apply irrespective of whether the foundation model is a standalone product or embedded in an AI system, and irrespective of how it is offered or distributed (including where the foundation AI is offered on a free or open-source basis). Standards will be developed under the AI Act to specify the above requirements further.

Next Steps

Trilogue negotiations have already begun, and further amendments to the AI Act’s text should be monitored closely, especially in relation to generative AI. The EU is targeting closing these discussions by November 2023, with final adoption of the AI Act expected in either December 2023 or January 2024. Shortly after adoption, the AI Act will be directly applicable in the EU Member States (i.e., without the EU Member States needing to enact/implement the AI Act into national law), although a grace period of between 24 and 36 months is expected to apply. Given the rapid development of AI regulation globally, including the AI Act and other emerging AI laws and international standards, businesses that develop, provide or use AI should consider developing an AI compliance strategy and program.

This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.