EU Moving Closer to an AI Act – Key Areas of Impact for Life Sciences/MedTech Companies
The European Union is moving closer to adopting the first major legislation to regulate artificial intelligence horizontally. Today, the European Parliament (Parliament) reached a provisional agreement on its internal position on the draft Artificial Intelligence Regulation (AI Act). The text is due to be adopted by the Parliament committees in the coming weeks and by the Parliament plenary in June. Plenary adoption will trigger the next legislative step: trilogue negotiations with the Council of the European Union (Council) to agree on a final text. Once adopted, the AI Act will, under Parliament's text, become applicable 24 months after its entry into force (36 months under the Council's position), which is currently expected to be in the second half of 2025 at the earliest.
The highly anticipated (and hotly debated) proposal for an AI Act was published by the European Commission (Commission) in April 2021. In December 2022, the Council – which brings together the EU Member States' governments – adopted its negotiating position in its general approach. Whilst Parliament's position will not in itself determine the final form of the AI Act, it does lift the veil on Parliament's approach, which departs from the Council's general approach on several key issues, notably definitions, technical specifications, and scope. Further changes are likely to emerge from the trilogue negotiations in the coming months.
In this Sidley Update, we examine the key changes and developments from the Commission's April 2021 proposal, through the Council's general approach, to the position now understood to have been agreed by Parliament. We focus on aspects that impact life sciences companies:
- Definition of AI: Parliament now defines AI systems more broadly as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions that influence physical or virtual environments.” The definition closely tracks that of the Organization for Economic Cooperation and Development (OECD), with the stated aim of international harmonisation. Parliament has also followed the Council in removing Annex I of the Commission’s proposal, which listed techniques and approaches of software development. These are now listed in new recitals, keeping the concept dynamic enough to accommodate evolving machine learning approaches while the narrower operative definition provides more certainty and avoids the unintended inclusion of systems for which AI governance would not be appropriate.
- General Purpose AI: The Council introduced, and Parliament has retained, a new category of AI systems: General Purpose AI (GPAI). In Parliament’s proposal, GPAI is “an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed”. This is broader than the Council’s definition, which requires that the provider intend the system to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, and translation. Under Parliament’s proposal, certain requirements applicable to high-risk AI systems will also apply to GPAIs in certain circumstances: for example, where a GPAI provider substantially modifies a system that has already been placed on the market or put into service and has not previously been classified as high-risk, in such a way that it becomes a high-risk AI system.
- Generative AI/Foundation models: New wording in the recitals and a new Article 28b clarify that generative AI – a subcategory of GPAI that can create content – now falls within the scope of the AI Act. A further key concept in Parliament’s proposal is that of “foundation models”: models trained on broad data that can be built upon to produce AI systems with either a specific intended purpose or a general purpose, such that a single foundation model may be reused in “countless” downstream AI systems. Foundation models are understood to be capable of developing into generative AI systems and other subcategories of GPAI systems. Accordingly, Parliament proposes extensive new ex ante obligations on providers of foundation models, backed by significant fines for breaches.
- Scope of the AI Act: Parliament proposes that the AI Act apply to providers and deployers of AI systems whose output is intended to be used in the EU (unlike the Council’s version, under which the Act would apply to any output produced by an AI system that is used in the EU, regardless of the provider’s intention). Life sciences companies based outside the EU whose products are aimed at non-EU markets should therefore consider declaring the intended territorial scope of their AI systems, and of products embedding AI systems, in order to remain outside the scope of the AI Act.
- High-risk AI systems: Parliament has maintained the Council’s definition of high-risk AI systems as AI systems that (i) are intended for use as safety components of products, or are themselves products, covered by other Union legislation; and (ii) are subject to third-party conformity assessment related to health and safety risks. In addition, Parliament’s proposal classifies as high-risk those systems that pose a significant risk of harm to fundamental rights. On this definition, many AI systems used in the life sciences sector will be classified as high-risk, such as medical devices (including software) and in vitro diagnostics that are subject to a conformity assessment procedure by a Notified Body. Although it has maintained the definition, Parliament has proposed significant changes to the high-risk classification of AI systems:
- Providers who believe their AI system does not pose a significant risk of harm to people’s health, safety, or fundamental rights may submit a “reasoned request” to the competent national supervisory authority or to the AI Office (the EU body tasked with streamlining enforcement at the EU level), asking that the AI system be exempted from the high-risk obligations.
- A revised list of high-risk systems is included in Annex III and could now capture generative AI systems.
- Sandboxes: The AI Act introduces regulatory sandboxes, allowing companies to explore and experiment with new and innovative products, services, or business models under a regulator’s supervision. Parliament proposes that AI providers be able to participate in a sandbox not only to develop their innovations in a safe and controlled environment but also to obtain clarification where there is uncertainty as to an AI system’s risk classification under the AI Act. Companies taking advantage of a sandbox would benefit from a presumption of compliance, but not from an exemption from regulatory obligations or liability.
- Technical standards: Harmonized standards from CEN-CENELEC are not expected before early 2025. The standards will cover aspects such as risk management and training data quality. In the meantime, the Council asks the Commission to issue common specifications for high-risk system requirements and for GPAIs. Parliament’s position supports this request while proposing an additional layer of consultation with the AI Office and the AI Advisory Forum before common specifications are issued.
- Providers: AI system distributors, importers, and other third parties will be considered providers of high-risk AI systems if they substantially modify an AI system in such a way that it falls within the high-risk classification. The “substantial modification” condition remains to be clarified and, in Parliament’s proposal, also applies to GPAIs. This will be relevant for life sciences companies that modify GPAIs for their own operations, for example by adapting a GPAI system into a disease-agnostic simulation capable of predicting treatment outcomes (see the illustrative sketch after this list). Such companies would need to consider whether adequate compliance systems are already in place or need to be established, and to obtain any other necessary information from third parties in the supply chain, such as the AI system manufacturer.
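For readers on the technical side, the following minimal sketch illustrates, in purely hypothetical terms, the kind of downstream adaptation described above: a company takes a general-purpose pretrained model and attaches a task-specific component to predict treatment outcomes. All class and variable names are invented for illustration, the backbone is a toy stand-in rather than any real foundation model, and whether such an adaptation amounts to a “substantial modification” under the AI Act is a legal question that the sketch does not answer.

```python
# Hypothetical sketch only: re-purposing a general-purpose (foundation)
# model for a specific medical use. All names here are invented.

import torch
import torch.nn as nn

class FoundationBackbone(nn.Module):
    """Toy stand-in for a provider's general-purpose pretrained model."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(32, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.encoder(x)

class TreatmentOutcomePredictor(nn.Module):
    """Downstream system: attaches a task-specific head to the backbone.
    Under Parliament's text, adapting a GPAI to a medical purpose like
    this could bring the adapter within the 'provider' obligations."""
    def __init__(self, backbone: nn.Module, dim: int = 128):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # reuse the general-purpose model unchanged
        self.head = nn.Linear(dim, 2)        # new, purpose-specific component

    def forward(self, patient_features: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(patient_features))

# Usage: fine-tune only the new head on (synthetic) outcome labels.
model = TreatmentOutcomePredictor(FoundationBackbone())
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
x, y = torch.randn(8, 32), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```

The engineering effort involved in such an adaptation may be modest; the regulatory significance would lie in the change of the system's intended purpose to a medical use and in who, as a result, bears the provider obligations.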
Next steps: Parliament’s committees are expected to vote on the agreed text in the coming weeks, and the Parliament plenary is expected to adopt it in June. Trilogue meetings will begin thereafter. Once the trilogue negotiations produce a final text, regulators, the life sciences industry, and the AI community alike will be on the lookout for reliable, safe, and compliant AI solutions, so both medical device and pharmaceutical companies should be well prepared. With guidelines and harmonized standards for AI risk management already emerging, companies are well advised to begin implementing policies and risk management frameworks now.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.