EU Moving Closer to an AI Act?

On 24 October 2023, the European Parliament and EU Member States concluded a fourth round of trilogue discussions on the draft Artificial Intelligence Regulation (AI Act). Policymakers agreed on provisions for classifying high-risk AI systems and developed general guidance on the use of “enhanced” foundation models. However, the negotiations did not yield substantial progress on the provisions prohibiting certain uses of AI by law enforcement. The next round of trilogue discussions will take place on 6 December 2023.

EU AI Act: Current State of Play

Until recently, the expectation was that political agreement on the final text of the AI Act would be reached during the December discussions, with a view to finalization and adoption by year-end. Following a technical negotiation meeting held on 10 November 2023, however, agreement on the text appeared less certain, as a number of EU Member States – including France and Germany – opposed the text’s current regulation of foundation AI models, considering it too onerous and likely to impact their own ‘home-grown’ foundation AI companies.

However, on 15 November 2023, the AI Act’s co-rapporteur indicated that the AI Act is still on track to reach agreement on 6 December, although three key sticking points remain in relation to:

  1. the use of AI systems for biometric identification of individuals in publicly accessible spaces in the context of law enforcement. The current text of the AI Act prohibits all use of real-time remote biometric identification in publicly accessible spaces, whereas the European Commission’s original proposal contained an exemption from this prohibition where such identification was deemed necessary for law enforcement purposes. EU Member State governments appear aligned with the Commission’s position on this point, but the AI Act’s co-rapporteur considers that no concessions should be made in this regard, given the high risk of “mass surveillance”. Legislators currently believe this may be politically the most difficult point to overcome;
  2. the regulation of foundation AI models, which is considered the most challenging point to agree on from a technical perspective. The AI Act’s co-rapporteur considers it crucial to agree on the regulation of these models, as they could pose greater risks than traditional AI models due to their rapid pace of development and technical capabilities. At the same time, the co-rapporteur indicated that there are many interests at play and that the EU wants to be mindful not to hinder innovation. To this point, one proposal is to assess foundation AI models based on the amount of computing power used, the training involved, and the economic resources of the provider;
  3. AI governance and enforcement. The current AI Act text still foresees a decentralized enforcement model in which the majority of enforcement powers lie with competent EU Member State authorities. However, drawing on experience with GDPR enforcement, this model is now being revisited in favor of more EU-level enforcement and governance.

If agreement is not reached during the 6 December trilogue meeting, discussions on the text will move into early 2024. Alternatively, according to the AI Act’s co-rapporteur, one option is to “delegate” the areas on which agreement is not reached on 6 December to the planned EU AI Office (which, among other things, will have standard-setting responsibilities).

Once adopted, the AI Act would be the first horizontal standalone legislation worldwide to regulate AI systems, introducing rules for the safe and trustworthy placing on the EU market of products with an AI component. The draft AI Act currently defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions that influence physical or virtual environments.” According to the AI Act’s co-rapporteur, however, this definition is also still under discussion.

For more information on the draft AI Act, including the definitions of prohibited, high-risk, and foundational/generative AI systems and several other key provisions, please see our previous blog posts here and here.

EU AI Act May Lead to GDPR Overhaul

The EU General Data Protection Regulation 2016/679 (GDPR) and the EU AI Act pursue what some may see as different interests – one protects personal data, while the other regulates AI systems which, necessarily, require data to function. The EU AI Act contains provisions establishing that, where AI systems process personal data, such as in the form of training data, the GDPR continues to apply to that processing. However, there is recognition that AI systems, in particular foundation AI models, need to process large amounts of data in order to function. The EU recognizes that the GDPR may be an important roadblock to innovation and development in this area, because more often than not, the data sets fed into (foundation) AI models will contain (even a small amount of) personal data. This is the case in part because the GDPR sets a high threshold for anonymization and only excludes data from its scope where it is not possible ‘by means reasonably likely to be used’ to re-identify that data. Further, where a data set contains both non-personal and personal data that cannot be separated (a so-called ‘mixed data set’), the GDPR is considered to apply to the data set as a whole (i.e., including to the non-personal data).

Specific GDPR requirements that may hinder AI innovation and development include the requirement to establish a legal basis for the processing of personal data for AI purposes. The Confederation of European Data Protection Organisations (CEDPO) has issued a paper examining this issue, concluding that the most commonly used legal bases – consent, contract performance, and legitimate interests – may be difficult for AI developers/providers to rely on, in particular because most will not have a direct relationship with the individuals whose personal data are processed by the AI system. To support innovation in this area, the EU is now considering an overhaul of the GDPR to address these issues.

Procurement of AI by Public Organizations – AI Model Contract Clauses

In September 2023, the European Commission published AI model contractual clauses for use in the procurement of AI by public organizations (as users of AI) from external AI developers (as providers of AI) (AI Model Clauses). The AI Act imposes its most onerous set of obligations on AI system providers (i.e., those who put an AI system into service or place it on the EU market), although it imposes requirements on users of high-risk AI systems as well. The AI Model Clauses are intended to put in place a contractual framework that enables the public organization, as the AI user, to comply with the AI Act.

The AI Model Clauses comprise two sets of clauses: (1) a more onerous set for AI systems classified as ‘high risk’ under the AI Act (e.g., AI systems used in medical devices, the automotive industry, or children’s toys, or in specific use cases such as CV-sorting software for recruitment); and (2) a ‘light version’ for non-high-risk AI systems. The clauses are (currently) drafted for use by ‘public organizations’ when procuring AI systems, but private organizations may consider drawing inspiration from the AI Model Clauses when developing contracts for their own procurement of AI.

As the proposed AI Act has not yet been adopted, the European Commission highlights that public organizations wishing to use the AI Model Clauses may do so on a voluntary basis, following a case-by-case assessment, and may customize the clauses to the specific context of the procurement and the organization. It is unclear whether use of the AI Model Clauses will become mandatory following adoption of the AI Act or whether they can continue to be used on a voluntary basis – and whether their scope will be extended to cover private organizations as well.