Latest Developments on AI in the EU: the Saga Continues

EU AI Act

Up until recently, political agreement on the final text of the EU Artificial Intelligence Regulation (AI Act) was expected on 6 December 2023. However, recent developments have indicated roadblocks in the negotiations over three key discussion points – please see our previous blog post here. EU officials are reported to be meeting twice this week to discuss a compromise mandate on EU governments’ position on the text, in preparation for the political meeting on 6 December.

Since then, the following key developments regarding the EU AI Act’s text have been reported:

1. Remote Biometric Identification in the AI Act

EU legislators are reported to have proposed a change regarding the use of real-time remote biometric identification systems in public spaces, which seemingly brings the final text closer to the version agreed upon by the Council in December 2022. Initially, the draft AI Act exempted law enforcement, under certain conditions, from the prohibition on using real-time remote biometric identification systems in public spaces. The EU Parliament, however, imposed a blanket ban on the use of such technologies in its June 2023 text, noting that there should be no exemptions, even for law enforcement, as such use could give rise to mass surveillance that would significantly impact the fundamental right to privacy. EU legislators are now reported to be negotiating a provision that would again allow law enforcement to use these technologies, but limited to specific, narrow, particularly high-risk use cases, such as where the technology is deemed strictly necessary for “the targeted search for specific victims of a serious crime,” which would include terrorism, human trafficking, and child sexual exploitation. Any such use would also be subject to prior judicial authorization, and the competent national authority would have to be notified of each use of the technology. In exchange for toning down the ban, the co-rapporteurs are reported to be suggesting the inclusion of all other remote biometric identification systems in the high-risk category under the AI Act (including where used in private spaces).

2. National Security Exemption

In light of the upcoming Olympics in 2024, France is reported to be pushing for a broad(er) national security exemption to the use of AI. Previous iterations of the AI Act’s text already provided that AI used or commercialized for military, defence, and national security purposes falls outside the scope of the AI Act – in line with the common principle that these areas fall outside the scope of EU law and are competencies of the EU Member States. It remains to be seen whether a broader exemption will be included in light of France’s interest in using AI for security purposes at the Olympics hosted in Paris in 2024.

3. Foundation Models – Two-Tiered Approach

One of the most challenging issues in reaching political agreement on the draft AI Act is how to regulate foundation AI models (which include generative AI). The European Commission is reported to have circulated a new proposal adopting a two-tiered approach that differentiates upstream from downstream AI models. The proposal would distinguish foundation AI models (upstream models) from general-purpose AI (GPAI) systems (downstream models) that are built on top of those models, such as AI apps able to create new content, with the upstream models mainly being subject only to transparency requirements and the downstream models being subject to additional obligations.

During a meeting on 24 November 2023, France, Germany, and Italy are reported to have opposed this approach, arguing that imposing any type of regulation on foundation AI models runs counter to the AI Act’s purpose, which is to regulate specific AI use cases rather than specific AI technologies. Instead, they propose that developers of foundation models be required to define “model cards” that, based on best practice, describe the functioning, capabilities, and limitations of their machine learning models. Furthermore, an AI Governance Body would be established to help develop guidelines and to verify the application of the model cards. These three Member States are reported to have reached a trilateral agreement on mandatory self-regulation of foundation AI through codes of conduct. Given their nascent foundation AI industries, France, Italy, and Germany are concerned that over-regulating foundation AI models could harm their ‘home-grown’ AI industries. In addition, the three Member States take the position that initially no sanctions should be imposed for violations of the code of conduct – a key differentiator compared to the (potentially high) sanctions imposed under the AI Act.

Furthermore, tech companies have also opposed the regulation of foundation AI models in the AI Act, citing concerns about potential harm resulting from “over-regulation.”

4. AI Governance – AI Office and AI Board

Legislators are also discussing AI governance, including the establishment of a European AI Office tasked with overseeing enforcement with respect to GPAI models and conducting evaluations of GPAI models to assess compliance; the AI Office would, however, rely on EU Member States to facilitate its tasks.

Furthermore, an AI Board with one representative per EU Member State would be established, with the main task of ensuring consistent application of the draft AI Act throughout the EU and advising on secondary legislation, codes of conduct, and technical standards. Certain observers would participate in the AI Board, such as the European Data Protection Supervisor, the Fundamental Rights Agency, and the EU Cybersecurity Agency ENISA.

This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.