European Commission’s Public Consultation on Proposed EU Artificial Intelligence Regulatory Framework

On 19 February 2020, the European Commission published a white paper on the use of artificial intelligence (“AI”) in the EU (the “White Paper”). The White Paper forms part of Commission President Ursula von der Leyen’s digital strategy, one of the key pillars of her administration’s five-year term, and reflects a recognition that the EU has fallen behind the US and China with respect to the strategic deployment of AI. To tackle this problem, the Commission proposes a common EU approach to ‘speed up the uptake’ of AI in the EU, whilst also addressing the human and ethical implications of AI’s fast-growing use, including possible downsides such as opaque decision-making and hidden, embedded gender and racial discrimination. In order to achieve a common EU approach to AI, and to create “trustworthy” AI that can rival developments in the US and China, the Commission proposes the creation of a regulatory framework for AI.

Under the regulatory framework, AI applications deemed ‘high-risk’ will be distinguished from ‘non-high-risk’ AI applications. High-risk AI applications will be required to comply with additional safeguards and to undergo a prior conformity assessment, involving testing, inspections and potential checks of the algorithms and data sets used in the development stage, before the application is deployed. AI applications deemed non-high-risk will have the option of complying with the high-risk safeguards or with a similar set of requirements to be established specifically for the purposes of a voluntary scheme.

We set out below key points of the proposed regulatory framework:

  • A definition of AI that is “sufficiently flexible to accommodate technical progress while being precise enough to provide the necessary legal certainty.” The Commission’s starting assumption is that the proposed regulatory framework will apply to “products and services relying on AI”, and it clarifies that, for the purposes of any future policy-making discussions, the definition also includes data and algorithms.
  • The regulatory framework would follow a risk-based approach, ensuring that regulatory intervention is proportionate. Accordingly, the regulatory framework would distinguish between ‘high-risk’ and non-high-risk AI applications.
  • High-risk AI applications would be identified both by the sector in which the AI application is deployed and by whether its intended use involves significant risks to individuals with respect to safety, consumer rights and fundamental rights.
  • An AI application would be considered high-risk where it meets two cumulative criteria: (i) it is deployed in a sector where significant risks can be expected to occur (the Commission provides a non-exhaustive list including healthcare, transport, energy and parts of the public sector); and (ii) the AI application is used in that sector in a manner likely to cause significant risks to individuals (e.g., discrimination). A simple sketch of this cumulative test is set out after this list.
  • Notably, under the proposed regulatory framework, the use of AI applications for “remote biometric identification” purposes and other surveillance technologies would always be considered high-risk.
  • Where an AI application is considered high-risk under the proposed regulatory framework, the economic operator in the AI application supply chain “best placed” to address such risks would be subject to specified mandatory legal requirements, as further described in this post.
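
By way of illustration only, the following Python sketch encodes the cumulative two-criteria test described in the list above. The sector list, the `poses_significant_risk` flag and the treatment of remote biometric identification are assumptions drawn from the Commission’s non-exhaustive examples, not a legal test.

```python
# Illustrative sketch of the White Paper's cumulative two-criteria test.
# Sector names and flags below are assumptions for illustration only.

HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}

# Uses the White Paper would treat as high-risk regardless of sector.
ALWAYS_HIGH_RISK_USES = {"remote biometric identification"}

def is_high_risk(sector: str, use: str, poses_significant_risk: bool) -> bool:
    """Return True if an AI application would be 'high-risk': both criteria
    must normally be met, except for uses (such as remote biometric
    identification) that always qualify."""
    if use in ALWAYS_HIGH_RISK_USES:
        return True
    return sector in HIGH_RISK_SECTORS and poses_significant_risk

# A triage tool in healthcare that can affect patient safety: high-risk.
print(is_high_risk("healthcare", "triage", poses_significant_risk=True))    # True
# A spam filter in retail: neither criterion is met.
print(is_high_risk("retail", "spam filtering", poses_significant_risk=False))  # False
```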

Training Data

In relation to the data sets used to train AI applications, the following requirements could be imposed: (i) reasonable assurances that the subsequent use of AI-enabled products meets the standards set in applicable EU safety legislation; (ii) assurances that the subsequent use of AI-enabled products does not lead to discrimination (e.g., requirements for data sets to be sufficiently representative, ensuring that gender, ethnicity and other possible grounds of discrimination are appropriately reflected in those data sets); and (iii) assurances that privacy and personal data are adequately protected during the use of AI-enabled products, in line with the GDPR.
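
As an illustration of how an organisation might screen a training data set for representativeness under requirement (ii), the short Python sketch below flags under-represented groups. The `gender` column, the 30% threshold and the pandas-based approach are hypothetical assumptions, not requirements from the White Paper.

```python
import pandas as pd

# Hypothetical training set with a protected attribute; the relevant grounds
# (gender, ethnicity, etc.) depend on the application in question.
train = pd.DataFrame({
    "gender": ["female", "male", "male", "male",
               "female", "male", "male", "male"],
})

def check_representation(df: pd.DataFrame, column: str,
                         min_share: float = 0.30) -> dict:
    """Return groups whose share of the data set falls below min_share.
    The 30% threshold is an arbitrary illustration, not a legal standard."""
    shares = df[column].value_counts(normalize=True)
    return {group: round(share, 2)
            for group, share in shares.items() if share < min_share}

under_represented = check_representation(train, "gender")
if under_represented:
    # e.g., {'female': 0.25} -- a flag to rebalance before training
    print(f"Warning: under-represented groups: {under_represented}")
```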

Record Keeping

To tackle the complexity and opacity of certain AI systems (e.g., algorithms), retention requirements would be imposed for: (i) accurate records regarding the data set used to train and test the AI systems, including a description of the main characteristics and how the data set was selected; (ii) in certain circumstances, the data sets themselves; and (iii) documentation on the programming and training methodologies used to build, test and validate the AI applications, including, where applicable, measures to avoid bias and discrimination.
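
A compliance team might capture these retention duties in a structured record along the following lines. This is a minimal sketch, and the field names and example values are purely illustrative assumptions rather than terms drawn from the White Paper.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingDataRecord:
    """Hypothetical record covering the three retention duties above."""
    dataset_name: str
    description: str                      # main characteristics of the data set
    selection_method: str                 # how the data set was selected
    programming_methodology: str          # how the system was built and tested
    training_methodology: str             # how the system was trained and validated
    bias_mitigation_measures: list[str] = field(default_factory=list)
    retain_raw_data: bool = False         # the data sets themselves, where required

record = TrainingDataRecord(
    dataset_name="loan-applications-2019",
    description="Anonymised retail credit applications, 2017-2019",
    selection_method="Stratified sample across regions and age bands",
    programming_methodology="Gradient-boosted trees, hold-out test set",
    training_methodology="5-fold cross-validation with fairness audits",
    bias_mitigation_measures=["re-weighting by gender", "disparate impact test"],
)
print(record)
```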

Information Provision

In line with the transparency requirements under the GDPR (e.g., informing individuals of the purposes and legal basis for collecting their personal data), the following requirements would also be imposed: (i) providing individuals with clear, easily accessible information on the AI application’s capabilities and limitations, including its purpose, the conditions under which it is expected to function and the expected level of accuracy in achieving the specified purpose; and (ii) informing individuals, where it is not immediately apparent, that they are interacting with an AI system and not a human being.
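
To show how these two duties might be operationalised, the sketch below assembles a plain-language disclosure. The `ai_disclosure` helper, its parameters and the wording are hypothetical assumptions for illustration only.

```python
def ai_disclosure(purpose: str, operating_conditions: str, accuracy: str) -> str:
    """Assemble a plain-language notice on capabilities and limitations,
    including the fact that the user is interacting with an AI system."""
    return (
        "You are interacting with an automated AI system, not a human being.\n"
        f"Purpose: {purpose}\n"
        f"Operating conditions: {operating_conditions}\n"
        f"Expected accuracy: {accuracy}"
    )

print(ai_disclosure(
    purpose="Pre-screening of rental applications",
    operating_conditions="Valid only for applications submitted in English",
    accuracy="Approximately 90% agreement with human reviewers on test data",
))
```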

Robustness and Accuracy

In order to achieve trustworthy AI applications, high-risk AI systems should: (i) correctly reflect their level of accuracy during all life cycle phases; (ii) ensure that outcomes are reproducible; (iii) adequately deal with errors or inconsistencies during all life cycle phases; and (iv) be resilient against both overt attacks and covert attempts to manipulate their data or algorithms.
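
Requirement (ii), reproducibility, is often addressed in practice by controlling sources of randomness. The toy sketch below assumes a stand-in ‘training’ routine and shows that a fixed seed yields identical outcomes; it is illustrative only and not a method prescribed by the White Paper.

```python
import random

def train_model(data: list[float], seed: int) -> float:
    """Toy 'training' routine whose result depends only on data and seed."""
    rng = random.Random(seed)       # isolated, seeded random number generator
    noise = rng.gauss(0, 0.01)      # deterministic given the seed
    return sum(data) / len(data) + noise

run_1 = train_model([1.0, 2.0, 3.0], seed=42)
run_2 = train_model([1.0, 2.0, 3.0], seed=42)
assert run_1 == run_2, "outcome is not reproducible"
print(f"Reproducible outcome: {run_1:.4f}")
```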

Human Oversight

In order to achieve trustworthy, ethical and human-centric AI applications, the following non-exhaustive oversight mechanisms could be introduced: (i) the output of the AI system does not become effective unless it has been previously reviewed and validated by a human (e.g., a decision to deny entry into a building may be taken by a human only); (ii) the output of the AI system becomes immediately effective, but human intervention is ensured afterwards (e.g., the rejection of an application for a credit card is carried out by an AI system but is immediately subject to human review); and (iii) monitoring of the high-risk AI application while in operation, with the ability to intervene in real time and deactivate it (e.g., a stop button in a driverless car for use where a human determines that it is not safe to drive).
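
Mechanism (i) can be pictured as a gating pattern in which no automated output takes effect without human validation. The sketch below is a minimal illustration; the `Decision` type and `cautious_reviewer` function are hypothetical.

```python
from enum import Enum
from typing import Callable

class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"

def apply_with_oversight(ai_proposal: Decision,
                         human_review: Callable[[Decision], Decision]) -> Decision:
    """Mechanism (i): the AI output does not take effect until a human
    has reviewed it; the reviewer may confirm or override the proposal."""
    return human_review(ai_proposal)

# Hypothetical reviewer who overturns automated rejections pending manual checks.
def cautious_reviewer(proposal: Decision) -> Decision:
    print(f"Human reviewing AI proposal: {proposal.value}")
    return Decision.APPROVE if proposal is Decision.REJECT else proposal

effective = apply_with_oversight(Decision.REJECT, cautious_reviewer)
print(f"Effective decision: {effective.value}")
```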

Non-High-Risk Applications

As mentioned above, where an AI application does not meet the high-risk criteria, it could, under the proposed regulatory framework, opt in on a voluntary basis either to the mandatory legal requirements for high-risk AI applications or to a similar set of “voluntary labelling” requirements to be established. Under the scheme, economic operators who opt in and comply would be awarded a quality label for their AI applications, which could be used to differentiate those applications from those of competitors.

Conclusion

The proposed regulatory framework has significant implications for the deployment of AI applications in the EU, affecting manufacturers and providers both inside and outside the EU. The Commission has also expressed concern about the use of AI for remote biometric identification purposes in public spaces, and is launching a debate on the specific circumstances, if any, that might justify such use, and on common safeguards that could be adopted.

Given the recent debate among the Commission and other EU stakeholders (including data protection authorities) on whether to deploy AI applications in response to COVID-19, and the data protection implications of doing so, organisations involved at any stage of an AI application’s deployment should consider responding to the Commission’s public consultation.

The consultation closes on 14 June 2020.