EU Council Publishes Changes to Artificial Intelligence Act Proposal
On 29 November 2021, the Slovenian Presidency (the “Presidency”) of the Council of the European Union published its compromise text (“Compromise Text”) on the European Union’s (“EU”) draft Artificial Intelligence Act (“AI Act” or “Act”), alongside a progress report on the Act. While the overall structure of the AI Act and many of its key provisions (including those relating to potential fines for non-compliance) remain the same, the Compromise Text proposes some significant changes, which we note below, including, for example, a new Article on general purpose AI systems.
Overall, notwithstanding the importance of considering the Act’s provisions in granular terms, we suggest that businesses view this latest development as an important one, highlighting an increasingly dynamic picture when it comes to AI regulation across the globe. Of particular note are the United Kingdom’s (“UK”) Central Digital and Data Office, which recently released its new Algorithmic Transparency Standard, and the United States Food and Drug Administration (“FDA”), which published its guiding principles on “Good Machine Learning Practice for Medical Device Development”. The UK Centre for Data Ethics and Innovation has also published a roadmap to an effective AI assurance ecosystem.
Overview of the AI Act
The draft AI Act was first published by the European Commission on 21 April 2021 and was the first regulation of its kind, centralising Member State obligations in the design and deployment of AI. The Act is part of a broader package of legislation on AI, with the ultimate goal of strengthening Europe’s potential to compete in AI at a global level. The Compromise Text is similar in structure and substance to what was first seen in April 2021. The Act still takes a risk-based approach, categorising all AI into: (1) unacceptable risk – activities that are prohibited under the Act, such as those relating to social scoring; (2) high-risk activities, e.g. those relating to medical devices and consumer creditworthiness; and (3) low-risk activities, such as chatbots. As before, the legal obligations imposed by the Act decrease as the perceived risk posed by the AI system decreases. Activities not mentioned in the Act are deemed to be of “minimal risk” and are not regulated.
The Compromise Text also retains the Act’s wide scope of application: among other things, the Act applies where the output of an AI system is used in the EU, even if the organisation behind it has no commercial presence within the EU. This continues to be a controversial point for organisations, which will not always know where the output of their AI technologies will be used.
As stated above, the Compromise Text preserves the proposed fines for non-compliance with the AI Act of up to €30 million or 6% of worldwide turnover, whichever is higher.
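To illustrate the “whichever is higher” mechanic: the fixed €30 million floor applies only where 6% of turnover would fall below it. A minimal sketch, with purely hypothetical turnover figures:

```python
def maximum_fine(worldwide_turnover_eur: float) -> float:
    """Return the higher of EUR 30 million or 6% of worldwide turnover."""
    return max(30_000_000.0, 0.06 * worldwide_turnover_eur)

print(maximum_fine(200_000_000))    # 30000000.0 (the EUR 30m floor applies)
print(maximum_fine(1_000_000_000))  # 60000000.0 (the 6% figure applies)
```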
Subject matter & Scope – Key Proposed Changes in the Compromise Text
Aim of the AI Act: The Compromise Text provides that the Act is to include “measures in support of innovation”, including through the use of AI regulatory sandboxes. No doubt this is an effort by the EU to show that regulation is not the enemy of innovation but that the two can be complementary. This is a stance the UK Information Commissioner’s Office (“ICO”) has long taken, and the ICO has itself used sandboxes to test new technologies for regulatory compliance for some years, with success.
Scope of AI Act: The Compromise Text defines which areas fall outside the parameters of the Act and explicitly excludes, among other things, AI systems used solely for the purpose of scientific research. This is a significant line to draw, and it is seemingly in line with the pro-scientific-research stance recently taken in the UK’s National AI Strategy.
The Compromise Text also confirms that general purpose AI systems – understood as AI systems that are able to perform generally applicable functions such as image or speech recognition, audio or video generation, pattern detection, question answering, and translation – should not be considered as falling within the scope of the AI Act.
Definitions: The definition of “AI systems” has been amended in an attempt to better differentiate AI from other information technology, and the definition of “provider” has been amended to emphasise that the AI Act is intended to capture only the commercial placing of AI products on the market. This does not mean that the Act’s prohibited practices never apply to the mere use of AI (certain AI practices are prohibited both in use and in commercial placement on the market); rather, those who contribute to the development of AI through research and product development are unlikely to face fines, as they will not be considered providers for the purposes of the Act.
Social scoring: The Compromise Text extends the prohibition on the use of AI systems for the purposes of social scoring so that it applies not only to public authorities but to private entities as well. The definition of prohibited use has also been extended to include exploiting a “social or economic situation.” The premise behind social scoring is that an AI system assigns a starting score to every individual, which increases or decreases depending on certain actions or behaviours. For example, failing to pay a speeding fine would decrease your score, while cycling to work might increase it. Your final score would then feed into certain decisions or benefits, e.g. a high score might entitle you to special public benefits. The idea has been controversial because of the potential for AI systems to use criteria that are not necessarily relevant or fair in determining the final score: clearly, your gender should not affect whether you get a mortgage to buy a house. Interestingly, the Act draws a distinction between social scoring and “lawful evaluation practices of natural persons”, permitting the latter. As a result, in principle, the use of AI to process an individual’s financial information to ascertain their eligibility for insurance policies may be permitted, albeit this is an area which, according to the Act, deserves special consideration and is “high risk” because of the “serious consequences” and the potential for “financial exclusion and discrimination”.
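To make the mechanism concrete, below is a minimal, purely hypothetical sketch of the scoring logic described above. The behaviours and weights are invented for illustration and do not reflect any real or proposed system:

```python
# Hypothetical illustration of the social-scoring mechanism described
# above; all behaviours and weights are invented for this sketch.

BASE_SCORE = 100

# Illustrative behaviour adjustments (assumed values).
ADJUSTMENTS = {
    "unpaid_speeding_fine": -10,
    "cycles_to_work": +5,
}

def social_score(behaviours: list[str]) -> int:
    """Start from a base score and apply an adjustment per behaviour."""
    return BASE_SCORE + sum(ADJUSTMENTS.get(b, 0) for b in behaviours)

# The final score would then gate decisions or benefits -- the step the
# Act targets, since the criteria may be irrelevant or discriminatory.
print(social_score(["unpaid_speeding_fine", "cycles_to_work"]))  # 95
```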
High-risk systems: The Compromise Text provides further detail on the various obligations that providers of high-risk AI systems must adhere to. There is now a more developed, well-structured set of Annexes giving further guidance on the specific obligations relating to high-risk systems, as follows:
Annex III, which sets out “high-risk” areas: Annex III outlines eight areas in which the use of AI will be considered “high risk”, namely AI systems relating to:
- biometric systems;
- critical infrastructure and protection of the environment;
- education and vocational training;
- employment, workers management and access to self-employment;
- access to and enjoyment of private services and public services and benefits;
- law enforcement;
- migration, asylum and border control management; and
- administration of justice and democratic processes.
Moreover, there are new instances in which a system will be considered “high-risk”. Notably, the EU has now included AI systems affecting the protection of the environment as “high-risk”. Interestingly, the eight areas can only be further defined in future, and cannot be removed wholesale; the list of high-risk systems is to be reviewed every two years.
Annex IV, which sets out the technical documentation required for “high-risk” AI systems: A new Annex IV of the Act provides further detail on the obligation for providers of high-risk AI systems to register with the EU and provide “technical documentation” for any high-risk AI system. Such documentation should include a general description of the AI system and its intended purpose, but also requires a detailed assessment of the risks associated with the system in question. It will be especially challenging for AI companies to demonstrate an adequate “assessment of human oversight measures” and to show that they have sufficient methods of “monitoring, functioning and control of the AI system”, including a means of considering the “degrees of accuracy for specific persons or groups of persons on which the system is intended to be used”.
Outstanding issues, including when the Act may come into force
While the Compromise Text provides some clarity, certain issues remain unresolved. For one thing, much of the Act is abstract, especially in relation to regulating high-risk AI systems, and practical guidance will need to be produced. Moreover, the definitions will undoubtedly continue to be debated, as many EU countries are reportedly concerned about whether the appropriate parties along the value chain of AI systems will be caught by the Act. We also await much-needed guidance on how this legislation interplays with other EU legislation on data protection, law enforcement and product liability. Finally, questions remain as to when the Act will be finally adopted and actually apply to organisations. The GDPR was proposed in 2012, adopted in 2016 and only became applicable in 2018, so perhaps that provides some indication of the timeline.
Other global developments relating to AI
Despite this uncertainty, the progress of the AI Act remains significant, particularly against the backdrop of global regulatory developments in this area. For example, the UK has become one of the first countries to define AI in legislation, in its National Security and Investment Act 2021; it has published a National AI Strategy; and, most recently, on 29 November 2021, it launched an Algorithmic Transparency Standard. Similarly, the FDA, in collaboration with Health Canada and the UK’s Medicines and Healthcare products Regulatory Agency (“MHRA”), released guiding principles on Good Machine Learning Practice for Medical Device Development on 27 October 2021. The principles include leveraging multidisciplinary expertise throughout the total product life cycle and implementing good software engineering and security practices. Looking at the bigger picture, AI regulation is most definitely here to stay, and businesses should consider how they will comply with new guidance and legislation. Further, the EU and other national regulators are now more consistently emphasising that their legislation seeks to promote innovation in AI, so this should also be an exciting time for businesses to consider how they can unlock value by using AI in a lawful way.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.