UK Sets Out Its “Pro-Innovation” Approach To AI Regulation
On 29 March 2023, the UK’s Department for Science, Innovation and Technology (“DSIT”) published its long-awaited White Paper on its “pro-innovation approach to AI regulation” (the “White Paper”), along with a corresponding impact assessment. The White Paper builds on the “proportionate, light touch and forward-looking” approach to AI regulation set out in the policy paper published in July 2022. Importantly, the UK has decided to take a different approach to regulating AI compared to the EU, opting for a decentralised, sector-specific approach, with no new legislation expected at this time. Instead, the UK will regulate AI primarily through sector-specific, principles-based guidance and existing laws, with an emphasis on an agile and innovation-friendly approach. This is in significant contrast to the EU’s proposed AI Act, which is a standalone piece of horizontal legislation regulating all AI systems, irrespective of industry.
On 11 April 2023, the UK’s Information Commissioner’s Office (“ICO”) responded to the White Paper. Broadly speaking, the ICO welcomed the DSIT’s approach, but also made certain recommendations and requests for clarification.
We set out below the key takeaways from the White Paper (including comments made by the UK’s ICO), how the UK’s form of AI regulation may interplay with and compare to the EU’s approach, and what to expect next.
Key Takeaways
- Scope: The term “AI systems” is not generally defined in the White Paper (unlike in the EU AI Act). Instead, two main characteristics of AI systems are identified as generating the need for a bespoke regulatory response:
- Adaptivity: AI systems are trained to infer patterns that are not easily discernible to or envisioned by humans; and
- Autonomy: AI systems can make decisions without the express intent or ongoing control of a human.
It is unclear at this time whether the lack of a general definition of an AI system will create legal uncertainty – and this was not something the ICO specifically commented on. The DSIT suggests that its approach recognises that there is no consensus on how to define the term “AI system” and allows for a more flexible understanding, meaning that regulations and guidance can move with technological developments. Importantly, the above UK explanation of AI systems does not adopt the risk-based approach the EU has chosen with the EU AI Act.
- Extra-Territorial Application: The White Paper makes clear that the framework will apply to the whole of the UK. Given that no new legislation is being introduced, it is less clear whether the framework will also apply to businesses outside of the UK. The White Paper states that the DSIT will rely on interactions with existing legislation on reserved matters, such as the UK’s Data Protection Act 2018 and the Equality Act 2010, to implement its framework, and it does not intend to “alter” the territorial application of this existing legislation. Questions therefore remain regarding what this means for laws like the UK GDPR, which do have extra-territorial impact on companies outside of the UK.
The White Paper makes clear that, regardless of the territorial scope of AI regulation, the DSIT intends to be outward-looking and to continue collaborating with international players on AI. In particular, the White Paper sets out multilateral and bilateral engagement efforts on AI, including through the OECD AI Governance Working Party and the Council of Europe Committee on AI.
- Principles-Based Approach: The White Paper aims to achieve consistency within its sector-specific approach to regulating AI by establishing an overarching set of five principles (the “Principles”) that existing regulators can apply across industries, but within their specific areas of expertise. There is some acknowledgment that this “initial” approach may need to be revised, with the DSIT anticipating that it will ultimately introduce a statutory duty on regulators requiring them to have due regard to the Principles. The five Principles (which are broadly aligned with those in the EU’s AI Act) are:
- Safety, security and robustness: certain technical standards and good practices should be incorporated into guidance, e.g., the UK National Cyber Security Centre’s Principles for the security of machine learning.
- Appropriate transparency and explainability: AI systems should be transparent and explainable, e.g., systems should include detailed and clear instructions for use.
- Fairness: this principle will not go beyond “legally required fairness”, i.e., AI systems must be developed in a manner that complies with existing laws and should not lead to bias or discrimination.
- Accountability and governance: effective and appropriate oversight must be demonstrated at every stage of the AI system’s lifecycle.
- Contestability and redress: the framework will not create new rights or new routes to redress for individuals harmed by AI; instead, existing regulators will be expected to clarify existing routes for contestability and redress. This diverges from the EU, which has proposed adapting civil liability rules to AI systems in its draft AI Liability Directive.
The ICO provided detailed comments on the Principles, noting the parallels between the AI Principles and those under the GDPR. In particular, the ICO called for clarity around the Principle of contestability and redress, noting that it is typically organisations using AI (and not regulators) which have oversight of their own systems and should be expected to clarify routes to contestability.
- Sector-Focused Guidance Expected In The Next 6-12 Months: The Principles will be turned into sector-focused guidance by regulators. Indeed, the speed at which the UK Medicines and Healthcare products Regulatory Agency (“MHRA”) published its Guidance on Software and AI as a Medical Device shortly after the publication of the White Paper is potentially indicative of the agility and flexibility with which regulators will be empowered to adopt positions on appropriate regulation. The White Paper emphasises collaboration between UK regulators, although the ICO commented that further guidance will be needed on how this collaboration will work in practice. It will be interesting to see how this new guidance interplays with the existing guidance available – including the ICO’s guidance on AI and Data Protection, which was refreshed only last month.
- Generative and General Purpose AI Systems: The White Paper acknowledges the debate around the use of generative and general purpose AI systems, but unfortunately provides little detail on how to regulate them. The ICO recently commented on generative AI systems with a blog post on “Questions generative AI developers and users need to ask”.
- Regulatory Sandboxes: The DSIT confirms that it will offer £2 million of funding for regulatory AI sandboxes to support the development of new AI systems. In turn, the ICO dedicated several paragraphs of its response to the DSIT’s proposal to introduce a joint regulatory sandbox, which could bring together cross-sectoral regulatory advice. Drawing on its own experience of running a data protection sandbox, the ICO noted that: (i) the scope of the sandbox should go beyond AI systems to “digital innovation” more generally; (ii) the sandbox should align with AI development lifecycles; and (iii) support for businesses using the sandbox should vary according to the degree of innovation, the regulatory barriers faced and the potential for wider societal benefit.
- Sanctions: The White Paper does not set out prescriptive sanctions, monetary fines or other means through which individuals may seek redress, save for indicating that regulators will be expected to make such paths available. There is also no private right of action for AI-specific damages, again in contrast to the EU’s proposed AI Liability Directive.
Diverging Approaches to AI Regulation?
At this stage, it is difficult to assess and compare the diverging approaches of the EU and the UK to regulating AI, but it is notable that there are overlapping principles, as well as a central role given to harmonised technical standards in developing AI products. In addition, the UK’s intended uptake of the same technical standards, alongside its focus on interoperability and risk management, may well mean that the UK remains largely aligned with international AI frameworks.
Timeline – What Next?
The consultation on the White Paper will be open until 21 June 2023. This will be a good opportunity for stakeholders to share their views on the proposed framework.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.