ICO Publishes Its Strategic Approach to Regulating AI

On 30 April 2024, the UK’s Information Commissioner’s Office (“ICO”) published its strategic approach to regulating artificial intelligence (“AI”) (the “Strategy”), following the UK government’s request that key regulators set out their approach to AI regulation and how it aligns with the UK government’s earlier AI White Paper (see our previous blog post here). In its Strategy, the ICO sets out: (i) the opportunities and risks of AI; (ii) the role of data protection law; (iii) its work on AI; (iv) upcoming developments; and (v) its collaboration with other regulators. The publication of the ICO’s Strategy follows the recent publication of the Financial Conduct Authority’s (“FCA”) approach to regulating AI.

  • The opportunities and risks of AI: the Strategy focuses on the “undeniable” potential of AI, citing use cases from medicine to entertainment, and the many ways in which it can propel society forward. However, the Strategy also raises concerns over the following risks arising from the use of AI: fairness and bias; transparency and explainability; safety and security; and accountability and redress. It also highlights that existing technological risks will likely be exacerbated, while new risks will emerge. Given that the development and deployment of AI systems are often rooted in the processing of (personal) data, the ICO confirmed its view that these activities – throughout the AI supply chain – fall within its remit. The Strategy focuses on certain use cases in particular: (i) foundation models; (ii) high-risk AI applications; (iii) facial recognition technology and biometrics; and (iv) children and AI (a spotlight issue given recent developments like the Online Safety Act).
  • The role of data protection law: the Strategy explains how the principles set out in the government’s AI Regulation White Paper consultation “mirror to a large extent” the statutory principles which the ICO already oversees as the UK’s data protection authority. The ICO outlines in more detail how its principles are aligned with the AI Regulation White Paper principles (such as “safety, security, robustness” and “fairness”), while noting that the government’s approach is not designed to “duplicate, replace or contradict” regulators’ existing statutory definitions of similar principles.
  • Its work on AI: the Strategy also points out that AI is not a new technology and that the ICO has consequently already been “regulating this field for well over a decade,” citing as an example its 2014 report on Big Data, Artificial Intelligence, Machine Learning, and Data Protection. The Strategy describes: (i) the range of guidance the ICO has issued to date on AI, including on specific applications of AI (such as biometric recognition technology); (ii) the advice and support services it provides for “AI innovators,” which include the ‘Regulatory Sandbox’ and the ‘Innovation Hub’ (which partners with accelerators and incubators to mentor innovators); and (iii) both the regulatory action it can take, and that which it has already taken, against certain AI companies.
  • Upcoming developments: the Strategy highlights that AI – and specifically its application in biometric technologies – is one of the ICO’s three “focus areas” for 2024/2025, alongside children’s privacy and online tracking. The Strategy further details the next series of key developments that organisations can expect over the next months, which include: (i) a consultation series on generative AI; (ii) a consultation on biometric classification; (iii) updated guidance on AI and data protection; (iv) new Regulatory Sandbox projects; and (v) Innovation Hub projects.
  • Collaboration with other regulators: finally, the Strategy sets out the ICO’s ongoing work with other regulators, such as through the Digital Regulation Cooperation Forum, in which it works with the Competition and Markets Authority, the Office of Communications, and the FCA. The ICO also founded the Regulators and AI Working Group in 2019, which consists of many contributing regulators and public authorities, including the Department for Science, Innovation and Technology and the Advertising Standards Authority. The ICO has collaborated with the UK government, with standards bodies and with international partners, such as the Office of the Australian Information Commissioner, in various initiatives, including investigations of global AI companies. In its work with standards bodies, the ICO not only monitors the introduction of new standards but notes that it actively contributes to their development, for example, to ISO/IEC 42001:2023 on AI management systems and ISO/IEC 23894:2023 on AI risk management.

This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.