Following the EU’s increased focus on generative AI with the inclusion of foundation models and generative AI in the latest text of the EU AI Act (see our post here), the UK now follows suit, with the UK’s Information Commissioner’s Office (“ICO”) communicating on 15 June 2023 its intention to “review key businesses’ use of generative AI.” The ICO warned businesses not to be “blind to AI risks,” especially in a “rush to see opportunity” with generative AI. Generative AI is capable of generating content such as complex text, images, audio or video. It is viewed as involving more risk than other AI models because it can be used across different sectors (e.g., law enforcement, immigration, employment, insurance and health) and so has a greater impact across society – including in relation to vulnerable groups.
UK Approach to AI Regulation
In our previous blog post (available here), we noted that the UK had set out its “light touch” approach to AI regulation in its “Pro-Innovation” AI Whitepaper. The Whitepaper was notable for its decentralised, sector-specific approach, with no new legislation proposed. Instead, the Whitepaper promoted AI regulation through sector-specific, principles-based guidance and existing laws. This approach contrasts with the EU’s proposed AI Act, which is a standalone piece of horizontal legislation regulating all AI systems, irrespective of industry. However, there have been recent indications that the UK is reconsidering its approach and may shift to more intensive regulation, potentially even AI-specific legislation. For instance, the UK Prime Minister held a meeting with key generative AI providers and emphasized responsible use of AI. The UK Government has also recently communicated its plans to host a global AI safety summit, and a UK regulator has criticised the UK’s current lack of adequate resources to control the use of AI.
UK ICO and AI
As the UK’s Data Protection Authority, the ICO will focus on reviewing data protection-related risks associated with generative AI and noted that “there can be no excuse for ignoring risks to people’s rights and freedoms before [a generative AI] rollout.” In particular, Stephen Almond, the ICO’s Executive Director of Regulatory Risk noted that businesses should: “[s]pend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won’t upset customers or regulators.”
The UK ICO has recently updated its Guidance on AI and data protection, along with the accompanying AI and data protection risk toolkit, which helps businesses assess and mitigate AI risks. Further, the ICO has issued concise guidance in the form of eight questions that organizations developing or using generative AI should consider. These questions cover key points, including whether the organization has a lawful basis under the GDPR for any data processing relating to generative AI, and how it will mitigate security risks.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.