Algorithms touch upon multiple aspects of digital life, and their use potentially falls within several separate – though converging – regulatory systems. More than ever, a ‘joined up’ approach is required to assess them, and the UK’s main regulators are working together to try to formulate a coherent policy, setting an interesting example that could be a template for global approaches to digital regulation.
On 28th April 2022, the UK’s Digital Regulation Cooperation Forum (DRCF)—a body bringing together four UK regulators—published two papers on algorithmic processing which focus on the risk/benefit analysis and the ways in which algorithms can be audited and regulated.
What is the DRCF?
The DRCF is a body which brings together four UK regulators tasked with regulating digital services: the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO), the Office of Communications (Ofcom) and the Financial Conduct Authority (FCA). The main objectives of the DRCF, as set out in its plan of work for 2022 to 2023, include:
- Coherence between regimes – where regulatory regimes intersect, the DRCF endeavours to resolve potential tensions, offering clarity for people and businesses;
- Collaboration on projects – the DRCF aims to work collaboratively on areas of common interest and to jointly address complex problems; and
- Capability building across regulators – by working together, the DRCF believes it can more efficiently develop and retain the right skills, knowledge, expertise and organisational capability to deliver effective digital regulation for people and businesses.
Key takeaways from the DRCF algorithm discussion papers
Algorithmic processing is one of several priority areas for strategic joint work between the DRCF members. The DRCF has produced two discussion papers based on feedback provided in a series of bilateral engagements with its members. Key takeaways from the DRCF’s discussion papers include:
- Algorithms offer many benefits, both to individuals and society, and such benefits can increase with continued responsible innovation.
- When consumers see evidence of and experience the benefits of algorithms (e.g., increased productivity and better ways of sourcing and summarising information), they trust the firms facilitating those benefits, in turn stimulating markets and driving economic growth.
- Harm arising from algorithms can occur both intentionally and inadvertently.
- Examples of intentional harm include automating spear phishing attacks and the creation of ‘deepfake’ content. More often, however, algorithm-induced harm is unintentional – e.g., where the underlying dataset used by the algorithm is unrepresentative and results in biased outcomes.
- Those procuring and/or using algorithms often know little about their origins and limitations.
- The purchasers and users of algorithms often have little understanding of how they were developed and how they perform in different contexts. As such, they face difficulties in mitigating associated risks.
- A lack of visibility can undermine accountability.
- Customers are frequently unaware that algorithms are being used, for example, as part of a lending assessment or when being recommended content online. This can make it difficult for people to exercise their rights (e.g., under the GDPR), and may mean that the operators of algorithms face insufficient scrutiny from data subjects, regulators, civil society and sometimes even from the business’s own leadership.
- Human oversight is not a fool-proof safeguard against harms.
- Human operators often struggle to interpret the results of algorithmic processing. Some also place too much faith in its effectiveness and do not adequately scrutinise its outputs.
- There are limitations to the DRCF members’ current understanding of the risks associated with algorithmic processing.
- Owing to the pace of innovation and the ever-increasing use cases for algorithmic processing, there are significant knowledge gaps among the DRCF members, as well as many misconceptions.
- There are several issues in the current – nascent – audit landscape.
- The algorithm audit landscape generally lacks agreed rules and standards – it is often unclear which standards an audit should follow – and auditors are frequently limited by a lack of access to the systems under review. Further, audit findings are sometimes not acted upon, and there are currently few avenues for seeking redress. Regulators could play an important role in developing and shaping solutions to address these issues.
Potential actions / next steps
These discussion papers identify a number of opportunities for the DRCF members to co-ordinate and collaborate to foster a more robust regulatory environment, which could include the potential development of algorithmic assessment practices, helping organisations communicate more information to consumers about how and where algorithmic systems are used, and engaging with researchers.
The DRCF’s consultation seeks input from stakeholders on the use of algorithms and how its members might best approach their use from a regulatory standpoint. Specifically, they have requested stakeholders’ views on the following:
- overall reflections on the findings of the paper on algorithmic processing;
- other issues the DRCF could focus on;
- the areas of focus in which the DRCF has the most potential to influence outcomes, and which of those it should prioritise;
- outputs consumers and individuals would find useful from the DRCF to assist them in navigating the algorithmic processing ecosystem in a way that serves their interests;
- evidence on the harms and benefits of algorithmic systems;
- advantages and disadvantages of each of the hypotheses outlined in the paper on algorithmic auditing, related to the potential role for regulators in the algorithmic audit landscape;
- hypotheses the DRCF should test and explore further; and
- other actions the DRCF should consider undertaking in the algorithmic auditing space.
It seems likely that this consultation will be foundational to future government and regulatory policy, making it a critical opportunity to contribute to the debate. The CMA’s inaugural Data, Technology and Analytics conference, taking place on 15-16 June, will explore many of these issues and will be covered in a future post.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.