Policymakers around the world took significant steps toward regulating artificial intelligence (AI) in 2023. Spurred by the launch of revolutionary large language models such as OpenAI’s GPT series, debates over the benefits and risks of AI moved to the foreground of political thought. Indeed, over the past year, AI discourse dominated legislative forums, editorial pages, and social media platforms. And two global races have kicked into high gear: who will develop and deploy the most cutting-edge, possibly risky AI models, and who will govern them? In this article, published by the Lawfare Institute in cooperation with Brookings, Sidley lawyers Alan Charles Raul and Alexandra Mushka suggest that “the United States intends to run ahead of the field on AI governance, analogous to U.S. leadership on cybersecurity rules and governance—and unlike the policy void on privacy that the federal government has allowed the EU to fill.”
Join Sidley and OneTrust DataGuidance on February 5, 2024, for a webinar on the recently published, near-final text of the EU AI Act. This discussion with industry panelists will cover initial reactions to the text of the EU AI Act following finalization by EU legislators and examine the key points in the AI Act that businesses need to understand.
On 30 November 2023, the EU reached political agreement on the Cyber Resilience Act (“CRA”), the first legislation globally to regulate cybersecurity for digital and connected products that are designed, developed, produced, and made available on the EU market. The CRA was originally proposed by the European Commission in September 2022. Alongside the recently adopted Data Act, Digital Operational Resilience Act (“DORA”), Critical Entities Resilience Act (“CER”), Network and Information Systems Security 2 Directive (“NISD2”), and Data Governance Act, the CRA builds on the EU Data and Cyber Strategies, and complements upcoming certification schemes such as the EU Cloud Services Scheme (“EUCS”) and the EU ICT Products Scheme (“EUCC”). It responds to an increase in cyber-attacks in the EU over the last few years – in particular the rise in software supply chain attacks, which have tripled over the last year – as well as the significant rise in digital and connected products in daily life, which magnifies the risk of such attacks.
Australia’s Digital Platform Regulators Forum (DP-REG) has recently released two working papers relevant to developing AI policy on the global stage, marking the launch of its website: Literature summary: Harms and risks of algorithms (Algorithms WP) and Examination of technology: Large language models used in generative artificial intelligence (LLM WP) (together, the Working Papers). The DP-REG, which comprises prominent Australian regulators across multiple industries, was established to ensure a collaborative and cohesive approach to the regulation of digital platform technologies in Australia. The Working Papers focus on understanding the risks and harms, as well as evaluating the benefits, of algorithms and generative artificial intelligence, and provide recommendations for the Australian Federal Government’s response to AI. They also serve as a useful resource for Australian industry and the public as these technologies become increasingly integrated into the Australian market. Interestingly, the recommendations set out in the Working Papers broadly align with the requirements of the EU’s Artificial Intelligence Act, on which political agreement was reached on 8 December 2023, suggesting that Australia’s proposed approach to regulating AI may be inspired, at least in part, by Europe’s AI regulatory framework.
On 8 December 2023, following three days of lengthy and intensive negotiations, EU legislators reached political agreement on the world’s first stand-alone law regulating AI: the EU’s AI Act. The EU considers the AI Act one of its key pieces of legislation, fundamental to ensuring the EU becomes the world’s leading digital economy.
Over one hundred representatives from across the globe convened in the UK on 1-2 November 2023 at the Global AI Safety Summit. The focus of the Summit was to discuss how best to manage the risks posed by the most recent advances in AI. However, it was the “Bletchley Declaration”, announced at the start of the Summit, that truly emphasized the significance governments are attributing to these issues.
EU AI Act
Until recently, political agreement on the final text of the EU Artificial Intelligence Regulation (AI Act) was expected on 6 December 2023. However, the latest developments indicated roadblocks in the negotiations due to three key discussion points – please see our previous blog post here. EU officials are reported to be meeting twice this week to discuss a compromise mandate on EU governments’ position on the text, in preparation for the political meeting on 6 December.
On 27 November 2023, the Council adopted the final text of the Data Act, which facilitates (and, in certain cases, mandates) access to personal and non-personal data. The Data Act was originally proposed by the European Commission in 2022. Alongside the EU Data Governance Act (which came into force in June 2022), the Data Act forms part of the EU’s Data Strategy, which aims to “make the EU a leader in a data-driven society”.
Join Sidley and OneTrust DataGuidance for a webinar on the EU AI Act. This discussion with industry panelists will cover initial reactions to the (anticipated) political agreement on the EU AI Act following key negotiations by the European legislative bodies on December 6, 2023.
The International Association of Privacy Professionals (IAPP) held its annual Europe Data Protection Congress in Brussels on November 15-16, 2023. While the Congress covered a wide range of topics related to privacy, cybersecurity, and the regulation of data more broadly, a recurring theme throughout was, unsurprisingly, the responsible development, commercialization, and use of AI. In this regard, panelists explored (among other things) what practical and effective AI governance may look like, the role of a Digital Ethics Officer, how to strike a balance between enabling innovation and safeguarding individual rights, and how AI may be used to automate data breach detection and response.