Australia’s Digital Platform Regulators Release Working Papers on Risks and Harms Posed by Algorithms and Large Language Models

To mark the launch of its website, Australia’s Digital Platform Regulators Forum (DP-REG) has released two working papers relevant to the development of AI policy on the global stage: Literature summary: Harms and risks of algorithms (Algorithms WP) and Examination of technology: Large language models used in generative artificial intelligence (LLM WP) (together, the Working Papers). DP-REG, which comprises various prominent Australian regulators across multiple industries, was established to ensure a collaborative and cohesive approach to the regulation of digital platform technologies in Australia. The Working Papers focus on understanding the risks and harms, as well as evaluating the benefits, of algorithms and generative artificial intelligence, and provide recommendations for the Australian Federal Government’s response to AI. The Working Papers also serve as a useful resource for Australian industry and the public as these technologies become increasingly embedded in the Australian market. Interestingly, the recommendations set out in the Working Papers are broadly aligned with the requirements of the EU’s Artificial Intelligence Act, on which political agreement was reached on 8 December 2023, suggesting that Australia’s proposed approach to regulating AI may be inspired, at least in part, by Europe’s AI regulatory framework.

(more…)

EU Reaches Historic Agreement on AI Act

On 8 December 2023, following three days of lengthy and intensive negotiations, EU legislators reached political agreement on the world’s first stand-alone law regulating AI: the EU’s AI Act. The EU considers the AI Act one of its key pieces of legislation, fundamental to ensuring the EU becomes the world’s leading digital economy.

‘World-First’ Agreement on AI Reached

Over one hundred representatives from across the globe convened in the UK on 1-2 November 2023 at the Global AI Safety Summit. The focus of the Summit was to discuss how best to manage the risks posed by the most recent advances in AI. However, it was the “Bletchley Declaration,” announced at the start of the Summit, which truly emphasized the significance governments are attributing to these issues. (more…)

Latest Developments on AI in the EU: the Saga Continues

EU AI Act

Until recently, political agreement on the final text of the EU Artificial Intelligence Regulation (AI Act) was expected on 6 December 2023. However, the latest developments indicate roadblocks in the negotiations due to three key discussion points (please see our previous blog post here). EU officials are reported to be meeting twice this week to discuss a compromise mandate on EU governments’ position on the text, in preparation for the political meeting on 6 December. (more…)

Agreement Reached on the EU’s Data Act

On 27 November 2023, the Council adopted the final text of the Data Act, which facilitates (and in certain cases, mandates) access to personal and non-personal data. The Data Act was originally proposed by the European Commission in 2022. Alongside the EU Data Governance Act (which came into force in June 2022), the Data Act forms part of the EU’s Data Strategy, which aims to “make the EU a leader in a data-driven society”. (more…)

Preparing for the EU AI Act

Join Sidley and OneTrust DataGuidance for a webinar on the EU AI Act. This discussion with industry panellists will cover initial reactions to the (anticipated) political agreement on the EU AI Act following key negotiations by the European legislative bodies on December 6, 2023.

(more…)

Insights from the IAPP Europe Data Protection Congress: Regulatory Convergence on AI and Sidley’s Women in Privacy Networking Lunch

The International Association of Privacy Professionals (IAPP) held its annual Europe Data Protection Congress in Brussels on November 15 and 16, 2023. Whilst the Congress covered a wide range of topics related to privacy, cybersecurity, and the regulation of data more broadly, a recurring theme throughout was, unsurprisingly, the responsible development, commercialization, and use of AI. In this regard, panelists explored (amongst other things) what practical and effective AI governance may look like, the role of a Digital Ethics Officer, how to strike a balance between enabling innovation and safeguarding individual rights, and how AI may be used to automate data breach detection and response.

(more…)

The Tenth Edition of Lexology In-Depth: Privacy, Data Protection and Cybersecurity (formerly The Privacy, Data Protection and Cybersecurity Law Review) is now available

The tenth edition of Lexology In-Depth: Privacy, Data Protection and Cybersecurity (formerly The Privacy, Data Protection and Cybersecurity Law Review) provides a global overview of the evolving legal and regulatory regimes governing data privacy and security, at a time when both privacy and security are increasingly challenged by the fast-paced development of technologies such as large language models, generative AI, and self-teaching/self-replicating applications. A number of lawyers from Sidley’s global Privacy and Cybersecurity practice have contributed to this publication. See the chapters below for a closer look at this developing area of law.

(more…)

EU Moving Closer to an AI Act?

On 24 October 2023, the European Parliament and Member States concluded a fourth round of trilogue discussions on the draft Artificial Intelligence Regulation (AI Act). Policymakers agreed on provisions to classify high-risk AI systems and also developed general guidance for the use of “enhanced” foundation models. However, the negotiations did not lead to substantial progress on prohibitions on the use of AI by law enforcement. The next round of trilogue discussions will take place on 6 December 2023.

(more…)

President Biden Signs Sweeping Artificial Intelligence Executive Order

On October 30, 2023, President Joe Biden issued an executive order (EO or the Order) on Safe, Secure, and Trustworthy Artificial Intelligence (AI) to advance a coordinated, federal government-wide approach toward the safe and responsible development of AI. It sets forth a wide range of federal regulatory principles and priorities, directs myriad federal agencies to promulgate standards and technical guidelines, and invokes statutory authority (the Defense Production Act) that has historically been the primary source of presidential authority to commandeer or regulate private industry in support of the national defense. The Order reflects the Biden administration’s desire to make AI more secure, to cement U.S. leadership in global AI policy ahead of other attempts to regulate AI (most notably in the European Union and United Kingdom), and to respond to growing competition in AI development from China.

(more…)