Unofficial Final Text of EU AI Act Released
On 22 January 2024, an unofficial version of the (presumed) final EU Artificial Intelligence Act (“AI Act”) was released. The AI Act reached political agreement in early December 2023 (see our blog post here) and has since undergone technical discussions to finalize the text. It was reported that the document was shared with EU Member State representatives on 21 January 2024, ahead of a discussion within the Telecom Working Party, a technical body of the EU Council, on 24 January 2024, and that formal adoption at the EU Member State ambassador level (i.e., COREPER) will likely follow on 2 February 2024. On Friday 26 January 2024, the Belgian Presidency of the Council officially shared the final compromise text of the AI Act (together with its analysis) with Member State representatives – clearly indicating that this text will be put forward for adoption.
EU Reaches Political Agreement on Cyber Resilience Act for Digital and Connected Products
On 30 November 2023, the EU reached political agreement on the Cyber Resilience Act (“CRA”), the first legislation globally to regulate cybersecurity for digital and connected products that are designed, developed, produced and made available on the EU market. The CRA was originally proposed by the European Commission in September 2022. Alongside the recently adopted Data Act, Digital Operational Resilience Act (“DORA”), Critical Entities Resilience Act (“CER”), Network and Information Systems Security 2 Directive (“NISD2”) and Data Governance Act, the CRA builds on the EU Data and Cyber Strategies, and complements upcoming certification schemes, such as the EU Cloud Services Scheme (“EUCS”) and the EU ICT Products Scheme (“EUCC”). It responds to an increase in cyber-attacks in the EU over the last few years – in particular the rise in software supply chain attacks, which have tripled over the last year – as well as the significant rise in digital and connected products in daily life, which magnifies the risk of such attacks.
Australia’s Digital Platform Regulators Release Working Papers on Risks and Harms Posed by Algorithms and Large Language Models
Australia’s Digital Platform Regulators Forum (DP-REG) has recently released two working papers relevant to developing AI policy on the global stage: Literature summary: Harms and risks of algorithms (Algorithms WP) and Examination of technology: Large language models used in generative artificial intelligence (LLM WP) (together, the Working Papers) to mark the launch of its website. The DP-REG, which comprises various prominent Australian regulators across multiple industries, was established to ensure a collaborative and cohesive approach to the regulation of digital platform technologies in Australia. The Working Papers focus on understanding the risks and harms, as well as evaluating the benefits, of algorithms and generative artificial intelligence, and provide recommendations on the Australian Federal Government’s response to AI. The Working Papers also serve as a useful resource for Australian industry and the public as these technologies become increasingly integrated into the Australian market. Interestingly, the recommendations set out in the Working Papers are broadly aligned with the requirements of the EU’s Artificial Intelligence Act, which reached political agreement on 8 December 2023, suggesting that Australia’s proposed approach to regulating AI may be inspired at least in part by Europe’s AI regulatory framework.
‘World-First’ Agreement on AI Reached
Over one hundred representatives from across the globe convened in the UK on 1-2 November 2023 at the Global AI Safety Summit. The focus of the Summit was to discuss how best to manage the risks posed by the most recent advances in AI. However, it was the “Bletchley Declaration” – announced at the start of the Summit – which truly emphasized the significance governments are attributing to these issues.
AI Foundation Models: UK CMA’s Initial Report
The UK Competition and Markets Authority (CMA) has set out its emerging thinking on the functioning of competition and consumer protection in the market for foundation models.
EU, U.S., and UK Regulatory Developments on the Use of Artificial Intelligence in the Drug Lifecycle
Globally, the rapid advancement of artificial intelligence (AI) and machine learning (ML) raises fundamental questions about how the technology can be used. Drug approval authorities are now also taking part in this discussion, resulting in emerging and evolving guidelines and principles for drug companies.
EU Commission Adopts New Rules for GDPR Enforcement: the Beginning of a Centralized Enforcement Model?
On 4 July 2023, the EU Commission proposed a new Regulation setting out procedural rules to standardize and streamline cooperation between EU Member State Data Protection Authorities (DPAs) when enforcing the EU General Data Protection Regulation (GDPR) in cross-border cases (GDPR Procedural Regulation). The GDPR adopts a decentralized enforcement model: national EU Member State DPAs are competent to enforce the GDPR on their respective territories. However, in cases with cross-border elements, the GDPR requires all concerned DPAs to cooperate under its “one-stop-shop” system through the cooperation and consistency mechanisms. Although these mechanisms establish key principles of cooperation and provide the basis for consistent application of the GDPR throughout the EU, the EU Commission determined that more legislative action was needed to increase the efficiency and harmonization of cross-border GDPR enforcement.
UK ICO Scrutinizes Use of Generative AI
Following the EU’s increased focus on generative AI with the inclusion of foundation and generative AI in the latest text of the EU AI Act (see our post here), the UK is now following suit, with the Information Commissioner’s Office (“ICO”) communicating on 15 June 2023 its intention to “review key businesses’ use of generative AI.” The ICO warned businesses not to be “blind to AI risks,” especially in a “rush to see opportunity” with generative AI. Generative AI is capable of generating content such as complex text, images, audio or video, and is viewed as involving more risk than other AI models because it can be used across different sectors (e.g., law enforcement, immigration, employment, insurance and health) and so have a greater impact across society – including on vulnerable groups.
SEC Proposes Sweeping New Rules on Use of Data Analytics by Broker-Dealers and Investment Advisers
On July 26, 2023, the U.S. Securities and Exchange Commission (SEC or Commission) proposed new rules for broker-dealers (Proposed Rule 15(l)-2) and investment advisers (Proposed Rule 211(h)(2)-4) on the use of predictive data analytics (PDA) and PDA-like technologies in any interactions with investors. However, as discussed below, the scope of a “covered technology” subject to the rules is much broader than what most observers would consider to constitute predictive data analytics. The proposal would require that any time a broker-dealer or investment adviser uses a “covered technology” in connection with engaging or communicating with an investor (including exercising investment discretion on behalf of an investor), the broker-dealer or investment adviser must evaluate that technology for conflicts of interest and eliminate or neutralize those conflicts. The proposed rules would apply even if the interaction with the investor does not rise to the level of a “recommendation.”
European Parliament Adopts AI Act Compromise Text Covering Foundation and Generative AI
On 14 June 2023, the European Parliament adopted – by a large majority – its compromise text for the EU’s Artificial Intelligence Act (“AI Act”), paving the way for the three key EU institutions (the Council of the EU, the Commission and the Parliament) to start the ‘trilogue negotiations’. This is the last substantive step in the legislative process, and the AI Act is now expected to be adopted and become law around December 2023 or January 2024. The AI Act will be first-of-its-kind AI legislation with extraterritorial reach.