The CMA has set out its emerging thinking on the functioning of competition and consumer protection in the market for foundation models.
Globally, the rapid advancement of artificial intelligence (AI) and machine learning (ML) raises fundamental questions about how the technology can and should be used. Drug approval authorities are now joining this discussion, resulting in emerging and evolving guidelines and principles for drug companies.
On 4 July 2023, the EU Commission proposed a new Regulation laying down procedural rules to standardize and streamline cooperation between EU Member State Data Protection Authorities (DPAs) when enforcing the EU General Data Protection Regulation (GDPR) in cross-border cases (GDPR Procedural Regulation). The GDPR adopts a decentralized enforcement model: national EU Member State DPAs are competent to enforce the GDPR on their respective territories. In cases with cross-border elements, however, the GDPR requires all concerned DPAs to cooperate under the “one-stop-shop” system, using the GDPR’s cooperation and consistency mechanisms. Although these mechanisms establish key principles of cooperation and provide the basis for consistent application of the GDPR throughout the EU, the EU Commission determined that further legislative action was needed to increase the efficiency and harmonization of cross-border GDPR enforcement action.
Following the EU’s increased focus on generative AI with the inclusion of foundation models and generative AI in the latest text of the EU AI Act (see our post here), the UK now follows suit, with the UK’s Information Commissioner’s Office (“ICO”) communicating on 15 June 2023 its intention to “review key businesses’ use of generative AI.” The ICO warned businesses not to be “blind to AI risks,” especially in a “rush to see opportunity” with generative AI. Generative AI is capable of generating content such as complex text, images, audio, or video, and is viewed as involving more risk than other AI models because it can be used across different sectors (e.g., law enforcement, immigration, employment, insurance and health) and so can have a greater impact across society – including in relation to vulnerable groups.
On July 26, 2023, the U.S. Securities and Exchange Commission (SEC or Commission) proposed new rules for broker-dealers (Proposed Rule 15(l)-2) and investment advisers (Proposed Rule 211(h)(2)-4) on the use of predictive data analytics (PDA) and PDA-like technologies in any interactions with investors. However, as discussed below, the scope of a “covered technology” subject to the rules is much broader than what most observers would consider to constitute predictive data analytics. The proposal would require that anytime a broker-dealer or investment adviser uses a “covered technology” in connection with engaging or communicating with an investor (including exercising investment discretion on behalf of an investor), the broker-dealer or investment adviser must evaluate that technology for conflicts of interest and eliminate or neutralize those conflicts of interest. The proposed rules would apply even if the interaction with the investor does not rise to the level of a “recommendation.”
On 14 June 2023, the European Parliament adopted – by a large majority – its compromise text for the EU’s Artificial Intelligence Act (“AI Act”), paving the way for the three key EU institutions (the Council of the EU, the Commission and the Parliament) to start the ‘trilogue negotiations’. This is the last substantive step in the legislative process, and it is now expected that the AI Act will be adopted and become law on or around December 2023 / January 2024. The AI Act will be a first-of-its-kind AI legislation with extraterritorial reach.
In 2022, many if not most pharmaceutical, medical device, and other life sciences companies established strategies to develop digital health technologies complementary to their existing strategic focus. The digital transformation of the life sciences industry is still unfolding across the marketplace. In 2023 and beyond, the race is on to launch the next generation of digital health technologies and transform the delivery of therapies to patients.
On October 4, 2022, the White House Office of Science and Technology Policy published The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (the “AI Blueprint”). The AI Blueprint outlines non-binding guidelines for the development and deployment of automated systems and is the culmination of a year-long process of public engagement and deliberation.
Algorithms touch upon multiple aspects of digital life, and their use potentially falls within several separate – though converging – regulatory systems. More than ever, a ‘joined up’ approach is required to assess them, and the UK’s main regulators are working together to try to formulate a coherent policy, setting an interesting example that could be a template for global approaches to digital regulation.
On 22 September 2021, the UK Government (the “Government”) published its Artificial Intelligence (“AI”) strategy. The paper outlines the Government’s plan to make Britain a “global superpower” in the AI arena, and sets out an agenda to build the most “pro-innovation regulatory environment in the world”. This post highlights some of the key elements from the UK AI strategy. Significantly, the UK’s proposed approach may diverge in some respects from the EU’s GDPR. For example, the UK strategy includes consideration of whether to drop Article 22’s restrictions on automated decision-making, and whether to maintain the UK’s current sectoral approach to AI regulation. The UK will publish a White Paper on regulating AI in early 2022, which will provide a basis for further consultation and discussion with interested or affected groups before draft legislation is formally presented to the UK Parliament.