EU, U.S., and UK Regulatory Developments on the Use of Artificial Intelligence in the Drug Lifecycle
Globally, the rapid advancement of artificial intelligence (AI) and machine learning (ML) raises fundamental questions about how these technologies can be used. Drug approval authorities have now joined this discussion, producing emerging and evolving guidelines and principles for drug companies.

UK ICO Scrutinizes Use of Generative AI
Following the EU’s increased focus on generative AI with the inclusion of foundation and generative AI in the latest text of the EU AI Act (see our post here), the UK is now following suit, with the UK’s Information Commissioner’s Office (“ICO”) announcing on 15 June 2023 its intention to “review key businesses’ use of generative AI.” The ICO warned businesses not to be “blind to AI risks,” especially in a “rush to see opportunity” with generative AI. Generative AI is capable of generating content such as complex text, images, audio, or video. It is viewed as riskier than other AI models because it can be used across many different sectors (e.g., law enforcement, immigration, employment, insurance, and health) and can therefore have a greater impact across society, including on vulnerable groups.

SEC Proposes Sweeping New Rules on Use of Data Analytics by Broker-Dealers and Investment Advisers
On July 26, 2023, the U.S. Securities and Exchange Commission (SEC or Commission) proposed new rules for broker-dealers (Proposed Rule 15(l)-2) and investment advisers (Proposed Rule 211(h)(2)-4) on the use of predictive data analytics (PDA) and PDA-like technologies in any interactions with investors. However, the scope of a “covered technology” subject to the rules is much broader than what most observers would consider predictive data analytics. The proposal would require that, any time a broker-dealer or investment adviser uses a “covered technology” in connection with engaging or communicating with an investor (including exercising investment discretion on behalf of an investor), it evaluate that technology for conflicts of interest and eliminate or neutralize those conflicts. The proposed rules would apply even if the interaction with the investor does not rise to the level of a “recommendation.”

Singapore PDPC Consultation on New Guidance for Use of Personal Data in AI Systems
On July 18, 2023, Singapore’s data protection authority published proposed guidelines on the use of personal data in artificial intelligence (AI) systems. The guidelines are open for public consultation until August 31, 2023, and aim to address how Singapore’s privacy laws will apply to organizations that develop or deploy AI systems. The draft guidelines underscore the significance the privacy regulator places on protecting personal data without discouraging organizations from responsibly using AI systems in their businesses. Accordingly, organizations interested in using AI can look to the guidelines for insight into the privacy expectations that will apply once the guidelines are finalized.

Australian Government Commences Public Consultation on National Regulatory Framework for the “Safe and Responsible” Use of AI
On 1 June 2023, the Australian Government published the Safe and Responsible AI in Australia: Discussion Paper (“Discussion Paper”) to seek public feedback on potential gaps in the existing domestic governance landscape and on possible additional AI governance mechanisms to support the “safe and responsible” development of AI. As noted in the Discussion Paper, although AI has been identified as a “critical technology in Australia’s national interest”, AI adoption rates across Australia remain relatively low. A key aim of the Discussion Paper is to inform the Australian Government of the steps it should take on AI regulation in order to increase “community trust and confidence in AI”. The Discussion Paper addresses a broad range of AI technologies and techniques, such as self-driving cars and generative pre-trained transformers (GPTs), and notes that any AI regulatory framework would need to consider existing as well as possible future uses of AI and any ensuing risks. The Discussion Paper has an eight-week consultation period ending on 26 July 2023.

European Parliament Adopts AI Act Compromise Text Covering Foundation and Generative AI
On 14 June 2023, the European Parliament adopted, by a large majority, its compromise text for the EU’s Artificial Intelligence Act (“AI Act”), paving the way for the three key EU institutions (the Parliament, the Council of the EU, and the Commission) to start the ‘trilogue negotiations’. The trilogue is the last substantive step in the legislative process, and the AI Act is now expected to be adopted and become law on or around December 2023 / January 2024. The AI Act will be a first-of-its-kind piece of AI legislation with extraterritorial reach.

UK Sets Out Its “Pro-Innovation” Approach to AI Regulation
On 29 March 2023, the UK’s Department for Science, Innovation and Technology (“DSIT”) published its long-awaited White Paper on its “pro-innovation approach to AI regulation” (the “White Paper”), along with a corresponding impact assessment. The White Paper builds on the “proportionate, light touch and forward-looking” approach to AI regulation set out in the policy paper published in July 2022. Importantly, the UK has decided to take a different approach to regulating AI from the EU, opting for a decentralised, sector-specific approach, with no new legislation expected at this time. Instead, the UK will regulate AI primarily through sector-specific, principles-based guidance and existing laws, with an emphasis on an agile and innovation-friendly approach. This contrasts sharply with the EU’s proposed AI Act, a standalone piece of horizontal legislation regulating all AI systems irrespective of industry.

EU Moving Closer to an AI Act – Key Areas of Impact for Life Sciences/MedTech Companies
The European Union is moving closer to adopting the first major legislation to horizontally regulate artificial intelligence. Today, the European Parliament (Parliament) reached a provisional agreement on its internal position on the draft Artificial Intelligence Regulation (AI Act). The text will be adopted by Parliament committees in the coming weeks and by the Parliament plenary in June. The plenary adoption will trigger the next legislative step of trilogue negotiations with the Council of the EU to agree on a final text. Once adopted, the AI Act will, under the Parliament’s text, become applicable 24 months after its entry into force (or 36 months under the Council’s position), which is currently expected in the second half of 2025 at the earliest.

U.S. Department of Commerce Seeks Input on AI Policy, Calls Trustworthy AI an Important Federal Objective
On April 13, 2023, the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) published a request for comment (“RFC”) seeking public input on artificial intelligence (“AI”) accountability. The RFC seeks to understand which measures, both self-regulatory and regulatory, can ensure that AI systems are “legal, effective, ethical, safe, and otherwise trustworthy.” The RFC adopts a broad definition of “AI systems,” noting that they include all automated or algorithmic systems that generate predictions, recommendations, or decisions.