Digital Health and Artificial Intelligence: New Developments From President Biden’s Executive Order
The Biden administration’s executive order, issued on October 30, 2023, includes a number of initiatives relating to the development and use of artificial intelligence (AI), including in healthcare. As AI becomes a pivotal point of innovation for the healthcare industry, digital health technology developers, private equity sponsors, and other key industry stakeholders should track the regulatory frameworks certain to follow this executive order to better inform strategies for developing drugs and devices and for assessing deals involving AI.
President Biden Signs Sweeping Artificial Intelligence Executive Order
On October 30, 2023, President Joe Biden issued an executive order (EO or the Order) on Safe, Secure, and Trustworthy Artificial Intelligence (AI) to advance a coordinated, federal government-wide approach toward the safe and responsible development of AI. The Order sets forth a wide range of federal regulatory principles and priorities, directs myriad federal agencies to promulgate standards and technical guidelines, and invokes the Defense Production Act, the statute that has historically been the primary source of presidential authority to commandeer or regulate private industry to support the national defense. The Order reflects the Biden administration’s desire to make AI more secure and to cement U.S. leadership in global AI policy ahead of other attempts to regulate AI (most notably in the European Union and United Kingdom) and to respond to growing competition in AI development from China.
New Export Controls on Advanced Computing and Semiconductor Manufacturing: Five Key Takeaways
On October 25, 2023, the U.S. Department of Commerce Bureau of Industry and Security (BIS) published updated export controls on advanced computing items and semiconductor manufacturing equipment under the Export Administration Regulations (EAR). Specifically, BIS published two interim final rules that revise and expand on the restrictions implemented in the initial interim final rule issued on October 7, 2022 (October 7, 2022 rule).1
AI Foundation Models: UK CMA’s Initial Report
The UK Competition and Markets Authority (CMA) has set out its emerging thinking on the functioning of competition and consumer protection in the market for foundation models.

Schumer Framework May Forge U.S. Model on AI Governance
*This article first appeared on Law360 on September 5, 2023.
This summer, Senate Majority Leader Chuck Schumer proposed a distinctive new framework for developing a comprehensive artificial intelligence regulatory policy that is intended to be firmly bipartisan and committed, as a first principle, to preserving innovation and intellectual property rights.
Regulatory Update: National Association of Insurance Commissioners Summer 2023 National Meeting
The National Association of Insurance Commissioners (NAIC) held its Summer 2023 National Meeting (Summer Meeting) from August 12–16, 2023. Highlights include continued development of accounting principles and investment limitations related to certain types of bonds and structured securities, continued discussion of considerations related to private equity ownership of insurers, a proposed model bulletin addressing the use of artificial intelligence by the insurance industry, and continued development of a new consumer privacy protections model law.
EU, U.S., and UK Regulatory Developments on the Use of Artificial Intelligence in the Drug Lifecycle
Globally, the rapid advancement of artificial intelligence (AI) and machine learning (ML) raises fundamental questions about how the technology can be used. Drug approval authorities are now also taking part in this discussion, resulting in emerging and evolving guidelines and principles for drug companies.

UK ICO Scrutinizes Use of Generative AI
Following the EU’s increased focus on generative AI with the inclusion of foundation and generative AI in the latest text of the EU AI Act (see our post here), the UK now follows suit, with the UK’s Information Commissioner’s Office (“ICO”) communicating on 15 June 2023 its intention to “review key businesses’ use of generative AI.” The ICO warned businesses not to be “blind to AI risks,” especially in a “rush to see opportunity” with generative AI. Generative AI is capable of generating content (e.g., complex text, images, audio, or video) and is viewed as involving more risk than other AI models because it can be used across different sectors (e.g., law enforcement, immigration, employment, insurance, and health) and so can have a greater impact across society, including on vulnerable groups.
AI and the Role of the Board of Directors
SEC Proposes Sweeping New Rules on Use of Data Analytics by Broker-Dealers and Investment Advisers
On July 26, 2023, the U.S. Securities and Exchange Commission (SEC or Commission) proposed new rules for broker-dealers (Proposed Rule 15(l)-2) and investment advisers (Proposed Rule 211(h)(2)-4) on the use of predictive data analytics (PDA) and PDA-like technologies in any interactions with investors.1 However, as discussed below, the scope of a “covered technology” subject to the rules is much broader than what most observers would consider to constitute predictive data analytics. The proposal would require that whenever a broker-dealer or investment adviser uses a “covered technology” in connection with engaging or communicating with an investor (including exercising investment discretion on behalf of an investor), the broker-dealer or investment adviser must evaluate that technology for conflicts of interest and eliminate or neutralize those conflicts. The proposed rules would apply even if the interaction with the investor does not rise to the level of a “recommendation.”