Policymakers around the world took significant steps toward regulating artificial intelligence (AI) in 2023. Spurred by the launch of revolutionary large language models such as OpenAI’s GPT series, debates surrounding the benefits and risks of AI have moved to the foreground of political thought. Indeed, over the past year, legislative forums, editorial pages, and social media platforms were dominated by AI discourse. And two global races have kicked into high gear: Who will develop and deploy the most cutting-edge, possibly risky AI models, and who will govern them? In this article, published by the Lawfare Institute in cooperation with Brookings, Sidley lawyers Alan Charles Raul and Alexandra Mushka suggest that “the United States intends to run ahead of the field on AI governance, analogous to U.S. leadership on cybersecurity rules and governance—and unlike the policy void on privacy that the federal government has allowed the EU to fill.”
On 23 January 2024, the UK government published its draft Cyber Governance Code of Practice (the “Code”) to help directors and other senior leadership boost their organizations’ cyber resilience. The draft Code, which forms part of the UK’s wider £2.6bn National Cyber Strategy, was developed in conjunction with several industry experts and stakeholders – including the UK National Cyber Security Centre. The UK government is seeking views from organizations on the draft Code by 19 March 2024.
The staff of the Commodity Futures Trading Commission (CFTC) is seeking public comment (the Request for Comment) on the risks and benefits associated with use of artificial intelligence (AI) in the commodity derivatives markets. According to the Request for Comment, the staff “recognizes that use of AI may lead to significant benefits in derivatives markets, but such use may also pose risks relating to market safety, customer protection, governance, data privacy, mitigation of bias, and cybersecurity, among other issues.”
On January 17, 2024, the New York Department of Financial Services (NYDFS) entered into a consent order with Industrial and Commercial Bank of China Ltd. (ICBC or the Bank), resolving a matter in which ICBC’s New York branch disclosed confidential supervisory information (CSI) without authorization. The order includes a civil monetary penalty of $30 million. Two days later, the Board of Governors of the Federal Reserve System (Federal Reserve) entered into a consent cease-and-desist order with ICBC and its New York branch that includes a fine of approximately $2.4 million for the unauthorized disclosure of CSI. The Federal Reserve specifically noted that its action was taken in conjunction with the prior action of NYDFS.
The National Association of Insurance Commissioners (NAIC) held its Fall 2023 National Meeting (Fall Meeting) from November 30 through December 4, 2023. This Sidley Update summarizes the highlights from the Fall Meeting, as well as from interim meetings held in lieu of sessions that would otherwise have taken place during the Fall Meeting. Highlights include adoption of a new model bulletin addressing the use of artificial intelligence in the insurance industry, continued development of accounting principles and investment limitations related to certain types of bonds and structured securities, and continued discussion of considerations related to private equity ownership of insurers.
Join our 7th annual Trend Watch webinar to learn how tactical decision-making can help you conquer California’s challenging legal environment. Our focus areas will include:
- New developments in California privacy law
- Prop. 65 by the numbers
- Need-to-know environmental law changes
EU AI Act
Until recently, political agreement on the final text of the EU Artificial Intelligence Regulation (AI Act) was expected on 6 December 2023. However, the latest developments indicated roadblocks in the negotiations due to three key discussion points – please see our previous blog post here. EU officials are reported to be meeting twice this week to discuss a compromise mandate on EU governments’ position on the text, in preparation for the political meeting on 6 December.
On 24 October 2023, the European Parliament and Member States concluded a fourth round of trilogue discussions on the draft Artificial Intelligence Regulation (AI Act). Policymakers agreed on provisions to classify high-risk AI systems and also developed general guidance for the use of “enhanced” foundation models. However, the negotiations did not lead to substantial progress on provisions for prohibitions in relation to the use of AI by law enforcement. The next round of trilogue discussions will take place on 6 December 2023.
On October 30, 2023, President Joe Biden issued an executive order (EO or the Order) on Safe, Secure, and Trustworthy Artificial Intelligence (AI) to advance a coordinated, federal governmentwide approach toward the safe and responsible development of AI. It sets forth a wide range of federal regulatory principles and priorities, directs myriad federal agencies to promulgate standards and technical guidelines, and invokes statutory authority — the Defense Production Act — that has historically been the primary source of presidential authorities to commandeer or regulate private industry to support the national defense. The Order reflects the Biden administration’s desire to make AI more secure, to cement U.S. leadership in global AI policy ahead of other attempts to regulate AI (most notably in the European Union and United Kingdom), and to respond to growing competition in AI development from China.
Companies are facing more attacks on their information systems. And, as their cyber risk skyrockets, the SEC has stepped in with new regulations, telling businesses what to disclose about these incidents — and requiring detailed disclosures on cyber risk management more broadly. With the deadline for compliance fast approaching, businesses are scrambling to mitigate their legal risk and comply with regulations that some say may be an overreach.