Over one hundred representatives from across the globe convened in the UK on 1-2 November 2023 at the Global AI Safety Summit. The focus of the Summit was to discuss how best to manage the risks posed by the most recent advances in AI. However, it was the “Bletchley Declaration”, announced at the start of the Summit, which truly emphasized the significance governments are attributing to these issues.
EU AI Act
Until recently, political agreement on the final text of the EU Artificial Intelligence Regulation (AI Act) was expected on 6 December 2023. However, the latest developments indicate roadblocks in the negotiations due to three key discussion points – please see our previous blog post here. EU officials are reported to be meeting twice this week to discuss a compromise mandate on EU governments’ position on the text, in preparation for the political meeting on 6 December.
Join Sidley and OneTrust DataGuidance for a webinar on the EU AI Act. This discussion with industry panellists will cover initial reactions to the (anticipated) political agreement on the EU AI Act following key negotiations by the European legislative bodies on December 6, 2023.
The US state data privacy landscape is fast evolving into a patchwork of broad state privacy laws that apply to for-profit and non-profit entities meeting certain threshold criteria and govern the personal information of residents of each of those states. In Part 2 of the OneTrust DataGuidance Insight articles on state data privacy laws, Sidley Austin lawyer Sheri Porath Rockwell compares the scope and enforcement provisions of the comprehensive data privacy laws that have been enacted in 13 states to date. While individual state data privacy laws share common features of transparency, data subject rights, opt-outs for sales and targeted advertising, and no private right of action, there are significant differences among them, including the types of entities and data that are in scope and the approaches to enforcement.
The International Association of Privacy Professionals (IAPP) held its annual Europe Data Protection Congress in Brussels on November 15 & 16, 2023. Whilst the Congress covered a wide range of topics related to privacy, cybersecurity and the regulation of data more broadly, unsurprisingly, a recurring theme throughout was the responsible development, commercialization and use of AI. In this regard, panelists explored (amongst other things) what practical and effective AI governance may look like, the role of a Digital Ethics Officer, how to strike a balance between enabling innovation and safeguarding individual rights, and how AI may be used to automate data breach detection and response.
On 24 October 2023, the European Parliament and Member States concluded a fourth round of trilogue discussions on the draft Artificial Intelligence Regulation (AI Act). Policymakers agreed on provisions to classify high-risk AI systems and also developed general guidance for the use of “enhanced” foundation models. However, the negotiations did not lead to substantial progress on provisions for prohibitions in relation to the use of AI by law enforcement. The next round of trilogue discussions will take place on 6 December 2023.
The Biden administration’s executive order issued on October 30, 2023, includes a number of initiatives relating to the development and use of artificial intelligence (AI), including in healthcare. As AI becomes a pivotal point of innovation for the healthcare industry, digital health technology developers, private equity sponsors, and other key industry stakeholders should track the regulatory frameworks certain to be developed following this executive order to better inform strategies for developing drugs and devices and assessing deals involving AI.
On October 30, 2023, President Joe Biden issued an executive order (EO or the Order) on Safe, Secure, and Trustworthy Artificial Intelligence (AI) to advance a coordinated, federal governmentwide approach toward the safe and responsible development of AI. It sets forth a wide range of federal regulatory principles and priorities, directs myriad federal agencies to promulgate standards and technical guidelines, and invokes statutory authority — the Defense Production Act — that has historically been the primary source of presidential authorities to commandeer or regulate private industry to support the national defense. The Order reflects the Biden administration’s desire to make AI more secure, to cement U.S. leadership in global AI policy ahead of other attempts to regulate AI (most notably in the European Union and United Kingdom), and to respond to growing competition in AI development from China.
On September 21, 2023, the UK and the U.S. announced the UK extension to the EU-U.S. Data Privacy Framework (DPF), which will come into effect on October 12. A new UK adequacy regulation provides that the UK Secretary of State for Science, Innovation and Technology has determined that the U.S. provides adequate levels of protection for personal data in certain transfers and brings the UK within the DPF announced in July 2023. The U.S. Attorney General also designated the UK as a “qualifying state” under an Executive Order on September 18 for the purposes of the DPF. This means that on October 12, UK businesses will be able to transfer personal data to U.S. organizations self-certified under the DPF.
Companies are facing more attacks on their information systems. And, as their cyber risk skyrockets, the SEC has stepped in with new regulations, telling businesses what to disclose about these incidents — and requiring detailed disclosures on cyber risk management more broadly. With the deadline for compliance fast approaching, businesses are scrambling to mitigate their legal risk and comply with regulations that some say may be an overreach.