Compliance Programs Expected to Evolve With Technology: DOJ Updates Corporate Compliance Guidance to Include Artificial Intelligence
On September 23, 2024, the U.S. Department of Justice (DOJ) updated its Evaluation of Corporate Compliance Programs (the ECCP) to reflect DOJ’s evolving expectations with respect to corporate compliance programs, including how those programs appropriately address the compliance risks of new technology such as artificial intelligence (AI). While the ECCP is drafted as guidance for prosecutors assessing the effectiveness and adequacy of a company’s compliance program, it also serves as a tool for companies to conduct a similar self-assessment. With DOJ’s most recent update, that tool now reflects DOJ’s focus on disruptive technology risks. This Update provides some general background on the ECCP and analyzes DOJ’s latest revisions, including the introduction of questions and considerations for companies concerning their use of new and emerging technology such as AI.
Pharma Companies in Beijing Free Trade Zone to Benefit from Relaxed Data Transfer Rules
On August 30, 2024, the Beijing Municipal Cyberspace Administration, Beijing Municipal Commerce Bureau and Beijing Municipal Government Services and Data Administration Bureau jointly released the “Administrative Measures for the Data Exit Negative List of the China (Beijing) Pilot Free Trade Zone (Trial)” (Administrative Measures) and the “Data Exit Administration List (Negative List) of the China (Beijing) Pilot Free Trade Zone (2024 Edition)” (Negative List) to facilitate the export of important industry data and personal information out of the country by companies operating in the Beijing free trade zone (FTZ).
U.S. FTC’s New Rule on Fake and AI-Generated Reviews and Social Media Bots
On August 14, 2024, the United States Federal Trade Commission (FTC) announced a final rule that prohibits fake and artificial intelligence-generated consumer reviews, consumer testimonials, and celebrity testimonials, along with other types of unfair or deceptive practices involving reviews and testimonials. This new rule is the latest development in the FTC’s expanded rulemaking efforts and heightened focus on AI, and will take effect on October 21, 2024.
Asia-Pacific Regulations Keep Pace With Rapid Evolution of Artificial Intelligence Technology
Regulation of artificial intelligence (AI) technology in the Asia-Pacific region (APAC) is developing rapidly, with at least 16 jurisdictions having some form of AI guidance or regulation. Some countries are implementing AI-specific laws and regulations, while others take a “soft law” approach, relying on nonbinding principles and standards. While regulatory approaches across the region differ, they share common policy drivers, including responsible use, data security, end-user protection, and human autonomy.
UK Proposes New Cyber Security and Resilience Bill to Boost the UK’s Cyber Defences
During the King’s Speech on 17 July 2024, the newly appointed UK Prime Minister announced the UK Government’s intention to introduce a new Cyber Security and Resilience Bill to strengthen the UK’s defences against the global rise in cyberattacks and to protect the UK’s critical infrastructure. In background briefing notes published alongside the King’s Speech, the UK Government stated that the new Cyber Security and Resilience Bill will “strengthen our defences and ensure that more essential digital services than ever before are protected.” According to the briefing notes, the Cyber Security and Resilience Bill is intended to address the concern that the UK has not kept pace with recent legislative advancements made by the EU in the cybersecurity space, leaving the UK “comparably more vulnerable.” Although the text of the proposed Cyber Security and Resilience Bill has yet to be published, the UK Government has indicated that it plans to introduce the bill in the coming months.
Artificial Intelligence Tops Agenda for Global Competition Authorities: EU, UK, and U.S. Issue Joint Statement
On July 23, 2024, the competition authorities of the EU, the UK, and the U.S. issued a joint statement on competition in generative artificial intelligence (AI) foundation models and AI products (Joint Statement). Since the emergence of generative AI, each of the authorities has individually been ramping up its work to better understand the potential risks to competition that AI may pose. The Joint Statement may herald a more coordinated global approach to the scrutiny of competition in AI.
An Artificial Intelligence, Privacy, and Cybersecurity Update for Indian Companies Doing Business in the United States and Europe
Pivotal shifts are under way in global data privacy, artificial intelligence (AI), and cybersecurity: executives face mounting pressure to oversee their organizations’ cybersecurity operations, an unprecedented wave of consumer data privacy laws has arrived, and AI technology is being adopted and deployed at a rapid pace. Indian organizations should establish best practices to address these new and emerging laws, regulations, and frameworks.
One Step Closer: AI Act Approved by Council of the EU
On 21 May 2024, the Council of the European Union approved the EU Artificial Intelligence Act (the “AI Act”). This marks the final stage in the legislative process, following the EU Parliament’s vote to adopt the legislation on 13 March 2024. The Council’s approval clears the path for the formal signing of the legislation and its publication in the Official Journal of the EU in the coming weeks. The AI Act will then enter into force 20 days after publication, with staggered transition periods of six to 36 months.
ICO Publishes Its Strategic Approach to Regulating AI
On 30 April 2024, the UK’s Information Commissioner’s Office (“ICO”) published its strategic approach to regulating artificial intelligence (“AI”) (the “Strategy”), following the UK government’s request that key regulators set out their approach to AI regulation and compliance with the UK government’s previous AI White Paper (see our previous blog post here). In its Strategy, the ICO sets out: (i) the opportunities and risks of AI; (ii) the role of data protection law; (iii) its work on AI; (iv) upcoming developments; and (v) its collaboration with other regulators. The ICO’s Strategy follows the Financial Conduct Authority’s (“FCA”) recent publication of its own approach to regulating AI.
Top 10 Questions on the EU AI Act
The EU AI Act will be the first standalone piece of legislation worldwide to regulate the provision and use of AI in the EU, and will form a key consideration in AI governance programs. The AI Act will have a significant impact on many organizations both inside and outside the EU, with failure to comply potentially leading to fines of up to 7% of annual worldwide turnover.