FTC Proposes Significant and Sweeping Changes to COPPA and Requests Public Comment

On January 11, 2024, the Federal Trade Commission (“FTC”) published in the Federal Register its Notice of Proposed Rulemaking (“NPRM”) seeking to update the FTC’s Children’s Online Privacy Protection Act (“COPPA”) Rule.  Among other things, the proposed changes would require more granular privacy notices; require fairly detailed identification of, and parental consent to, third-party data sharing (including targeted advertising); expand the scope of personal information subject to COPPA; make it easier for parents to provide consent via text message; clarify various requirements around EdTech, including school authorization in lieu of parental consent; and impose significant new programmatic information security and data retention requirements.

(more…)

UK and Australian Governments Sign “world-first” Online Safety and Security Memorandum of Understanding

On 20 February 2024, the UK Government and the Australian Federal Government signed a historic Online Safety and Security Memorandum of Understanding (MoU), signifying bilateral cooperation between the two countries to boost their respective online safety regimes. Notably, this is the first arrangement of its kind, with the MoU intended to encompass a wide range of digital online safety and security issues. These include illegal content, child safety, age assurance, technology-facilitated gender-based violence, and the harms caused by rapidly evolving technologies such as generative artificial intelligence.

(more…)

U.S. Department of Justice Signals Tougher Enforcement Against Artificial Intelligence Crimes

U.S. Deputy Attorney General Lisa Monaco signaled robust future enforcement by the Department of Justice (DOJ) against crimes involving, and aided by, artificial intelligence (AI) in her remarks at Oxford University last week, a message she reiterated shortly thereafter at the Munich Security Conference.

(more…)

The U.S. Plans to ‘Lead the Way’ on Global AI Policy

Policymakers around the world took significant steps toward regulating artificial intelligence (AI) in 2023. Spurred by the launch of revolutionary large language models such as OpenAI’s GPT series, debates surrounding the benefits and risks of AI have moved into the foreground of political thought. Indeed, over the past year, legislative forums, editorial pages, and social media platforms were dominated by AI discourse. And two global races have kicked into high gear: Who will develop and deploy the most cutting-edge, possibly risky AI models, and who will govern them?  In this article, published by the Lawfare Institute in cooperation with Brookings, Sidley lawyers Alan Charles Raul and Alexandra Mushka suggest that “the United States intends to run ahead of the field on AI governance, analogous to U.S. leadership on cybersecurity rules and governance—and unlike the policy void on privacy that the federal government has allowed the EU to fill.”

(more…)

UK Publishes Cyber Governance Code of Practice for Consultation

On 23 January 2024, the UK government published its draft Cyber Governance Code of Practice (the “Code”) to help directors and other senior leadership boost their organizations’ cyber resilience. The draft Code, which forms part of the UK’s wider £2.6bn National Cyber Strategy, was developed in conjunction with several industry experts and stakeholders – including the UK National Cyber Security Centre. The UK government is seeking views from organizations on the draft Code by 19 March 2024.

(more…)

New Know-Your-Customer and Reporting Rules Proposed for Cloud Providers: Five Key Takeaways

Last week, the U.S. Department of Commerce published a notice of proposed rulemaking (NPRM) implementing Executive Orders (EOs) 13984 and 14110 to prevent “foreign malicious cyber actors” from accessing U.S. infrastructure as a service (IaaS) products (the “IaaS Rule”). The IaaS Rule seeks to strengthen the U.S. government’s ability to track “foreign malicious cyber actors” who have relied on U.S. IaaS products to steal intellectual property and sensitive data, engage in espionage activities, and threaten national security by attacking critical infrastructure.

(more…)

U.S. CFTC Seeks Public Input on Use of Artificial Intelligence in Commodity Markets and Simultaneously Warns of AI Scams

The staff of the Commodity Futures Trading Commission (CFTC) is seeking public comment (the Request for Comment) on the risks and benefits associated with use of artificial intelligence (AI) in the commodity derivatives markets. According to the Request for Comment, the staff “recognizes that use of AI may lead to significant benefits in derivatives markets, but such use may also pose risks relating to market safety, customer protection, governance, data privacy, mitigation of bias, and cybersecurity, among other issues.”

(more…)

Unofficial Final Text of EU AI Act Released

On 22 January 2024, an unofficial version of the (presumed) final EU Artificial Intelligence Act (“AI Act”) was released. The AI Act reached political agreement in early December 2023 (see our blog post here) and has since undergone technical discussions to finalize the text. It was reported that the document was shared with EU Member State representatives on 21 January 2024, ahead of a discussion within the Telecom Working Party, a technical body of the EU Council, on 24 January 2024, and that formal adoption at the EU Member State ambassador level (i.e., COREPER) will likely follow on 2 February. On Friday 26 January 2024, the Belgian Presidency of the Council officially shared its analysis of the final compromise text of the AI Act with Member State representatives, clearly indicating that this text will be put forward for adoption.

(more…)

Preparing for the EU AI Act: Part 2

Join Sidley and OneTrust DataGuidance on February 5, 2024, for a webinar reacting to the recently published, near-final text of the EU AI Act. This discussion with industry panelists will cover initial reactions to the text of the EU AI Act following finalization by EU legislators and examine the key points in the AI Act that businesses need to understand.

(more…)

EU Reaches Political Agreement on Cyber Resilience Act for Digital and Connected Products

On 30 November 2023, the EU reached political agreement on the Cyber Resilience Act (“CRA”), the first legislation globally to regulate cybersecurity for digital and connected products that are designed, developed, produced, and made available on the EU market. The CRA was originally proposed by the European Commission in September 2022. Alongside the recently adopted Data Act, Digital Operational Resilience Act (“DORA”), Critical Entities Resilience Directive (“CER”), Network and Information Systems Security 2 Directive (“NISD2”), and Data Governance Act, the CRA builds on the EU Data and Cyber Strategies and complements upcoming certification schemes such as the EU Cloud Services Scheme (“EUCS”) and the EU ICT Products Scheme (“EUCC”). It responds to an increase in cyber-attacks in the EU over the last few years – in particular the rise in software supply chain attacks, which have tripled over the last year – as well as the significant rise in digital and connected products in daily life, which magnifies the risk of such attacks.

(more…)