The High-Level Expert Group on Artificial Intelligence (“AI HLEG”), an independent expert group set up by the European Commission in June 2018 as part of its AI strategy, has published its final Ethics Guidelines for Trustworthy Artificial Intelligence (“AI”) (the “Guidelines”).
These Guidelines form part of a wider focus by the Commission on AI. The President-elect of the European Commission, Ursula von der Leyen, commented most recently on July 16, in her proposed political guidelines, that: “In my first 100 days in office, I will put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence…”.
On June 20, 2019, the Federal Energy Regulatory Commission (“FERC”) approved a North American Electric Reliability Corp. (“NERC”) petition to adopt Reliability Standard CIP-008-6 to strengthen the reporting requirements for attempts to compromise the operation of the United States’ bulk electric system. The prior Critical Infrastructure Protection (“CIP”) Reliability Standards only required reporting where an incident compromised or disrupted one or more reliability tasks. The new standard applies to all registered entities subject to the CIP Reliability Standards.
The Chinese government is proposing heightened requirements on cross-border transfers of personal information from China, recently publishing draft Measures on Security Assessment of Cross-border Transfer of Personal Information (the “Draft Measures”). This comes less than a month after the Chinese government issued another set of draft Measures for Data Security Management, which would require network operators to conduct a security assessment before transferring any important data (i.e., any data that, if leaked, may directly affect China’s national security, economic security, social stability, or public health and security) overseas. The Draft Measures now focus on the cross-border transfer of personal information by network operators and are viewed as a continued effort by the Chinese government to strengthen data protection in China.
In a very significant FOIA decision for business, Food Mktg. Inst. v. Argus Leader Media, decided on June 24, 2019, the Supreme Court reversed 45 years of understanding that Exemption 4 only protects confidential business information whose disclosure by the government would cause “substantial competitive harm.”
Relying on the plain meaning of words in the statute – rather than what the Court majority characterized as muddled legislative history – the Court found that the D.C. Circuit had engrafted a condition on the Exemption that is not supported by the text. Rather, so long as the commercial or financial information obtained by the government is “private” or “secret” – the plain and ordinary meaning of “confidential” – it may be withheld from disclosure under FOIA.
Data aggregators and fintech providers are now offering services that let consumers manage their finances using information from multiple accounts at multiple financial institutions. This kind of consumer data access raises serious questions about the relationship between financial institutions and consumer-designated third parties. This webinar will cover the risks that come with consumer-permissioned information sharing, current gaps and solutions in the existing legal framework to address those risks, and issues that can be addressed contractually among the various stakeholders.
Sidley has consolidated its materials and resources on the CCPA, including an amendment tracker, on the new Sidley CCPA Monitor.
Explore the law and Sidley insights, available now.
More and more entities are deploying machine learning and artificial intelligence to automate tasks previously performed by humans. Such efforts carry with them real benefits, such as the enhancement of operational efficiency and the reduction of costs, but they also raise a number of concerns regarding their potential impacts on human society, particularly as computer algorithms are increasingly used to determine important outcomes like individuals’ treatment within the criminal justice system.
This mixture of benefits and concerns is starting to attract the interest of regulators. Efforts in the European Union, Canada, and the United States have initiated an ongoing discussion around how to regulate “automated decision-making” and what principles should guide it. And while not all of these regulatory efforts will directly implicate private companies, they may nonetheless provide insight for companies seeking to build consumer trust in their artificial intelligence systems or better prepare themselves for the overall direction that regulation is taking.
On May 15, 2019, President Donald Trump signed an executive order (EO) declaring a “national emergency” related to certain threats against information and communications technology and services (ICTS) in the United States and authorizing the Department of Commerce to block transactions that involve ICTS with a “foreign adversary.” The EO provides for the possibility of a licensing regime that could allow transactions that would otherwise be blocked. The EO is available here.
The EO itself does not mention any particular countries or companies that would be subject to its prohibitions. However, the EO is widely reported to be aimed at China. Indeed, tensions between the United States and China have intensified over the past week, after negotiations between the two governments to resolve their trade dispute stalled.
Recently, the Dutch Supervisory Authority (the “Autoriteit Persoonsgegevens” or “Dutch SA”) has taken the position that the use of so-called “cookie walls,” whereby website access is made conditional upon the provision of consent to tracking cookies, is not compliant with the EU General Data Protection Regulation (“GDPR”).
The National Association of Insurance Commissioners (NAIC) held its Spring 2019 National Meeting (Spring Meeting) in Orlando, Florida, from April 6 to 9, 2019. This post summarizes the highlights from this meeting.