These informal video chats, moderated by Sidley partner Alan Raul, are designed to help fill the COVID-19-induced privacy discussion drought. We look forward to hearing what is on the minds of key data protection and cybersecurity thought leaders from both the public and private sectors. Each chat will be relatively brief, leaving some time to address participant questions via our virtual space. Please feel free to suggest topics you would like to hear addressed by contacting email@example.com.
The last two weeks have brought two important (although unrelated) rulings on the TCPA’s autodialer restrictions. First, on June 25, the Federal Communications Commission limited the applicability of the autodialer restrictions in the Telephone Consumer Protection Act, 47 U.S.C. § 227 (the “TCPA”), to an emerging texting technology. Second, less than two weeks later, the Supreme Court ruled that an exception to the TCPA’s autodialer restrictions for calls to collect federal debts was unconstitutional, severing the exception and thereby expanding the statute’s reach.
On June 24, 2020, the New York State Department of Financial Services (NYDFS) announced a series of virtual currency initiatives aimed at providing additional opportunities and clarity for BitLicense and limited-purpose trust company applicants and licensees. These initiatives include:
- A proposed framework for obtaining a conditional BitLicense when partnering with an existing licensee
- A proposed approach for NYDFS pre-approval of certain virtual currencies and a licensee’s ability to self-certify the use of new virtual currencies
- New procedures aimed at creating a more transparent and timely process for reviewing BitLicense applications
- A BitLicense FAQ page
The NYDFS’s press announcement stated that these initiatives were developed based on feedback from the industry to make it easier for virtual currency companies to successfully operate in New York. If the stated intent is achieved, these initiatives will be a welcome change for virtual currency businesses, which have often faced long timelines and a burdensome review process when submitting a BitLicense application or attempting to expand their approved activities. It remains to be seen, however, whether those objectives can be met.
The California Privacy Rights Act (CPRA), a proposed initiative to codify far-reaching amendments to the California Consumer Privacy Act (CCPA) and sometimes referred to as “CCPA 2.0”, is back in play and heading to the November 2020 ballot. A series of dramatic procedural twists and turns culminated with initiative backers successfully obtaining a writ of mandate directing the Secretary of State to direct counties to verify signatures for the ballot proposal by the June 25th Constitutional deadline. This verification involved each county conducting a random sample of the more than 800,000 signatures that proponents had submitted to place the initiative on the ballot.
Before the California court’s ruling, observers were skeptical that signatures could be verified before the deadline. Initiative proponents were almost two weeks behind the recommended schedule when they delivered signatures to be verified by California’s 58 counties. This left counties until June 26th to verify signatures — a day after the June 25th Constitutional deadline. Experience with other initiatives this year had shown that several large counties were waiting until the deadline to complete verifications, so proponents petitioned the court to move the verification deadline up by a day so that the Constitutional deadline could be met. The court agreed, finding good cause existed to require counties to complete verifications a day early. As it happened, the extra time was not needed: counties finished the count two days ahead of their initial deadline.
On June 10, the Financial Industry Regulatory Authority (FINRA) released its Artificial Intelligence (AI) in the Securities Industry Report (Report), the culmination of a two-year review by FINRA’s Office of Financial Innovation to learn about the emerging challenges confronted by broker-dealers (Firms) and other market participants as they introduce AI-based applications into their businesses. The Report provides an overview of AI technology, explores its diverse, multifaceted applications in the securities industry and identifies the challenges and legal considerations associated with leveraging this technology. FINRA requests industry feedback on topics covered in the Report by August 31, 2020.
*Article first appeared in The Hill on June 13, 2020.
Concerns over the use of location tracking and contact tracing of infected individuals to help mitigate the spread of COVID-19 have once again placed “privacy” at the forefront of public attention. And even though Congress declared privacy to be a fundamental right in 1974, it established no cabinet office or institutional framework to focus on the role of data protection and digital technology in our society. Consequently, during these days of COVID-19, there is no senior government official responsible for taking account of and balancing the trade-offs between privacy and public health.
Insider trading and the potential misuse of material nonpublic information (MNPI) have long been areas of intense focus of the U.S. Securities and Exchange Commission’s (the SEC) examination and enforcement programs. Recent SEC actions reflect a trend toward increased scrutiny of the potential for investment advisers to receive — and possibly to misuse — MNPI as a result of frequent interactions with the issuers in their investment portfolios, even where there is no evidence of misuse. Even in instances where the SEC does not allege that insider trading actually occurred, these actions reflect that investment advisers may face challenging regulatory examinations, enforcement actions and civil money penalties if the SEC alleges that an investment adviser’s policies and procedures were not adequately and effectively designed, implemented and enforced to address the potential for such misconduct. Accordingly, we suggest best practices with respect to the design and implementation of policies and procedures relating to the treatment of MNPI.
On June 1, 2020, the Criminal Division of the U.S. Department of Justice (DOJ) published an updated version of its “Evaluation of Corporate Compliance Programs” guidance. This is the third version of the document, with the DOJ having issued the guidance in 2017 (which we analyzed here) and revised it in April 2019 (which we analyzed here). This further revision is another reminder of the DOJ’s heightened focus and increasing sophistication in evaluating compliance programs during investigations. While the overall structure of the guidance generally remains consistent with the last version, the revisions provide additional insight into the DOJ’s expectations for corporate compliance programs. More specifically, the revisions highlight the importance of an adequately resourced and empowered compliance department, a constantly evolving compliance program based on the company’s current risk profile and relevant compliance issues, and the use of key compliance metrics to test a program’s effectiveness.
On June 1, 2020, California’s Office of the Attorney General (“AG”) moved one step closer to finalizing the California Consumer Privacy Act (“CCPA”) regulations when the AG submitted proposed final regulations for review and approval by California’s Office of Administrative Law (“OAL”). This submission signals the end of the AG’s CCPA regulation drafting process that began in early 2019. If the OAL approves the proposed final regulations, they will be finalized and enforceable by the AG, subject to any legal challenges.
On 19 February 2020, the European Commission published a white paper on the use of artificial intelligence (“AI”) in the EU (the “White Paper”). The White Paper forms part of Commission President Ursula von der Leyen’s digital strategy, one of the key pillars of her administration’s five-year tenure, recognising that the EU has fallen behind the US and China with respect to the strategic deployment of AI. To tackle this problem, the Commission proposes a common EU approach to ‘speed up the uptake’ of AI in the EU, whilst also tackling the human and ethical implications of AI’s fast-growing use in the EU, including the possible downsides of its use, such as opaque decision-making and hidden, embedded gender and racial discrimination. In order to achieve a common EU approach to AI, and to create “trustworthy” AI that can rival developments in the US and China, the Commission proposes the creation of a regulatory framework for AI.