On June 10, 2020, the Financial Industry Regulatory Authority (FINRA) released its Artificial Intelligence (AI) in the Securities Industry Report (Report), the culmination of a two-year review by FINRA’s Office of Financial Innovation into the emerging challenges confronting broker-dealers (Firms) and other market participants as they introduce AI-based applications into their businesses. The Report provides an overview of AI technology, explores its diverse, multifaceted applications in the securities industry and identifies the challenges and legal considerations associated with leveraging this technology. FINRA requests industry feedback on the topics covered in the Report by August 31, 2020.
On 19 February 2020, the European Commission published a white paper on the use of artificial intelligence (“AI”) in the EU (the “White Paper”). The White Paper forms part of Commission President Ursula von der Leyen’s digital strategy, one of the key pillars of her administration’s five-year tenure, and recognises that the EU has fallen behind the US and China with respect to the strategic deployment of AI. To tackle this problem, the Commission proposes a common EU approach to ‘speed up the uptake’ of AI in the EU, whilst also addressing the human and ethical implications of AI’s fast-growing use, including possible downsides such as opaque decision-making and hidden, embedded gender and racial discrimination. To achieve a common EU approach, and to create “trustworthy” AI that can rival developments in the US and China, the Commission proposes the creation of a regulatory framework for AI.
More and more entities are deploying machine learning and artificial intelligence to automate tasks previously performed by humans. Such efforts carry with them real benefits, such as the enhancement of operational efficiency and the reduction of costs, but they also raise a number of concerns regarding their potential impacts on human society, particularly as computer algorithms are increasingly used to determine important outcomes like individuals’ treatment within the criminal justice system.
This mixture of benefits and concerns is starting to attract the interest of regulators. Efforts in the European Union, Canada, and the United States have initiated an ongoing discussion around how to regulate “automated decision-making” and what principles should guide it. And while not all of these regulatory efforts will directly implicate private companies, they may nonetheless provide insight for companies seeking to build consumer trust in their artificial intelligence systems or better prepare themselves for the overall direction that regulation is taking.
On January 18, 2019, the New York State Department of Financial Services (NYDFS) issued Circular Letter 2019-1 (the Circular Letter), addressing insurers’ use of external consumer data and information sources in underwriting for life insurance. The Circular Letter follows an investigation commenced by NYDFS into life insurers’ use of external data, initiated in light of reports that insurers were using algorithms and predictive models incorporating unconventional sources or types of external data. Among other things, the Circular Letter provides guidance that when insurers use external data sources in connection with underwriting decisions, (1) the use of external data sources must not result in any unlawful discrimination; (2) the underwriting or rating guidelines must be based on sound actuarial principles; and (3) life insurers must provide adequate consumer disclosures notifying insureds or potential insureds of their right to receive the specific reasons for any adverse underwriting decision based on such data.
The U.S. Department of Commerce, Bureau of Industry and Security (BIS) has published an advance notice of proposed rulemaking (ANPRM) initiating a 30-day public comment process regarding export controls for certain emerging technologies. The notice launches the implementation of a key provision of the Export Control Reform Act of 2018 (ECRA), part of the National Defense Authorization Act for fiscal year 2019 (NDAA). In the ECRA, Congress authorized BIS to establish controls on the export, reexport and transfer (in country) of “emerging and foundational technologies.” The ANPRM, including a list of the 14 proposed representative technology categories and subcategories subject to review, can be found here. Our prior updates on the NDAA and ECRA can be found here.
On August 7, 2018, a group of regulators from 11 jurisdictions published a consultation (the Consultation) on the Global Financial Innovation Network (the GFIN), which aims to promote international cooperation on innovation and the use of technology in financial services (FinTech) and in regulatory processes (RegTech).
The group — which includes the U.S. Consumer Financial Protection Bureau, the UK Financial Conduct Authority (the FCA), the Hong Kong Monetary Authority (HKMA) and the Monetary Authority of Singapore (MAS) — is one of the first major collaborative efforts on FinTech and RegTech issues among regulators in developed financial services markets. The Consultation builds on the FCA’s proposal earlier this year to create a “global sandbox” for innovative financial services firms.
This post summarizes the proposed role of the GFIN, the issues on which its founding regulators are consulting and how these may affect financial services firms.
This past year was marked by ever more significant data breaches, growing cybersecurity regulatory requirements at the state and federal levels and continued challenges in harmonizing international privacy and cybersecurity regulations. We expect each of these trends to continue in 2018.
As we begin this New Year, here is a list of the top 10 privacy and cybersecurity issues for 2018:
Singapore’s Personal Data Protection Commission (PDPC) has launched a public consultation on a proposed revision to the law that would require reporting of certain data breaches. Singapore currently uses a voluntary approach to data breach notifications, but, according to the PDPC, this has resulted in uneven notification practices. Under the proposals, it will be mandatory for organizations to inform customers of personal data breaches that pose any risk of impact or harm to the affected individuals as soon as the breaches are discovered. If an incident involves 500 or more individuals, organizations will need to notify the PDPC as soon as possible but no later than 72 hours after discovery of the breach. The proposals aim to allow individuals to take steps to protect their interests in the event of a data breach, for example, by changing their passwords.