NYC Automated Decision-Making Task Force Forum Provides Insight Into Broader Efforts to Regulate Artificial Intelligence
More and more entities are deploying machine learning and artificial intelligence to automate tasks previously performed by humans. These efforts carry real benefits, such as enhanced operational efficiency and reduced costs, but they also raise concerns about their potential impact on society, particularly as computer algorithms are increasingly used to determine important outcomes such as individuals’ treatment within the criminal justice system.
This mixture of benefits and concerns is starting to attract the interest of regulators. Efforts in the European Union, Canada, and the United States have initiated an ongoing discussion around how to regulate “automated decision-making” and what principles should guide it. And while not all of these regulatory efforts will directly implicate private companies, they may nonetheless provide insight for companies seeking to build consumer trust in their artificial intelligence systems or better prepare themselves for the overall direction that regulation is taking.
It was against this backdrop that New York City’s (“NYC’s”) Automated Decision Systems (“ADS”) Task Force hosted its first public forum on April 30, 2019. The ADS Task Force was created in January 2018, when New York City became the first U.S. jurisdiction to pass a law seeking to develop a set of principles around automated decision-making and the profiling of personal data. The Task Force will provide the Mayor with recommendations for how city agencies should approach ADS later this year.
Automated Decision-Making: The Regulatory Backdrop
As noted above, regulators are starting to take action given emerging concerns that certain types of ADS may produce negative social outcomes if they are not designed properly.
The most prominent such regulation is the European Union’s recently enacted General Data Protection Regulation (“GDPR”). Article 22 of the GDPR states that, with some exceptions, data subjects “shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” The GDPR came into force last year, and certain European Data Protection Authorities (“DPAs”) are already beginning to take action under Article 22. For instance, on April 27, 2019, the Finnish DPA relied on Article 22 to order a company to modify its automated decision-making practices for assessing creditworthiness.
Around the same time, the United Kingdom’s DPA issued guidance encouraging companies to perform “meaningful” human reviews of ADS to ensure compliance with Article 22.
Legislation on automated decision-making has also made its way across the pond. Canada has enacted a Directive on Automated Decision-Making, which requires, among other things, algorithmic impact assessments and transparency about ADS, and U.S. jurisdictions are starting to get in on the act as well. In particular, the “Algorithmic Accountability Act,” introduced in the U.S. Congress on April 10, 2019, would seek to reduce bias and ensure fairness in ADS by imposing obligations on companies that utilize such systems. There was also an effort to enact ADS legislation in the state of Washington earlier this year, with the sponsor of the bill promising to renew the effort in 2020.
As noted above, the first U.S. jurisdiction to actually enact an ADS law was New York City. NYC’s Local Law 49 created a Task Force to recommend procedures and evaluation criteria for NYC agencies that utilize ADS to determine outcomes that affect the public. The Task Force has received international attention, and its membership draws from a diverse array of city agencies, universities, and civil rights organizations. Its recently hosted first forum may provide some insight into the issues occupying policymakers’ attention in these early days of ADS regulation, with two areas of focus standing out.
NYC Forum Focus Area #1: Definition and Scope
Local Law 49 provides the following definition:
The term “automated decision system” means computerized implementations of algorithms, including those derived from machine learning or other data processing or artificial intelligence techniques, which are used to make or assist in making decisions.
Despite the relatively broad definition in Local Law 49, discussions at the ADS Task Force forum indicated a desire on the part of policymakers to focus on systems with the greatest risk of producing unwanted bias, disparate impact on certain groups, or other economic harm. In his remarks, Chair Jeff Thamkittikasem suggested that while the literal definition of ADS could include a system “as simple as a pocket calculator,” the ADS Task Force was concerned primarily with the most complex systems whose decisions would have the greatest impact on an individual’s job prospects, financial outcomes, or similar opportunities. Because the ADS Task Force is concerned with city agencies, Thamkittikasem pointed to systems that decide middle school placements as a type of ADS that could warrant scrutiny.
The analogous regulatory efforts mentioned previously paint a similar picture. Article 22 of the GDPR regulates only automated decisions that “produc[e] legal effects concerning [the data subject] or similarly significantly affect[] him or her.” Moreover, recent guidance issued by the United Kingdom’s DPA states that two primary factors the agency would consider in evaluating an organization’s ADS under GDPR Article 22 are the extent to which a system (i) poses a risk of automation bias or (ii) is too complex to be interpreted by a human reviewer. The Algorithmic Accountability Act similarly focuses on a subset of ADS. While the bill contains a broad definition much like Local Law 49’s, it imposes heightened requirements with respect to “high risk automated decision systems”: those that pose a significant risk to individuals due to their potential for compromising personal data, causing unfair or discriminatory outcomes, or producing similar harms.
NYC Forum Focus Area #2: Factors to Consider in ADS Impact Assessments
Discussions at the ADS Task Force forum also provided insight into the steps organizations could take to implement ADS securely and effectively. Andrew Nicklin, an advisor to the ADS Task Force from Johns Hopkins University, suggested that agencies should consider multiple factors when assessing the risk of an ADS (a schematic sketch of such an assessment follows the list below). These factors include:
- Whether the proposed system would be “continual and operationalized” or would be utilized on a “one-off” basis;
- The number of data subjects who would potentially be affected by the system;
- The set of negative impacts that could result from the system (e.g., loss of job opportunity, negative credit impacts, etc.); and
- The duration of these negative impacts.
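To make these factors concrete, the following is a minimal sketch of how an assessment team might encode them. All field names, thresholds, and the tiering logic are hypothetical assumptions for illustration; nothing here is prescribed by the Task Force or any regulation discussed above.

```python
# Illustrative only: encoding Nicklin's four factors in a hypothetical risk profile.
from dataclasses import dataclass
from typing import List


@dataclass
class ADSRiskProfile:
    name: str
    continual_and_operationalized: bool  # ongoing use vs. a "one-off" analysis
    affected_subjects: int               # how many people the system could affect
    potential_harms: List[str]           # e.g., "loss of job opportunity"
    harm_duration_months: int            # how long the negative impacts persist


def risk_tier(profile: ADSRiskProfile) -> str:
    """Assign a coarse risk tier by counting which factors are elevated.

    The thresholds are arbitrary placeholders; a real assessment would
    calibrate them to the agency's or company's own domain.
    """
    elevated = sum([
        profile.continual_and_operationalized,
        profile.affected_subjects > 1_000,
        len(profile.potential_harms) > 0,
        profile.harm_duration_months > 12,
    ])
    if elevated >= 3:
        return "high"
    return "moderate" if elevated == 2 else "low"


# Hypothetical example inspired by the school-placement systems mentioned above.
placements = ADSRiskProfile(
    name="middle school placement matcher",
    continual_and_operationalized=True,
    affected_subjects=75_000,
    potential_harms=["loss of educational opportunity"],
    harm_duration_months=36,
)
print(risk_tier(placements))  # -> "high"
```

A real assessment would weigh these factors qualitatively rather than through fixed thresholds; the point is simply that each of Nicklin’s factors maps naturally onto a field an organization can record, score, and revisit over time.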
Nicklin also stressed the need for gatekeeping mechanisms and ongoing evaluation methods to maintain ADS, including periodic ADS assessments, boilerplate language to be included in material agreements, regular education and training of employees, and the use of external audits and recommendations. Notably, impact assessments are already required for certain ADS under Article 35(3)(a) of the GDPR and would also be required for “high risk automated decision systems” under the current version of the Algorithmic Accountability Act. The sketch below illustrates one simple way an organization might track such a periodic-review cadence.
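This is a minimal sketch assuming an annual review interval and invented system names; neither the cadence nor the inventory is drawn from the GDPR or any bill discussed above.

```python
# Illustrative only: flagging ADS whose periodic assessment is overdue.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # assumed annual assessment cadence


def overdue_reviews(last_reviewed, today):
    """Return, sorted, the systems whose last assessment exceeds the cadence."""
    return sorted(
        system
        for system, reviewed_on in last_reviewed.items()
        if today - reviewed_on > REVIEW_INTERVAL
    )


inventory = {
    "benefits eligibility screener": date(2018, 3, 1),
    "school placement matcher": date(2019, 1, 15),
}
print(overdue_reviews(inventory, today=date(2019, 4, 30)))
# -> ['benefits eligibility screener']
```

However simple, a record like this also produces a paper trail that could help document compliance when a formal impact assessment is required.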
Conclusion: Key Takeaways
The next public forum of the ADS Task Force will take place on May 30, 2019 and will focus in particular on mechanisms to ensure transparency in the use and deployment of ADS. Companies seeking to stay “ahead of the curve” should remain cognizant of these (and other) regulatory efforts and the ways in which their activities could be subject to GDPR Article 22 scrutiny or other future regulatory action around ADS. Moreover, in light of the regulatory activity to date and the growing body of guidance in this area, stakeholders seeking to be proactive could begin by “taking stock” of processes within their organizations that could constitute ADS under emerging regulations. Such efforts could include the creation of a clearly defined inventory of such systems, the decisions they produce, and the criteria they apply (one possible shape for such an inventory record is sketched below). Particularly if they are subject to the GDPR, organizations could also consider integrating a dedicated ADS evaluation procedure into their established risk assessment frameworks. Finally, as discussions continue and principles become more established, companies should continue to assess how those principles align with their own goals and objectives around the use of ADS.
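As a starting point for that “taking stock” exercise, the following sketch shows one possible shape for an internal ADS inventory record, capturing each system, the decisions it produces, and the criteria it applies. All field names are hypothetical assumptions, not a prescribed schema.

```python
# Illustrative only: one possible shape for an internal ADS inventory record.
from dataclasses import dataclass
from typing import List


@dataclass
class ADSInventoryEntry:
    system_name: str
    owner: str                          # team accountable for the system
    decisions_produced: List[str]       # outcomes the system makes or assists with
    criteria_used: List[str]            # inputs or features driving each decision
    subject_to_gdpr: bool = False       # flags potential Article 22 / DPIA exposure
    last_impact_assessment: str = ""    # ISO date of the most recent review, if any


# A hypothetical entry echoing the creditworthiness example discussed above.
entry = ADSInventoryEntry(
    system_name="creditworthiness scorer",
    owner="risk analytics",
    decisions_produced=["approve or deny a credit line"],
    criteria_used=["payment history", "estimated income"],
    subject_to_gdpr=True,
    last_impact_assessment="2019-04-01",
)
```

An inventory of this kind can then feed directly into an organization’s existing risk assessment frameworks as ADS-specific regulatory obligations crystallize.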