2023 is rapidly becoming the year of AI policy and regulation. A particular focus of regulatory concern is AI’s impact on employees, and the U.S. Equal Employment Opportunity Commission (EEOC) is not sitting on the sidelines. On January 31, 2023, the EEOC held a public hearing to examine the use of automated systems, including artificial intelligence (AI), in employment decisions. The hearing, titled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier,” continues the work of the Artificial Intelligence and Algorithmic Fairness Initiative, which the EEOC launched in 2021. Through this initiative, the EEOC has already published guidance titled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.” Below are a few high-level takeaways from the hearing:
- Auditing: Many panelists discussed the importance of auditing AI tools and considered what role such audits should play, including whether audits should be recommended or required, and whether they should be conducted by a third party, self-conducted, or provided by the EEOC. Many panelists stressed the importance of frequent audits, as algorithmic models are constantly changing (or “learning”).
- Considerations surrounding data used for AI training: Several panelists discussed the importance of data in AI, including considerations related to the source, type, and amount of data collected and used by an AI system.
- Explainability and Transparency: Panelists stressed the importance of transparency and debated how much knowledge individuals should have about AI-driven tools. Many agreed that, at a minimum, applicants should be informed of the use of AI hiring systems and provided with disclosures sufficient to understand whether discrimination has occurred.
- Existing EEOC Law: Some panelists questioned whether the laws the EEOC currently enforces can be effectively applied to AI. For instance, certain panelists questioned whether the EEOC’s Uniform Guidelines on Employee Selection Procedures (which include the four-fifths rule) could be effectively used to combat employment discrimination in the case of AI. Others recommended that the EEOC release guidance on how the Uniform Guidelines should govern the use of variables in algorithmic hiring and the design of automated hiring systems. Still other panelists suggested the EEOC review and incorporate existing AI frameworks, such as the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework, the draft EU AI Act, and the Center for Democracy and Technology’s Civil Rights Standards for 21st Century Employment Selection Procedures. These suggestions were coupled with comments that the EEOC should increase its pursuit of enforcement actions to promote accountability among employers and vendors.
- Coordination with Existing Laws and Guidance: Several panelists suggested coordination with the federal government and agencies, as well as with the states, to solve these issues. For instance, reference was made to how the EEOC can work within the Blueprint for an AI Bill of Rights, collaborate with the Federal Trade Commission, and coordinate with New York City on Local Law Int. No. 144, which regulates the use of automated employment decision tools. Other panelists referenced the Illinois Artificial Intelligence Video Interview Act, the California Fair Employment and Housing Council’s proposed regulations on Automated Decision Systems, and other similar laws.
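For readers unfamiliar with the four-fifths rule referenced in the takeaways above, it is at bottom a simple arithmetic comparison of selection rates. The sketch below illustrates the calculation with hypothetical numbers; it is a simplified illustration of the rule of thumb in the Uniform Guidelines, not legal advice or a complete adverse-impact analysis.

```python
# Illustrative sketch of the four-fifths rule of thumb from the EEOC's
# Uniform Guidelines on Employee Selection Procedures. All figures are
# hypothetical; this is not legal advice.

def adverse_impact_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Return the ratio of group A's selection rate to group B's.

    Under the four-fifths rule of thumb, a ratio below 0.8 (four-fifths)
    is generally regarded as evidence of adverse impact.
    """
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# Hypothetical example: 30 of 100 applicants from group A are selected
# (rate 0.30) versus 60 of 120 from group B (rate 0.50).
ratio = adverse_impact_ratio(30, 100, 60, 120)
print(f"impact ratio: {ratio:.2f}")  # 0.30 / 0.50 = 0.60
print("below four-fifths threshold:", ratio < 0.8)  # True
```

Part of the debate at the hearing was whether a threshold test of this kind, designed for traditional selection procedures, remains a meaningful check when the selection tool is a continuously retrained algorithm.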
Written testimony from the panelists can be found here.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.