On April 13, 2023, the United States Department of Commerce National Telecommunications and Information Administration (“NTIA”) published a request for comment (“RFC”) seeking public input on Artificial Intelligence (“AI”) accountability. The RFC seeks to understand which measures—both self-regulatory and regulatory—have the capacity to ensure that AI systems are “legal, effective, ethical, safe, and otherwise trustworthy.” The RFC adopts a broad definition of “AI systems,” noting that they include all automated or algorithmic systems that generate predictions, recommendations, or decisions.
NTIA’s comment period arises amidst a flurry of deliberation on the moral and social implications of advanced AI deployment. As a recent editorial in the Wall Street Journal by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher posits, AI has the potential to herald not only an economic revolution, but an intellectual revolution, posing a variety of philosophical challenges not seen since the Age of Enlightenment. Responding to the moment, industry leaders like Microsoft’s Brad Smith have recognized that the dawn of the AI age calls for multidisciplinary collaboration on responsible AI principles that nevertheless uphold the competitiveness and national security of the United States. And Senator Chuck Schumer is spearheading a legislative effort to develop a flexible and resilient AI policy framework across the federal government, with an eye toward ensuring that the United States remains a leader in the space.
Others have been less optimistic that AI guardrails can be developed in tandem with the drive for AI innovation. In an open letter that has garnered more than 20,000 signatures, a group of AI experts and industry executives, including Elon Musk and Steve Wozniak, publicly called for a pause on AI systems more powerful than GPT-4; editorial pages are increasingly being populated with calls to slow down the race to “God-like AI.” Google’s Chief Executive has gone as far as to call for a global regulatory AI framework akin to the treaties used to regulate the nuclear arms race.
The RFC encapsulates many of the themes at the heart of the debates surrounding AI and poses a series of questions on a variety of topics related to AI accountability, including:
- AI assessments and audits. The core focus of the RFC is on what measures, if any, should be employed to certify, audit, or assess AI accountability. The NTIA is also interested in how audit or assessment results should be communicated to external stakeholders and where in the AI supply chain such accountability measures should coalesce.
- Identification of AI-related risk. The NTIA broadly identifies sources of AI-related harm in the RFC, including harms to worker and workplace health and safety, the health and safety of marginalized communities, the democratic process, and human autonomy. The RFC further asks whether accountability measures should be scoped to the risk posed by the AI system, and, if so, how risk should be calculated, and by whom.
- Existing frameworks. The NTIA is also interested in the work already being done on AI accountability, both in government and in the private sector, and seeks information regarding all laws that currently require AI audits and assessments. Additionally, the RFC seeks comment on what other frameworks, such as human rights law or data privacy law, can or should be adopted in the context of AI accountability.
- Competitiveness and innovation. The RFC also seeks to understand what impact, if any, the imposition of accountability mechanisms may have on innovation and the competitiveness of U.S. AI-developers.
- The role of government. Finally, many questions are designed to seek input on the role of the federal government in the accountability ecosystem: is a federal law on AI appropriate and, if so, what should it look like, and which agency should be charged with enforcement? Can AI accountability practices have a meaningful impact in the absence of legal standards?
The RFC clearly states that advancing trustworthy AI is an important federal objective. It builds on and references other AI governance publications issued during the Biden Administration, such as the White House Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology (“NIST”) AI Risk Management Framework (“AI RMF”). The RFC notes that both of these voluntary frameworks “contemplate mechanisms to advance the trustworthiness of algorithmic technologies in particular contexts and practices.”
While there is no consensus over how AI might be regulated, emerging governance proposals have focused on fairness, transparency, and accountability. NTIA’s RFC builds on those trends and is one of the most significant actions taken by the federal government thus far. While the NTIA intends only to draft and issue a report on AI policy development, comments to the RFC may inform the trajectory of future legislation or regulation.
“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” said Alan Davidson, Assistant Secretary of Commerce for Communications and Information and NTIA Administrator.
All written comments must be submitted by June 12, 2023.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.