European Commission Publishes Ethics Guidelines for Trustworthy Artificial Intelligence

The High-Level Expert Group on Artificial Intelligence (“AI HLEG”), an independent expert group set up by the European Commission in June 2018 as part of its AI strategy, has published its final Ethics Guidelines for Trustworthy Artificial Intelligence (“AI”) (the “Guidelines”).

These Guidelines form part of the Commission’s wider focus on AI. Most recently, on July 16, President-elect of the European Commission Ursula von der Leyen stated in her proposed political guidelines: “In my first 100 days in office, I will put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence…”.

The AI HLEG appreciates that AI has the potential to benefit a wide range of sectors and has a wide variety of uses. However, it acknowledges that the use of AI also brings new challenges and raises various legal and ethical questions. The Guidelines have been developed with this in mind, with a view to providing a framework to achieve and operationalize Trustworthy AI. In particular, an underlying theme of the Guidelines is balance: identifying and addressing potential conflicts between the various requirements.

The Guidelines state that Trustworthy AI has three components: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, both from a technical and social perspective, given that even with good intentions, AI systems can cause unintentional harm. The Guidelines are clear that these components should be met throughout the system’s entire life cycle and provide guidance largely in relation to the second and third components: fostering and securing ethical and robust AI.

The Guidelines identify the ethical principles and their correlated values that must be respected in the development, deployment and use of AI systems. For example, the Guidelines make clear that AI systems should be developed, deployed and used in a way that adheres to the ethical principles of respect for human autonomy, prevention of harm, fairness and explicability. The Guidelines emphasize that attention should be paid to potential tensions between these principles, and highlight that particular attention should be given to vulnerable groups and their interactions with AI.

Based on these fundamental rights and ethical principles, the Guidelines then set out seven key requirements that AI systems should meet in order for Trustworthy AI to be realized:

  1. Human agency and oversight. AI systems should support human agency and fundamental rights, and not decrease, limit or misguide human autonomy. This will require proper oversight mechanisms, including human-in-the-loop (i.e., the capability for human intervention in every decision cycle of the system), human-on-the-loop (i.e., the capability for human intervention during the design cycle of the system and for monitoring the system’s operation) and human-in-command (i.e., the ability to oversee the overall activity of the AI system, including its broader impacts, and to decide when and how to use the system in any particular situation) approaches.
  2. Technical robustness and safety. Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of the AI system. This includes ensuring that there is a fallback plan in case something goes wrong, as well as ensuring that systems are accurate, reliable and reproducible.
  3. Privacy and data governance. Individuals should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.
  4. Transparency. The traceability of AI systems should be ensured. The Guidelines are clear that humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
  5. Diversity, non-discrimination and fairness. AI systems should consider the whole range of human abilities, skills and requirements, and should be available to all. Unfair bias should be avoided, as it could have multiple negative implications including the marginalization of vulnerable groups.
  6. Societal and environmental well-being. AI systems should benefit all human beings and must be sustainable and environmentally friendly.
  7. Accountability. Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. A key element of this will be the auditability of AI systems and adequate and accessible redress.

These requirements should be applied throughout the development, deployment and use of AI, and the Guidelines consider that both technical and non-technical methods can be used to implement them. Again, the Guidelines are clear that stakeholders should be mindful of the fundamental tensions that may arise between different principles and requirements. This will require the continuous identification, evaluation, documentation and communication of these trade-offs and their solutions.

Finally, the Guidelines set out a Trustworthy AI assessment list reflecting the seven requirements above, which is intended to operationalize those key requirements. The list is non-exhaustive and is intended to be applied flexibly depending on the AI use in question.

Looking forward, a forum has been created to exchange best practices for the implementation of Trustworthy AI, and the assessment list, which offers guidance on the practical implementation of each of the above requirements, will undergo a piloting process in which interested stakeholders can participate and provide feedback. The AI HLEG and the European Commission anticipate that the principles of these Guidelines will reach beyond Europe to a global level.