EIOPA Publishes Consultation on Opinion on AI Governance and Risk Management

On February 12, 2025, the European Insurance and Occupational Pensions Authority (“EIOPA”) published a consultation on its draft opinion on artificial intelligence (“AI”) governance and risk management (the “Opinion”).

The Opinion is addressed to supervisory authorities and covers the activities of both insurance undertakings and intermediaries (hereafter jointly referred to as “Undertakings”), insofar as they may use AI systems in the insurance value chain.

The Opinion aims to clarify the main principles and requirements in insurance sectoral legislation for insurance AI systems that are neither prohibited nor deemed high-risk under Regulation (EU) 2024/1689 (the “AI Act”). The Opinion provides guidance on how to apply insurance sectoral legislation to AI systems that were not common or available when the AI Act was passed. The approach sets high-level supervisory expectations for the governance and risk management principles that undertakings should follow to use AI systems responsibly, taking into account the risks and proportionality of each case.

A summary of the key points from the Opinion is as follows:

  • The Opinion posits that Undertakings should assess the risks of AI across their various use cases (noting that risk levels vary among AI use cases that are not prohibited or considered high-risk under the AI Act). As part of this assessment, Undertakings should consider these risks and develop governance and risk management measures taking into account the following:
    • processing data on a large scale;
    • the sensitivity of the data;
    • the extent to which the AI system can act autonomously;
    • the potential impact an AI system may have on the right to non-discrimination;
    • the extent to which an AI system is used in a line of business that is important for the financial inclusion of customers or that is compulsory by law; and
    • the extent to which an AI system is used in critical activities that can impact the business continuity of an Undertaking.
  • The Opinion further explains that following the assessment, Undertakings should develop proportionate measures to ensure the responsible use of AI. This implies that governance and risk management measures may be tailored to the specific AI use case to achieve the desired outcome.
  • In line with Art. 41 of Directive 2009/138/EC (Solvency II), Art. 25 of the Insurance Distribution Directive (“IDD”), and Arts. 4, 5, and 6 of the Digital Operational Resilience Act, Undertakings should develop proportionate governance and risk management systems, with particular focus on the following:
    • fairness and ethics;
    • data governance;
    • documentation and record keeping;
    • transparency and explainability;
    • human oversight; and
    • accuracy, robustness, and cybersecurity.
  • The Opinion recommends that Undertakings using AI systems define and document in a policy (e.g., an IT strategy, data strategy, or a specific AI strategy) their approach to the use of AI within the organisation. This approach should be regularly reviewed.
  • Undertakings are also recommended to implement accountability frameworks, regardless of whether the AI system was developed in-house or by third parties.
  • The EIOPA encourages a customer-centric approach to AI governance to ensure that customers are treated fairly and according to their best interest in line with Art. 17 of the IDD and EIOPA’s 2023 Supervisory Statement on Differential Pricing Practices. This includes developing a culture that includes ethics and fairness guidance and training.
  • The Opinion includes a focus on data used to train AI, explaining that it must be complete, accurate, and free of bias, and that the outputs of AI systems should be meaningfully explainable so that potential bias can be identified and mitigated. The outcomes should be regularly monitored and audited. It also states that risk management systems should include a data governance policy, as well as policies addressing the sufficiency and quality of relevant data for underwriting and reserving processes.
  • Aligning with several other governance frameworks, the Opinion emphasizes that adequate redress mechanisms should be in place to allow customers to seek redress when negatively affected by an AI system.
  • It further emphasizes that appropriate records should be kept on the training/testing data used and modelling methodologies to ensure reproducibility and traceability.
  • The EIOPA further explains that measures should be adopted to ensure the outcomes of AI systems can be meaningfully explained. Such explanations should be adapted to the relevant AI use case and the recipient stakeholders.
  • The Opinion also highlights internal controls for an effective compliance and risk management program, including:
    • administrative, management, or supervisory body members responsible for the use of AI systems in the organisation, with such members having sufficient knowledge of their use and the potential risks to the organisation;
    • compliance and audit functions to ensure compliance with applicable laws and regulations;
    • a data protection officer to ensure that all data processed by AI systems is in compliance with applicable data protection rules along with the potential creation of an AI officer role; and
    • sufficient training for staff to ensure effective human oversight of the AI systems.
  • The AI system should perform consistently throughout its lifecycle with respect to its levels of accuracy, robustness, and cybersecurity (proportionate to its use case). The Opinion recommends that metrics be used to measure the performance of the AI system, and that such systems be resilient to unauthorised third parties attempting to alter their use, outputs, or performance.

The Opinion does not propose any additional legislation or amendments to existing law. Rather, the Opinion makes clear that the EIOPA merely seeks to provide insurance sector-specific guidance on the operation of AI systems under existing EU statutes, applying a principle-based approach. It should be noted that EIOPA is engaging with the European Commission’s AI Office and additional commentary may be published in the future.

Responses to the Opinion must be submitted by May 12, 2025. EIOPA will then consider the feedback received and revise the Opinion accordingly.

This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.