Globally, the rapid advancement of artificial intelligence (AI) and machine learning (ML) raises fundamental questions about how these technologies can be used. Drug approval authorities are now also taking part in this discussion, resulting in emerging and evolving guidelines and principles for drug companies.
This Sidley Update looks at the emerging views of the EU, U.S., and the UK drug approval authorities.
EU – Current Developments

In June 2023, the European Medicines Agency (EMA) published a draft reflection paper (AI Paper) setting out how the EMA expects both marketing authorization (MA) applicants and holders to use AI and ML. The AI Paper is open for public comment until December 31, 2023.
The AI Paper advocates a “human-centric approach” to guide the use of AI and ML, and it confirms the following three main principles:
- MA holders and applicants are responsible for ensuring that any AI or ML used to support the development of a drug complies with existing requirements (e.g., GxP standards on data and algorithm governance).
- MA holders and applicants are required to adopt a “risk-based approach” to assess the impact of AI and ML on the benefit-risk balance of the drug.
- AI presents great promise for enhancing all phases of the drug lifecycle, and EMA encourages companies, where appropriate, to start integrating data science competence within the lifecycle of drugs, from development and clinical trials to manufacturing and post-authorization, including pharmacovigilance.
For each drug lifecycle stage, the AI Paper provides high-level guidance on how AI systems can be integrated and regulated. It covers and provides general guidance for a) drug discovery, b) nonclinical development, c) clinical trials, d) product information, e) manufacturing, and f) the post-authorization phase.
For example, the AI Paper states that the use of AI in drug discovery is low risk for regulatory purposes, but that if AI is used to generate evidence in support of an MA application, or is intended to be combined with the use of a drug, nonclinical development principles should be followed. AI/ML used in clinical trials should adhere to guidelines such as good clinical practice, though other considerations, such as those applicable to decentralized clinical trials, may also be relevant. The AI Paper also reminds readers that medical devices and in vitro diagnostics incorporating AI/ML systems must comply with the applicable legal frameworks in addition to clinical trial-specific requirements. The use of AI in precision medicine is considered high risk, and the respective principles should therefore be followed, together with attention to specific risks relating to posology and to alternative treatments in case of technical failure. The AI Paper also addresses the use of AI in creating product information, in manufacturing, and in the post-authorization phase, emphasizing the importance of quality control and validation.
The AI Paper also examines technical aspects of AI/ML, including the acquisition of datasets, the training and testing of AI models, the performance assessment of those models, and their output. It further offers considerations for deploying AI models, including how the principles of interpretability and explainability apply to managing them.
The AI Paper is aligned with the EU’s proposed AI Act (see our Sidley Update here), which among other aspects adopts a risk-based approach to the use of AI. Following the same line as the proposed AI Act, the AI Paper underlines the importance of an ethical approach to the use of AI in drug development, and it cites the EU’s ethics guidelines for trustworthy AI.
While the EMA’s draft AI Paper leaves many aspects still to be regulated by further guidance and standards, it provides some general considerations for how AI systems interact with the drug lifecycle, and it gives a first, official “go ahead” to companies to use AI in their EU drug development programs.
U.S. – Current Developments

In May 2023, the U.S. Food and Drug Administration (FDA) published a discussion paper to promote dialogue on the application of AI/ML in the drug development process. The discussion paper is not formal agency guidance; rather, it reflects FDA’s interest in soliciting feedback on the opportunities and challenges of using AI/ML in drug development and in the development of devices intended for use with drug products.
Although the public comment period ended on August 9, 2023, the comments are available for review here. The paper discusses three main topics:
- The landscape of current and potential uses of AI/ML. These include, but are not limited to, drug discovery (reviewing large data sources to identify suitable biological targets); clinical trial recruitment (mining data from clinical databases and registries to identify suitable patients); review of real-world evidence to support predictive modeling and endpoint analysis; and incorporation of novel drug development tools (DDTs) into qualification programs, leveraging digital health technologies that could aid in the analysis of biomarkers or clinical outcome assessments.
- Considerations for the use of AI/ML. These include the risks and harms such use may introduce, such as the potential to amplify errors and pre-existing biases in the underlying datasets, as well as limited explainability that may prevent full transparency for users. These concerns have prompted the development of standards related to “explainability, reliability, privacy, safety, security and bias mitigation.”
- Stakeholder engagement. FDA welcomes the opportunity to exchange ideas with stakeholders on these issues and includes a series of questions for feedback within the paper. A workshop with stakeholders for further engagement is planned for September 26-27, 2023 (here).
UK – Current Developments
The UK’s Medicines and Healthcare products Regulatory Agency (MHRA) has yet to publish formal reflections on the use of AI in the drug lifecycle.
Though it remains to be seen whether MHRA will publish reflections similar to the EMA’s on the use of AI in the drug lifecycle, any such reflections would need to be consistent with the principles established in the UK’s AI White Paper, published by the Department for Science, Innovation and Technology on March 29, 2023, and examined in a previous Sidley Update. Such reflections or guidance would follow the UK’s sector-specific, principles-based approach. While there are unique considerations for the use of AI in the medicinal product lifecycle, future MHRA publications on the subject are likely to overlap, where appropriate, with the existing guidance on Good Machine Learning Practice for Medical Device Development: Guiding Principles, prepared by MHRA in conjunction with the U.S. FDA and Health Canada and published in October 2021.
Most recently, on August 31, 2023, the House of Commons Science, Innovation and Technology Committee published an interim report on the governance of AI in the UK. The Committee advocates the introduction of a “tightly-focussed AI Bill” in the next Parliament, which would give regulators a legal basis for paying due regard to the principles set out in the UK’s AI White Paper.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.