Nick Armstrong, Senior Director of Digital Enablement at CAI and co-chair of the ISPE Community of Practice on AI, specializes in the intersection of artificial intelligence and regulated industries. With a passion for driving innovation while maintaining compliance, Nick brings valuable insights into how cutting-edge technologies can transform pharmaceutical processes. In the following blog, Nick delves into the FDA’s new framework for AI credibility, providing a thoughtful analysis of its implications for building trust in AI and ensuring its responsible adoption in the pharmaceutical industry.
The pharmaceutical industry is no stranger to innovation. For decades, new technologies have transformed how we develop, manufacture, and deliver life-saving drugs. Now, artificial intelligence (AI) is at the forefront of this transformation, offering unprecedented opportunities to improve efficiency, accuracy, and insight across the drug product lifecycle. But as with any powerful tool, the promise of AI comes with challenges—chief among them, ensuring that these models are trustworthy and fit for purpose in a highly regulated environment.
The FDA’s new draft guidance, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products, provides a blueprint for navigating this challenge. At its core is a risk-based credibility assessment framework designed to help sponsors establish and evaluate the credibility of AI models used to support regulatory decisions. This framework doesn’t just lay the groundwork for compliance; it sets the stage for building trust in AI’s role in safeguarding patient safety, drug quality, and efficacy.
Building Credibility, Step by Step
The framework is anchored in a seven-step process, which I see as a clear signal from the FDA: AI is welcome, but it must be accountable. It starts with defining the question of interest and the context of use (COU). This is more than just clarifying what the AI model will do—it’s about aligning the model’s purpose with regulatory expectations and ensuring it serves as a reliable piece of the decision-making puzzle.
Next comes assessing model risk. This is where the framework demonstrates its pragmatism, focusing on two key factors: model influence (how much weight the AI output carries in the overall decision) and decision consequence (the impact of a wrong decision). For example, an AI model used to predict patient eligibility for outpatient monitoring in clinical trials will likely carry higher risk than one used for visual inspection of manufacturing fill levels, simply because the stakes for patient safety are so much higher.
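The interplay of these two factors can be sketched as a simple risk matrix. The three-level scale and the scoring below are illustrative assumptions for this post, not the FDA's official categories or methodology:

```python
# Illustrative sketch of a model-risk matrix combining model influence
# (weight of the AI output in the decision) and decision consequence
# (impact of a wrong decision). The levels and the scoring rule are
# assumptions for illustration, not the FDA's official scheme.

from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def model_risk(influence: Level, consequence: Level) -> Level:
    """Combine model influence and decision consequence into overall model risk."""
    score = influence + consequence
    if score >= 5:
        return Level.HIGH
    if score >= 4:
        return Level.MEDIUM
    return Level.LOW

# Mirroring the examples above: a patient-eligibility model carries both
# high influence and high consequence, while a fill-level inspection model
# backed by downstream quality checks sits much lower on both axes.
eligibility_risk = model_risk(Level.HIGH, Level.HIGH)   # Level.HIGH
fill_level_risk = model_risk(Level.LOW, Level.MEDIUM)   # Level.LOW
```

In a real credibility assessment the categorization would be qualitative and justified in the sponsor's documentation; the point of the matrix is simply that rigor should scale with risk.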
Steps four through seven guide sponsors in planning, executing, and documenting the credibility assessment activities while ensuring transparency and flexibility to address deviations along the way. This iterative process ensures that AI models are rigorously tested and their outputs can be trusted, not just by the FDA but also by the broader pharmaceutical ecosystem.
Addressing Challenges with Clarity
What I appreciate about the FDA’s approach is how it directly addresses some of the most significant challenges in AI adoption. Concerns like data bias, model transparency, and performance drift are explicitly acknowledged and incorporated into the framework. For instance, the guidance emphasizes that data must be “fit for use”—relevant, reliable, and representative of the target population or manufacturing process. It also stresses the importance of methodological transparency, requiring sponsors to detail the methods and processes used to develop their AI models.
This framework represents more than just regulatory guidance—it’s a call to action for those of us working at the intersection of AI and pharmaceutical manufacturing. By adhering to these principles, we can not only meet regulatory expectations but also build AI solutions that are robust, credible, and ultimately transformative for the industry.
Moving Forward
The FDA’s risk-based credibility assessment framework is an important step in shaping the future of AI in pharmaceuticals. As AI strategy consultants, our role is to guide organizations in interpreting and implementing this guidance, ensuring their innovations are both compliant and impactful. With the right approach, AI can deliver on its promise while upholding the trust of regulators, patients, and the public.
The path forward is clear: build trust first, then innovate boldly.
Related Links
- FDA Guidance Announcement: Considerations for the Use of Artificial Intelligence in Pharma
  View the FDA’s latest guidance that serves as the foundation for this blog series.
- Part 2: Sustaining Reliability: Life Cycle Maintenance of AI Models in Drug Development
  Dive deeper into strategies for ensuring AI reliability throughout its lifecycle, including performance monitoring and risk-based approaches.