How to Measure PQS Effectiveness Using the CAPA System

An ICH Q10 Challenge

 

Introduction

ICH Q10 ‘Pharmaceutical Quality System’ (PQS) was adopted in 2008. Implementation of ICH Q10 has been slow to gain traction, possibly because it is an unusual guideline in that ‘much of the content of ICH Q10 applicable to manufacturing sites is currently specified by regional GMP requirements’ and ‘ICH Q10 is not intended to create any new expectations beyond current regulatory requirements. Consequently, the content of ICH Q10 that is additional to current regional GMP requirements is optional.’

What is also of note in ICH Q10 is that it breaks the Pharmaceutical Quality System down into four elements:

  • Process Performance and Product Quality Monitoring System
  • Corrective Action and Preventive Action (CAPA) System
  • Change Management System
  • Management Review of Process Performance and Product Quality

While most traditional Pharmaceutical Quality Systems would incorporate these elements, they would not necessarily be the fundamental building blocks, and such systems would look quite different from a PQS designed exclusively around ICH Q10. These are some of the challenges with ICH Q10 implementation, but what are the benefits?

One of the key benefits to industry from ICH Q10 (Annex 1) is that demonstrating PQS Effectiveness should facilitate Regulatory Relief with regard to inspection frequency, post-approval changes, innovative approaches to process validation, and real-time release.

These are significant benefits, but they are predicated on a nebulous idea: how do you demonstrate PQS Effectiveness?

Let’s focus on how to demonstrate PQS Effectiveness one element at a time, starting with the CAPA system. ICH Q10 states that the ‘company should have a system for implementing corrective actions and preventive actions resulting from the investigation of complaints, product rejections, non-conformances, recalls, deviations, audits, regulatory inspections and findings, and trends from process performance and product quality monitoring. … CAPA methodology should result in product and process improvements and enhanced product and process understanding.’

Current EU and FDA regulations require Effectiveness Checks for each CAPA, and in theory, rolling all of these together should give a picture of overall effectiveness. In reality, it does not, because the industry standard is still focussed on recurrence of the ‘Event’ rather than on well-thought-out, active Effectiveness Checks tied specifically to the ‘Root Cause.’ Is there a way to look beyond individual CAPA effectiveness checks and perform a systemic evaluation?

The answer to this question is yes, and the solution is enabled by the embedded ICH Q10 principle that the level of effort should be commensurate with the level of risk.

Comparing the systemic level of effort of CAPAs over a given time frame to the systemic risk of the deviations (or other source data sets for CAPAs) over the same period delivers an overall Systemic CAPA Effectiveness Co-efficient:

 

Average CAPA Effort / Average Deviation Risk = Systemic CAPA Effectiveness Co-efficient

[Figure: Systemic CAPA Effectiveness Co-efficient scale]

Scoring on this scale provides a rapid systemic evaluation. A low Co-efficient indicates that the CAPAs are inadequate relative to the risk, a Co-efficient close to 1 indicates that the CAPAs are commensurate with the risk, and a high Co-efficient indicates that the CAPAs are excessive relative to the risk. While it may seem strange to consider a CAPA excessive, this systemic evaluation is about equating effort to risk, and sometimes the effort can be disproportionately large compared to the risk, resulting in inefficient use of both personnel and financial resources.
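
To make the calculation concrete, here is a minimal sketch in Python. The interpretation band boundaries (0.8 and 1.2) are illustrative assumptions, not values prescribed by ICH Q10 or this article; the input lists are scores assigned under the effort and risk frameworks described in the Methodology below.

```python
def capa_effectiveness_coefficient(capa_effort_scores, deviation_risk_scores):
    """Systemic CAPA Effectiveness Co-efficient: the average CAPA effort
    score divided by the average deviation risk score."""
    avg_effort = sum(capa_effort_scores) / len(capa_effort_scores)
    avg_risk = sum(deviation_risk_scores) / len(deviation_risk_scores)
    return avg_effort / avg_risk


def interpret(coefficient, low=0.8, high=1.2):
    """Band boundaries are illustrative assumptions; tune them to your PQS."""
    if coefficient < low:
        return "CAPAs inadequate relative to risk"
    if coefficient > high:
        return "CAPAs excessive relative to risk"
    return "CAPAs commensurate with risk"


# Example: effort scores for the CAPAs closed in the evaluation period
# vs. risk scores for the deviations raised in the same period.
coefficient = capa_effectiveness_coefficient([3, 1, 4, 2], [4, 3, 4, 2])
print(round(coefficient, 2), "->", interpret(coefficient))  # 0.77 -> inadequate
```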

Methodology

To do this evaluation, two separate data sets must be considered:

  • CAPA Data set
  • Source Data set (Deviation for this example)

A Standard Scoring Framework must be assigned to each:


CAPA Data Set

  • Effort Scoring Framework

Source Data Set

  • Risk Scoring Framework

The evaluation can be executed prospectively or retrospectively. Initially, it is recommended to perform this retrospectively to provide a baseline data set that is not biased by the process of introducing a metric.

The evaluation period must be fixed and identical for both data sets to ensure a valid correlation. It should be long enough to give an accurate assessment of the CAPA system by including the typical spectrum of source events for risk and CAPAs for effort; a period of six months to a year may be sufficient.
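
As a sketch of this step, assume each record exported from the CAPA and deviation systems carries a date field (the field names and the window below are hypothetical):

```python
from datetime import date


def filter_to_window(records, start, end):
    """Keep only records dated within the fixed evaluation window."""
    return [r for r in records if start <= r["date"] <= end]


# The same six-month window is applied to both data sets so that the
# effort and risk averages describe the same period.
start, end = date(2024, 1, 1), date(2024, 6, 30)

capa_records = [{"id": "CAPA-001", "date": date(2024, 2, 10), "effort_score": 3}]
deviation_records = [{"id": "DEV-014", "date": date(2024, 3, 5), "risk_score": 4}]

capas_in_window = filter_to_window(capa_records, start, end)
deviations_in_window = filter_to_window(deviation_records, start, end)
```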

 

CAPA Scoring System

First, consider the CAPA data set. There is an inherent hierarchy in CAPAs with respect to effectiveness. When the industry talks about the “Killer CAPA” or “100-year CAPA,” it is invariably referring to the CAPAs that provide an engineered solution to fix the Root Cause. The industry instinctively recognizes the value of these CAPAs but, by the same token, knows that “re-training”, no matter how focussed and well delivered, is just a stopgap. Using this knowledge of the inherent effectiveness of CAPAs, it is possible to build a scoring system.

[Figure: CAPA Effort Scoring Framework]

For each CAPA, a score is assigned, and an overall average is calculated. Where more than one CAPA is generated from a specific event, only the highest-scoring CAPA should be included, as in the sketch below.
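
A minimal sketch of this rule, assuming each CAPA record carries a hypothetical event_id (the source event that generated it) and an effort_score. The numeric values are assumptions standing in for the scoring framework figure above, with an engineered solution scoring highest and re-training lowest.

```python
from collections import defaultdict


def average_capa_effort(capas):
    """Average effort score across source events, keeping only the
    highest-scoring CAPA where one event generated several CAPAs."""
    best_per_event = defaultdict(int)
    for capa in capas:
        event = capa["event_id"]
        best_per_event[event] = max(best_per_event[event], capa["effort_score"])
    return sum(best_per_event.values()) / len(best_per_event)


# DEV-014 generated two CAPAs; only the engineered fix (score 4) counts.
capas = [
    {"event_id": "DEV-014", "effort_score": 4},  # engineered solution
    {"event_id": "DEV-014", "effort_score": 1},  # re-training
    {"event_id": "DEV-021", "effort_score": 2},  # procedural control
]
print(average_capa_effort(capas))  # (4 + 2) / 2 = 3.0
```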

 

Source Data Scoring System

As the source data for CAPA generation can come from multiple platforms (deviations, customer complaints, regulatory inspections, etc.), it may be appropriate to build an individual scoring system for each source. For this example, the scoring system is based on the deviation system. Risk is evaluated in terms of both patient safety and business impact. Most organizations have an informal quality risk assessment built into their deviation classification system, and if this has a sufficient number of classifications, e.g., four or more, then it could provide the basis for the scoring system.

[Figure: Deviation Risk Scoring Framework]

For each Deviation, a score is assigned, and an overall average is calculated.
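
For instance, a four-level deviation classification could map onto risk scores as follows. The level names and numeric values are illustrative assumptions standing in for the scoring framework figure above.

```python
# Hypothetical mapping from deviation classification to risk score.
DEVIATION_RISK_SCORE = {
    "critical": 4,  # direct patient-safety impact
    "major": 3,     # significant quality or business impact
    "minor": 2,     # limited impact, contained
    "incident": 1,  # negligible impact
}

deviations = ["major", "critical", "minor", "incident"]
scores = [DEVIATION_RISK_SCORE[d] for d in deviations]
print(sum(scores) / len(scores))  # average deviation risk = 2.5
```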

Once these two data sets have been evaluated, the average score of each can be compared to generate the Systemic CAPA Effectiveness Co-efficient.

 

Comparing CAPAs from Different Data Sources

An additional benefit of such a systemic evaluation is that it also allows comparison across the source data systems. Again considering risk, not all data sources are equal: sources such as Regulatory Inspections and Customer Complaints should result in a higher CAPA Effectiveness Co-efficient than sources such as process performance trending.
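
A sketch of that comparison, assuming every CAPA and source-event record carries a hypothetical source field naming the originating system:

```python
from collections import defaultdict


def coefficient_by_source(capas, events):
    """CAPA Effectiveness Co-efficient computed per source data system."""
    efforts, risks = defaultdict(list), defaultdict(list)
    for c in capas:
        efforts[c["source"]].append(c["effort_score"])
    for e in events:
        risks[e["source"]].append(e["risk_score"])
    return {
        source: round((sum(efforts[source]) / len(efforts[source]))
                      / (sum(risks[source]) / len(risks[source])), 2)
        for source in efforts
        if source in risks
    }


capas = [
    {"source": "customer_complaint", "effort_score": 4},
    {"source": "process_trending", "effort_score": 2},
]
events = [
    {"source": "customer_complaint", "risk_score": 4},
    {"source": "process_trending", "risk_score": 3},
]
print(coefficient_by_source(capas, events))
# {'customer_complaint': 1.0, 'process_trending': 0.67}
```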

 

Continual Improvement

Tracking and trending the CAPA Effectiveness Co-efficient for different data source systems over time also provides a mechanism to evaluate continual improvement.
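
One way to sketch such a trend, assuming dated records as in the earlier sketches and a quarterly bucketing (an illustrative choice, not one prescribed here):

```python
from collections import defaultdict
from datetime import date


def quarter(d):
    """Label a date with its calendar quarter, e.g. '2024-Q1'."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"


def coefficient_trend(capas, events):
    """CAPA Effectiveness Co-efficient per quarter."""
    efforts, risks = defaultdict(list), defaultdict(list)
    for c in capas:
        efforts[quarter(c["date"])].append(c["effort_score"])
    for e in events:
        risks[quarter(e["date"])].append(e["risk_score"])
    return {
        q: round((sum(efforts[q]) / len(efforts[q]))
                 / (sum(risks[q]) / len(risks[q])), 2)
        for q in sorted(efforts)
        if q in risks
    }


capas = [{"date": date(2024, 2, 1), "effort_score": 2},
         {"date": date(2024, 5, 1), "effort_score": 4}]
events = [{"date": date(2024, 2, 1), "risk_score": 4},
          {"date": date(2024, 5, 1), "risk_score": 4}]
print(coefficient_trend(capas, events))  # {'2024-Q1': 0.5, '2024-Q2': 1.0}
```

A trend rising toward 1 over successive periods would indicate the CAPA system coming into line with risk.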

 


 

 

About the Author

John Henchion, Global Director of Quality, Compliance, Regulatory

John has more than 23 years of experience providing technical and consulting services in cGMP pharmaceutical and biotechnology environments. He is an experienced QP with several years’ experience releasing both commercial and clinical material to market, and he is a Quality Systems SME. He understands FDA and EU regulations, regulatory guidance documents, and ICH guidelines and has applied this knowledge to develop risk-based approaches for several applications. He is familiar with the implementation of Lean principles and tools, Kaizen events, root cause analysis, visual workplace, standardized work, and process mapping of supply chains.