Formal Method Validation

February 2, 2008
BioPharm International, Volume 2008 Supplement, Issue 2

The development and optimization process can improve a method, but validation cannot. Validation is the final proof that regulatory requirements and quality expectations are met.

When developing a new method for a new biopharmaceutical product, optimizing an existing method for an existing product, or when changing a release method for a licensed product, many development and validation elements should be considered.1–4 Analytical method validation (AMV) follows analytical method development (AMD).5

The AMV report should formally verify that the method is valid (from a quality and product-release perspective) and validated (from a compliance perspective). The AMV protocol contains a summary of AMD results for the new method, and, for changed methods, historical data (AMV and release data) generated using the current method. It also provides current or expected in-process and product specifications, which determine whether the new method is suitable for comparing product quality attributes to specifications.

Table 1. Summary of minimum AMD/AMV requirements for a new method based on ICH Q2(R1)

Portions of the AMD data summarized in the AMV protocol may not need to be repeated during validation, as long as the AMD data were generated under GMP conditions. Therefore, AMV should not be used to modify or change critical assay elements (for example, statistical data reduction); we must be careful not to invalidate the AMD data that were used to establish robustness, system suitability, and possibly other performance characteristics.6 Ideally, the formal AMV process is only a confirmation of test method performance. From an operational perspective it can be considered the least significant overall task, because we usually only confirm, rather than improve, the method. From a compliance perspective (inspections and submissions), however, AMV is often the most important task, and it needs to be executed well largely for this reason.6

Table 1 lists all International Conference on Harmonization (ICH) characteristics that may apply to a particular test procedure, including the corresponding minimum requirements, reported results, and acceptance criteria. Some AMD and AMV elements were added to the required ICH characteristics. In practice, more data may need to be generated. For example, three spike levels may not be sufficient to evaluate accuracy and repeatability precision over the valid assay range. Several critical elements of the AMV protocol are discussed in more detail below.

Intermediate Precision

Intermediate precision is often the most important of all the test method performance characteristics evaluated, because it represents the laboratory reliability to be expected on any given day. It usually reflects the true relationship between process capability and analytical capability.6

It is only necessary to evaluate one product batch to determine intermediate precision in the AMV protocol. We are not evaluating production process variability, so controllable factors should be held constant to obtain meaningful results for variable factors (for example, different operators). AMD is the proper time to evaluate several product batches to provide an overall estimate of the method's capability to stay unaffected by variations in the sample matrix. Batch-to-batch variation can be readily estimated once analytical (and sampling) variation is known.

Table 2. AMV execution matrix

The overall intermediate precision validation result (expressed in % coefficient of variation, CV) can be provided to the production process control unit for production process monitoring. This estimate reflects the expected analytical variability contributed by the test system over time. However, the best estimate of analytical capability is provided by assay control performance during routine QC operations. If the control material and its testing are similar to the sample (and reference standard, if applicable) in each assay run, the variation of this control point is the most reliable laboratory performance indicator and should be used as such in all post-validation activities.
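As a simple illustration of charting assay control performance, the %CV of control results across routine runs can be computed as below. The control values are hypothetical, not from the article:

```python
import statistics

def percent_cv(values):
    """Percent coefficient of variation: 100 * sample SD / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical assay-control results (e.g., relative potency) from 8 QC runs
control_results = [102.0, 98.5, 101.2, 97.8, 100.4, 99.1, 103.0, 98.0]
print(f"Assay control %CV = {percent_cv(control_results):.1f}%")
```

Trending this %CV over time gives the laboratory performance indicator described above.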

Figure 1

Intermediate precision results can be generated as a full or partial factorial design by rotating operators, days, instruments, and possibly other critical factors identified during the AMD studies.6 A simple execution matrix design is illustrated in Table 2. An analysis of variance (ANOVA), in which results are grouped by operator, day, and instrument and analyzed in one table, can be used to evaluate this method performance characteristic. It is advisable to include a numerical secondary limit, because the likelihood of observing statistical differences increases with the precision of the test method, and some differences (bias) are normal and should be expected.6 An alternative statistical tool is a mixed linear model analysis,7 which is often more practical and useful than ANOVA. Both statistical methods are accepted by most regulatory agencies, although the mixed linear model analysis appears to be preferred in Europe.7 An example of mixed linear model results, using the execution matrix from Table 2, is shown in Table 3.6
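A rotation of this kind can be sketched as follows. The factor levels are hypothetical stand-ins for a Table 2-style matrix; the partial design is a Latin-square-style rotation in which each operator sees each day and each instrument exactly once:

```python
from itertools import product

# Hypothetical factor levels mirroring a Table 2-style execution matrix
operators = ["Op1", "Op2", "Op3"]
days = ["Day1", "Day2", "Day3"]
instruments = ["Inst1", "Inst2", "Inst3"]

# Full factorial: every combination of operator, day, and instrument (27 runs)
full_design = list(product(operators, days, instruments))

# Partial (rotated) design: 9 runs, each operator paired once with each
# day and each instrument, so every factor level is still exercised
partial_design = [
    (operators[(d + i) % 3], days[d], instruments[i])
    for d in range(3) for i in range(3)
]

print(len(full_design), len(partial_design))
```

The partial design keeps the run count manageable while still allowing the variance of each factor to be estimated.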

Following Tables 2 and 3, the mixed linear model for intermediate precision was specified as: Output variable = Mean + Instrument + Operator + Day + Residual, where:

  • The mean is a fixed effect indicator variable and denotes the overall mean of all results.

  • The component Instrument is a random effect variable denoting the effect due to instrument-to-instrument variation.

  • As above, the components Operator and Day are also random effect variables.

  • Residual denotes a measurement variation not due to any of the above stated effects.6
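Because the component variances in this model are additive, component CVs expressed against the same overall mean combine in quadrature into the overall intermediate precision. A minimal sketch, using hypothetical component CVs rather than the Table 3 values:

```python
import math

def overall_cv(component_cvs):
    """Combine independent variance components, each expressed as a %CV
    relative to the same overall mean, into an overall %CV:
    CV_total = sqrt(sum(CV_i^2))."""
    return math.sqrt(sum(cv ** 2 for cv in component_cvs))

# Hypothetical component CVs (%) for Instrument, Operator, Day, Residual
components = {"Instrument": 9.0, "Operator": 2.0, "Day": 3.0, "Residual": 8.0}
print(f"Overall intermediate precision CV = {overall_cv(components.values()):.1f}%")
```

This also shows why a single large component (here, Instrument) dominates the overall figure, which motivates the component-level acceptance criteria discussed next.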

An example for an automated ELISA test method is given in Table 3.6 The protocol acceptance criteria were: overall CV no more than 15.0%, and each individual component CV (e.g., Instrument) no more than 10.0%. The results suggest that operators and days are likely not critical method components. This is desirable, because it lowers the expectations for demonstrating operator proficiency and reduces the risk that test results will be affected when new operators are introduced in the future. There is, however, a significant component effect among the three instruments used during the AMV studies (CV = 11.6%). Although somewhat typical of an automated procedure with minimal operator involvement, this will still contribute significantly to overall assay variability. Because this protocol acceptance criterion was not met, we may want to dissect the automated procedure into smaller steps to determine which steps contribute to the variability. Once the "loose" steps have been identified, we should be able to correct them, decrease instrument-to-instrument variability, and then pass all protocol acceptance criteria. Similarly, the unidentified residual variation (CV = 11.6%) should be dissected, first by reviewing all AMD data thoroughly; if needed, a more detailed component analysis could follow if the target is to improve the overall precision of 14.6%.6

Table 3. Mixed linear model results for intermediate precision matrix6

AMV Acceptance Criteria

In-process and product specifications should be based on analytical capability.8 The mirror image must also be considered: AMV acceptance criteria must be related to specifications, because the squared observed process variability is the sum of the squared actual process, test method, and sampling variabilities, as illustrated in Equation 1 below.

[observed process variability]² = [actual process variability]² + [test method variability]² + [sampling variability]²   (Eq. 1)

Known values for the observed (measured) process variability, the test method variability (intermediate precision), and sampling (batch uniformity as measured) variability can be used to estimate actual process variability and vice versa. For example, when the observed process variability is 10%, the method intermediate precision is 8%, and the sampling variability is 0% or negligible, the actual process variability is approximately 6%.

(0.10)² = X² + (0.08)²

Solved: X = 0.06
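The worked example above can be sketched as a small solver for Equation 1 (the function name is illustrative, not from the article):

```python
import math

def actual_process_variability(observed, method, sampling=0.0):
    """Solve Equation 1 for the actual process variability:
    observed^2 = actual^2 + method^2 + sampling^2."""
    diff = observed ** 2 - method ** 2 - sampling ** 2
    if diff < 0:
        raise ValueError("method and sampling variability exceed observed variability")
    return math.sqrt(diff)

# Worked example from the text: observed 10%, method 8%, negligible sampling
print(actual_process_variability(0.10, 0.08))  # ≈ 0.06
```

The same relation can be rearranged to estimate any one term when the other three are known.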

Figure 1 illustrates which data sources should be evaluated for a new method and new drug product, with the understanding that sometimes only reliable estimates for process variability and method variability exist at the time of the AMV studies. Naturally, the (target) specifications push the acceptance criteria toward less risk for the patient (narrow limits), while actual process, sampling, and method performance imperfections push from the opposite side toward wider limits. Once we have estimates for each source of variation, we can set balanced expectations that protect the patient (and the firm). Simply put, the fewer data sources that are available or evaluated, the more "gambling" will occur when setting AMV protocol acceptance criteria, because of the remaining uncertainty. The less the method performance requirements are understood, the less predictable their potential impact, leading to greater risks to patients and the firm.6 When deriving acceptance criteria, balanced limits for accuracy, precision, and other assay characteristics can be derived from the specifications (or target specifications) and historical data sources. Clearly, if the specifications are not established, the method cannot be fully validated. Ultimately, we need balanced criteria that weigh the need to challenge the new method's ability to assure product quality against the need to formally validate it.6

References

1. International Conference on Harmonization. Q2(R1), Validation of analytical procedures. Current Step 4 Version. Geneva, Switzerland; 2005.

2. Eurachem Guide. Fitness for Purpose of Analytical Methods. Teddington, UK; 1998.

3. Center for Drug Evaluation and Research. Guidance for Industry. Bioanalytical method validation. Bethesda, MD; 2001.

4. Center for Biologics Evaluation and Research. Draft Guidance for Industry. Analytical procedures and methods validation. Bethesda, MD; 2000.

5. Krause SO. Development and validation of analytical methods for biopharmaceuticals, part I: development and optimization. BioPharm Intl. 2004; 16(10):52–61.

6. Krause SO. Validation of analytical methods for biopharmaceuticals–a guide to risk based validation and implementation strategies. PDA/DHI Publishing. Bethesda, MD. 2007.

7. EURACHEM/CITAC Guide CG4, Traceability in Chemical Measurements; 2003.