It is essential to understand the critical elements of validation extensions to ensure accurate process or product quality measurements.
The successful completion of analytical method transfer (AMT) is a regulatory expectation for the extension of the validation status to other laboratories. The demonstration of equivalent test results and, therefore, an acceptable level of reproducibility when testing at a different location can limit the potential risk to the patient. Acceptable reproducibility also limits the risk of failing test results for the biopharmaceutical firm because established probabilities of passing specifications can be maintained. Similarly, postvalidation changes in method components should be monitored and controlled to avoid significant (negative) changes in material or product release probabilities.
Analytical method validation (AMV) guidelines exist from several recognized sources.1–5 Detailed validation guidelines for alternative microbiological test methods also exist.6–7 In addition, a series of practical tips and discussions for AMV and related topics was recently published.8 However, some topics are currently not sufficiently covered in recognized sources. For example, how can we demonstrate method comparability for new methods, extend the validation status onto other laboratories or other test method components, and maintain this validation status over time? What are acceptable levels for differences in method performance and when is a method no longer suitable? This article discusses practical concepts on how to ensure successful validation extensions.
When replacing approved test methods with improved ones, analytical method comparability (AMC) data should be submitted together with the method description and validation results.8 Once a method is approved and in routine use, it should be maintained in an analytical method maintenance (AMM) program that can be administered through the validation master plan (VMP).8 If done well, this will ensure, like all postvalidation activities, consistent (accurate and precise) production process and product quality measurements.8 What exactly are the critical elements of good validations, validation extensions, or suitable validation maintenance? The answer lies mostly in the preset acceptance criteria for method performance and, of course, the actual validation results obtained. For example, if changes in analytical method components cause a change in test results and, therefore, in process or product quality measurements, we should be able to detect when method suitability limits have been exceeded. In other words, if an analytical method change will cause a predictable shift or spread of results with respect to specification(s), and therefore negatively impact the probability of releasing material, we should monitor this. To monitor and possibly compensate, we must first set reasonable suitability limits, and then continuously control overall method performance. Often, the most difficult part is estimating the risk that changed results pose to both patient and firm, and from this, setting reasonable acceptance criteria. Once we truly understand why and when a method improvement will be needed, we will likely know what should be done to compensate for the difference.
Each production process has an associated probability of rejection that can be readily calculated by relating specifications to production process performance. However, instead of dealing with only two probabilities (pass or reject) that are "visible" and monitored by statistical process control (SPC), we should consider two additional possibilities for all reported results. There is, therefore, a total of four possible cases for releasing product or material, of which three should be avoided as often as practically possible. The four cases for reported test results are illustrated here.
Case 1: Measured results are within established specifications. Case 1A: the true result is also within specifications (a correct pass). Case 1B: the true result is actually outside specifications (a false pass).
Case 2: Measured results are outside established specifications (OOS). Case 2A: the true result is also OOS (a correct rejection). Case 2B: the true result is actually within specifications (a false rejection).
Cases 1A and 2A are routinely monitored by SPC. Cases 1B and 2B originate from other uncertainties, such as imperfect test method performance or poor sampling, and are not readily visible by SPC. Cases 2A and 2B are obviously not desirable because the firm can neither process nor sell this product or material. Case 1B constitutes a risk primarily to patients, but it also means a risk to the firm if adverse product-related reactions or over- or under-dosing were actually to occur. Case 2B constitutes a loss solely to the firm and should also be avoided, mainly for profit reasons, although other problems may also arise from this situation.
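The interplay between process spread, measurement error, and the four cases can be made concrete with a short simulation. The sketch below, using entirely hypothetical process and method parameters (none are from the article), estimates how often each case occurs when a normally distributed true value is reported with added normally distributed measurement error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustrative values (assumptions, not article data):
true_mean, true_sd = 100.0, 2.0   # true process distribution
meas_sd = 1.0                     # analytical method imprecision
lsl, usl = 95.0, 105.0            # specification limits
n = 1_000_000

# Simulate true values and the measured (reported) results
true_vals = rng.normal(true_mean, true_sd, n)
measured = true_vals + rng.normal(0.0, meas_sd, n)

true_in = (true_vals >= lsl) & (true_vals <= usl)
meas_in = (measured >= lsl) & (measured <= usl)

# Estimated probability of each of the four cases
cases = {
    "1A (pass, truly in spec)": np.mean(meas_in & true_in),
    "1B (pass, truly OOS)":     np.mean(meas_in & ~true_in),
    "2A (OOS, truly OOS)":      np.mean(~meas_in & ~true_in),
    "2B (OOS, truly in spec)":  np.mean(~meas_in & true_in),
}
for label, p in cases.items():
    print(f"{label}: {p:.4f}")
```

Rerunning the sketch with a larger `meas_sd` shows the "invisible" cases 1B and 2B growing at the expense of 1A, which is the quantitative motivation for controlling method performance over time.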
For our validation extension acceptance criteria, we should primarily set acceptable protocol limits from SPC in relation to specifications. We should consider the likelihood and impact of cases 1B and 2B, and minimize measurement errors as much as possible through the AMM program. Inaccurate or imprecise measurements will always lower the probability of observing results within specifications below its ideal value. The acceptance criteria for AMV and its continuum requirements must, therefore, ensure a low likelihood of all cases but 1A.
Table 1. Analytical method transfer (AMT) execution matrix
To meaningfully estimate risk to patients and the firm, we must understand our process data and integrate test measurement aspects into our risk-based validation strategies. Good risk management tools will dictate how much assay performance characteristics can deviate from ideal. This in turn sets limits on how far a test method can be tolerated to drift over time from ideal (100% accurate and precise) performance. It should also now be apparent why it is so important to maintain our validation status with an AMM program. When this is ignored, we negatively affect all four cases (negative here means increased risk to patient or firm). Although undetected, negative effects will occur for the "invisible" cases 1B and 2B because measurement errors are not captured by regular SPC. This may also cause a lack of process understanding and control, lead to conflicts with current regulatory expectations (process analytical technology [PAT]), and impact a firm's profits in the long run.9
Validated analytical methods can be transferred from one laboratory to another without the need for revalidation at the receiving laboratory.8,10 A typical AMT is accomplished by testing at the sending and receiving laboratories in a round-robin format. Testing is performed on three different product lots over three days, using two operators and two instruments in each laboratory.8,10 Reproducibility of test results, within and between laboratories, is demonstrated per Table 1 by evaluating intermediate precision (different operators, instruments, days, and product lots at each site) using an analysis of variance (ANOVA) and by comparing the differences in mean results for each lot between both sites.8,10 For each AMT, preset acceptance criteria for intermediate precision and for the absolute differences between sites are derived and justified from the validation at the sending laboratory.8
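A minimal sketch of this between-laboratory evaluation might look as follows. The potency results, replicate counts, and difference limit below are all illustrative assumptions, not values from an actual transfer; a real AMT would evaluate the full round-robin design, not a single factor.

```python
import numpy as np
from scipy import stats

# Hypothetical potency results (%) for one lot at each site
sending   = [99.1, 100.2, 98.7, 99.8, 100.5, 99.4]
receiving = [98.9, 99.6, 100.1, 99.2, 100.0, 99.7]

# One-way ANOVA on the between-laboratory factor
f_stat, p_value = stats.f_oneway(sending, receiving)
print(f"ANOVA p-value: {p_value:.3f}")

# Absolute difference in mean results between sites, against a preset limit
diff = abs(np.mean(sending) - np.mean(receiving))
max_allowed = 2.0  # assumed limit, justified from the sending lab's validation
print(f"|mean difference| = {diff:.2f} (limit {max_allowed})")
print("Pass" if p_value >= 0.05 and diff <= max_allowed
      else "Evaluate secondary criteria")
```

Note that both criteria are checked: a nonsignificant ANOVA alone is not sufficient, because very precise methods can flag trivially small differences as statistically significant.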
AMT reports should include descriptive statistics (means, standard deviations, and coefficients of variation), comparative statistics (ANOVA p-values) for interlaboratory results, and the differences of mean values between the laboratories. Each report documents evidence that the transferred test method is suitable (qualified) for testing at the receiving laboratory.
For cases where the ANOVA p-value is less than 0.05, secondary acceptance criteria should be established for the comparison of means and of the variability of results to demonstrate the overall laboratory-to-laboratory reproducibility of test results. It is advisable to include a numerical fall-back limit (or percentage) because the likelihood of observing statistical differences may increase with the precision of the test method. In addition, some differences (bias) between instruments, operators, and days are expected.8 We should tailor our acceptance criteria for overall (intermediate) precision and for the maximum tolerated difference between mean laboratory results (accuracy or matching) to minimize the likelihood of obtaining OOS results (2A and 2B) or 1B results.8 The setting and justification of all acceptance criteria must strike a balance and is a critical part of each protocol. A detailed AMT case study was presented elsewhere.8
Because we must demonstrate equivalence or improvement whenever approved methods are replaced, which method performance characteristics should be compared, and how? Table 2 provides guidance on which validation characteristics to use in comparability protocols for each assay type per ICH Q2(R1). All qualitative tests should include a comparison of hit-to-miss ratios (for "specificity") between the approved method and the new method. If a qualitative limit test is exchanged, the detection limit (DL) of the new method should be compared and should be equal or lower. For all quantitative methods, the performance characteristics accuracy and precision (intermediate precision) should be compared.8
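For the qualitative case, the hit-to-miss comparison can be sketched as a 2x2 contingency test. The counts below are hypothetical, and a Fisher exact test is shown as one reasonable choice of statistic; the protocol must prespecify whichever comparison is actually used.

```python
from scipy import stats

# Hypothetical hit-to-miss counts for a qualitative (e.g., identity) test
approved = {"hits": 58, "misses": 2}  # approved method
new      = {"hits": 59, "misses": 1}  # candidate replacement method

# 2x2 contingency table: rows = methods, columns = hit / miss
table = [[approved["hits"], approved["misses"]],
         [new["hits"],      new["misses"]]]

# Fisher exact test of whether the hit rates differ between methods
odds_ratio, p_value = stats.fisher_exact(table)
print(f"Fisher exact p-value: {p_value:.3f}")
```

A large p-value here would support comparable hit-to-miss performance, though as with AMT, a numerical fall-back criterion on the hit rates themselves is advisable.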
Table 2. General guidance for using appropriate statistics to assess method comparability from validation characteristics per ICH Q2(R1)
It is of great regulatory concern whether results may change overall by drifting (a change in "accuracy" or "matching") or by an increase in data spread ("intermediate precision"). An increase in data spread, or loss of precision, will mostly increase the likelihood of observing cases 1B or 2B and should be avoided. A drift in results, or "lack of matching," may require a change in the specifications. This drift in release results can occur in two directions, toward lower or higher results. Neither direction is an acceptable outcome for the demonstration of accuracy, so testing for equivalence between methods is appropriate.8 The goal of comparing other characteristics (e.g., DL) is different because at least two outcomes are acceptable: equivalence or superiority (i.e., a lower DL).8 A third comparison category, the demonstration of noninferiority, is usually the easiest to pass for comparability. However, we should keep in mind that any of the comparability categories (noninferiority, equivalence, superiority) applied per the ICH E9 and Committee for Proprietary Medicinal Products (CPMP) guidance documents must be properly chosen and justified.8,11–13 In other words, noninferiority testing may be justified for the comparison of a primary characteristic, such as DL, if other secondary criteria (e.g., an increased number of tests or test samples) can compensate for the small level of inferiority of the primary comparison characteristic.8
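Equivalence testing for accuracy can be sketched with two one-sided tests (TOST) on paired results from both methods: equivalence is concluded only if the mean difference is shown to be both above the lower margin and below the upper margin. All numbers below, including the allowable-difference margin, are hypothetical placeholders for protocol-justified values.

```python
import numpy as np
from scipy import stats

# Hypothetical paired results: the same samples tested by both methods
approved = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7])
new      = np.array([10.0, 9.9, 10.2, 10.1, 9.8, 10.3, 10.0, 9.8])
diffs = new - approved

# Assumed maximum allowable difference, set a priori in the protocol
margin = 0.5

# Two one-sided tests (TOST): reject both nulls to conclude equivalence
t_low,  p_low  = stats.ttest_1samp(diffs, -margin, alternative="greater")
t_high, p_high = stats.ttest_1samp(diffs,  margin, alternative="less")
p_tost = max(p_low, p_high)

print(f"TOST p-value: {p_tost:.4f}")
print("Equivalent within margin" if p_tost < 0.05
      else "Equivalence not shown")
```

Unlike a plain difference test, TOST cannot "pass" simply because the sample size is too small to detect anything, which is why it is the correct framing for accuracy comparability.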
Quantitation limits (QLs) could also be compared. However, both QLs would have to be estimated by the same principle (e.g., by regression analysis). A low QL is desirable because it lets us quantitatively report and monitor low-value results by SPC. There are several ways to compare QLs. For example, we could compare the regression parameters of both linear assay response curves to estimate both QLs, which would also give a general idea of how accuracy and precision compare over the assay range. Table 2 constitutes general guidance. Particular examples of noninferiority, equivalence, and superiority testing to demonstrate method comparability were provided and discussed elsewhere.8
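A regression-based QL comparison can be sketched as below, using the ICH Q2(R1) formulation QL = 10 x (residual standard deviation) / slope. The calibration data for both methods are hypothetical, and both QLs are deliberately estimated by the same principle, as the text requires.

```python
import numpy as np

# Hypothetical calibration data near the low end of the range
conc   = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # concentration
resp_a = np.array([0.052, 0.101, 0.198, 0.405, 0.802])  # approved method
resp_b = np.array([0.049, 0.102, 0.203, 0.398, 0.810])  # new method

def ql_from_regression(x, y):
    """QL = 10 * residual SD / slope, per ICH Q2(R1)."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    sigma = residuals.std(ddof=2)  # n-2 degrees of freedom for a line fit
    return 10.0 * sigma / slope

print(f"QL approved: {ql_from_regression(conc, resp_a):.3f}")
print(f"QL new:      {ql_from_regression(conc, resp_b):.3f}")
```

The fitted slopes and residual spreads also give the general picture of accuracy and precision over the range that the text mentions, at no extra experimental cost.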
No matter which comparability category we use for a statistical comparison (at a significance level of 0.05), a protocol should provide the design of the experiments to be done and the prespecified value for the allowable difference in results. The prespecified maximum allowable difference is illustrated in CPMP's Points to Consider on the Choice of Non-Inferiority Margin.13 The allowable difference should be set as it is for AMV or AMT: set and justified by relating specifications to SPC data and by considering the likelihood of observing any of the four cases (1A, 1B, 2A, and 2B).8
1. International Conference on Harmonization. Q2(R1), Validation of analytical procedures: text and methodology. Geneva, Switzerland; 2005, Nov.
2. Eurachem. The fitness for purpose of analytical methods. Teddington, UK; 1998. www.eurachem.ul.pt/guides/valid.pdf.
3. Eurachem/CITAC. Traceability in chemical measurement. Teddington, UK; 2003. www.eurachem.ul.pt/guides/EC_Trace_2003.pdf.
5. US Food and Drug Administration. Draft guidance for industry. Analytical procedures and methods validation. Rockville, MD; 2000.
6. Parenteral Drug Association. Technical Report 33, Evaluation, validation, and implementation of new microbiological testing methods. Bethesda, MD.
7. European Pharmacopeia. Alternative methods for control of microbiological quality. Supplement 5.5 [07/2006:50106] (December 2005).
8. Krause SO. Validation of analytical methods for biopharmaceuticals—A guide to risk-based validation and implementation strategies. PDA/DHI Publishing; 2007, Apr.
9. US Food and Drug Administration. Guidance for industry PAT—A framework for innovative pharmaceutical development, manufacturing, and quality assurance, Rockville, MD; 2004.
10. ISPE Good Practice Guide: Technology Transfer. International Society for Pharmaceutical Engineering. 2003; Tampa, FL.
11. International Conference on Harmonization. E9, Statistical principles for clinical trials; 1998. www.emea.eu.int/pdfs/human/ich/036396en.pdf.
12. Committee for Proprietary Medicinal Products. Points to consider on switching between superiority and non-inferiority; 2000. www.emea.eu.int/pdfs/human/ewp/048299en.pdf.
13. Committee for Proprietary Medicinal Products. Points to consider on the choice of non-inferiority margin; 2004.