When developing a new method for a new biopharmaceutical product, or when changing a release method for a licensed product, many development and validation elements should be considered. Currently, these are incompletely covered by regulatory guidelines.1-4 Analytical method validation (AMV) follows analytical method development (AMD), which we described in the October issue of BioPharm International.5 Often, identifying and fully developing an appropriate test methodology is the critical task in the overall process.
Stephan O. Krause, Ph.D.
The AMV protocol and report should formally verify that the method is valid (from a quality and product-release perspective) and validated (from a compliance perspective).6,7 The AMV protocol contains a summary of AMD results for the new method, and, for changed methods, historical data (AMV and release data) generated using the current method. It also provides current or expected in-process and product specifications, which determine whether the new method is suitable for comparing product quality attributes to specifications.
Portions of the AMD data that are summarized in the AMV protocol may not need to be repeated during validation as long as the AMD data were generated under GMP conditions. Therefore, AMV should not be used to modify or change critical assay elements (for example, statistical data reduction). We must be careful not to invalidate the AMD data that were used to establish robustness and system suitability criteria and which were likely used to derive some of the acceptance criteria for the AMV protocol.8,9
Table 1. Summary of Minimum AMD/AMV Requirements for a New Method Based on ICH Q2B
All ICH validation characteristics should be evaluated during AMV. Table 1 lists all ICH characteristics that may apply to a particular test procedure, including the corresponding minimum requirements, reported results, and acceptance criteria (see also the first table in Reference 5). Some AMD and AMV elements were added to the ICH characteristics. In practice, more data may need to be generated. For example, three spike levels may not be sufficient to evaluate accuracy and repeatability precision over the valid assay range. Multiple critical elements of the AMV protocol are discussed in more detail below.
Table 2. AMV Execution Matrix
It is only necessary to evaluate one product batch to determine intermediate precision within the AMV protocol. We are not evaluating production process variability, so controllable factors should be held constant to obtain meaningful results for variable factors (for example, different operators). AMD is the proper time to evaluate several product batches to provide an overall estimate of batch-to-batch precision.
The overall intermediate precision validation result (expressed as a percent coefficient of variation, %CV) can be provided to the production process control unit for production process monitoring. This estimate reflects the variability the test system is expected to contribute on any given day. Often, the intermediate precision of the analytical method is the most critical component of the overall observed production process variability (see also the AMV Acceptance Criteria section of this article).
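As a simple illustration, the overall %CV is the standard deviation of the pooled validation results divided by their mean. The results below are hypothetical; in practice the estimate comes from the full intermediate precision design described next.

```python
import statistics

# Hypothetical potency results (% of target) pooled across days, operators,
# and instruments, with all other assay factors held constant
results = [98.6, 101.2, 99.8, 100.5, 99.1, 100.9]

# Intermediate precision expressed as a percent coefficient of variation
cv_percent = statistics.stdev(results) / statistics.mean(results) * 100
print(f"intermediate precision = {cv_percent:.1f} %CV")
```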
Figure 1. Sources of AMV Acceptance Criteria
Results can be generated with a partial factorial design by rotating operators, days, and instruments (and possibly other factors) as shown in Table 2. Analysis of variance (ANOVA) allows results to be grouped by each operator, day, and instrument and analyzed in one large table.6,7 It is advisable to include a numerical secondary limit in the AMV protocol because the likelihood of observing statistical differences increases with the precision of the test method, and some differences are normal and should be expected.6,7 Even though it takes a total of eight days, the AMV execution matrix in Table 2 is efficient for intermediate precision (three days) and all other validation characteristics (five days) that may apply. The partial factorial design of the ANOVA test can determine differences within variability factors (for example, instruments) and relate them to other variability factors (for example, days and operators).
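The grouping step of this analysis can be sketched as follows. The data, the two factors shown, and their levels are hypothetical; a full AMV analysis would also include days and would use statistical software to obtain p-values for each F statistic.

```python
# Hypothetical potency results (% of target) tagged by operator and instrument
results = [
    {"operator": "A", "instrument": 1, "value": 99.1},
    {"operator": "A", "instrument": 2, "value": 100.4},
    {"operator": "B", "instrument": 1, "value": 98.7},
    {"operator": "B", "instrument": 2, "value": 101.0},
    {"operator": "A", "instrument": 1, "value": 99.5},
    {"operator": "B", "instrument": 2, "value": 100.2},
]

def anova_f(records, factor):
    """One-way ANOVA F statistic when results are grouped by one factor."""
    groups = {}
    for r in records:
        groups.setdefault(r[factor], []).append(r["value"])
    values = [v for g in groups.values() for v in g]
    grand_mean = sum(values) / len(values)
    # Between-group sum of squares: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups.values())
    # Within-group sum of squares: scatter of results around their own group mean
    ss_within = sum((v - sum(g) / len(g)) ** 2
                    for g in groups.values() for v in g)
    df_between = len(groups) - 1
    df_within = len(values) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

for factor in ("operator", "instrument"):
    print(f"{factor}: F = {anova_f(results, factor):.2f}")
```

A large F statistic for a factor (compared with the tabulated critical value at the chosen significance level) flags a difference within that factor, which is where the numerical secondary limit discussed above becomes useful.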
Table 3. Deriving the Number of Significant Digits for Reported Test Results
Data generated for accuracy may be used to cover required data for other validation characteristics, such as repeatability precision, linearity, assay range, and quantitation limit (QL).6 For example, when product purity and impurities are simultaneously quantitated (ICH categories II and IV, as in Table 1 in the first part of this article) the detection limit (DL) is not required.
Thirty is the absolute minimum number of data points needed to establish release specifications for a new test method. The appropriate number of significant digits for the product specifications, reflecting the level of uncertainty in reported test results, should be derived during AMV. A method of computing significant digits is covered later in this article.
When replacing an existing method, it may be necessary to compensate for a significant difference between the current and new method by adjusting the specifications. When testing formulation excipients, significantly improved assay precision (intermediate precision) will lead to an overall decrease in observed product batch-to-batch variability, which should be reflected in a tighter range of specifications.
Often, the midpoint or target value must be adjusted for biological assays where accuracy cannot be derived from comparisons to certified standards but is rather derived from a product-specific historical target value (for example, protein impurities from the fermentation process are tested by ELISA).
In-process and product specifications should be based on analytical capability.
The mirror image must also be considered: AMV acceptance criteria must be related to specifications, because observed batch-to-batch variance is the sum of actual process variance and test method variance (assuming that sampling variability is negligible).
[observed process variability]² = [actual process variability]² + [test method variability]²
Known values for the observed (measured) process variability and the test method variability (intermediate precision) can be used to estimate actual process variability and vice versa. For example, when the observed process variability is 10% and the method intermediate precision is 8%, the actual process variability is approximately 6%.
(0.1)² = X² + (0.08)²
Solving for X gives X = 0.06.
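This back-calculation takes only a few lines; the 10% and 8% figures are the example values above.

```python
import math

def actual_process_cv(observed_cv, method_cv):
    """Back out actual process variability (as a CV) from the observed
    process variability and the method's intermediate precision, using
    the root-sum-of-squares relationship above."""
    return math.sqrt(observed_cv ** 2 - method_cv ** 2)

x = actual_process_cv(0.10, 0.08)
print(f"actual process variability = {x:.2f}")  # 0.06, i.e., 6%
```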
When deriving acceptance criteria, reasonable limits for the AMV protocol for accuracy, precision, and other assay characteristics can be derived from the specifications. In addition, when replacing test methods, historical data for the current method (AMV and release results) and the new method (AMD data) can be used to derive AMV acceptance criteria. Ultimately, we need criteria that balance the new method's ability to assure product quality with the need to validate the new method. Figure 1 summarizes all sources of AMV acceptance criteria, including supporting sources that can be used when the "must-consider" sources are not sufficient for all acceptance criteria.6,7
Once the AMV package for the new method is completed, it can be submitted for approval to the US regulatory authorities as part of the product license submission. Significant changes to a test method of a licensed product that impact in-process or product specifications usually require submission of the new, detailed test procedure, together with the AMV results, in the form of a prior-approval supplement (PAS) to the existing product license. The method cannot be used until it is approved, which usually takes three to six months. If a release method is changed and is also used in other regions that require release testing (for example, Europe), then an analytical method transfer (AMT) protocol must demonstrate the reproducibility of the test results at both release sites.
Similar regulatory submission requirements and approval times exist at release sites in other regions (for example, Type II variation in Europe).
Keep in mind that the development and optimization process can improve a method, but that is not the role of validation. Validation merely provides evidence for the suitability and validity of the developed and optimized method.
Test results and the corresponding release specifications should reflect the expected uncertainty. There are numerous ways to express this; using the appropriate number of significant digits is a common approach in the biopharmaceutical industry. However, it is not obvious how to accomplish this in a compliant and consistent manner.
We suggest that a simple and consistent way of generating the appropriate number of significant digits is to use a widely accepted standard practice, such as the American Society for Testing and Materials (ASTM) E29-02 (and E456-96 for terminology).14,15 ASTM E29-02 requires that the uncertainty in test results (and specifications) be based on the repeatability precision results of the AMV,16 which makes sense because repeatability precision is a required ICH Q2A and Q2B validation characteristic for all quantitative AMVs that support quantitative release specifications (see also the first part of this article).5 In addition, by the time an AMV is executed, a final version of the SOP is already in place and quality control operators have been trained.
Following E29-02 has two main advantages. One, this practice provides a reference to an accepted document. Two, this practice provides a justifiable, scientific means of retaining more significant digits than most other procedures. Reporting the maximum number of significant digits can be advantageous when monitoring and trending in-process controls — in particular, biological test procedures with relatively poor precision.
Table 3 shows how to generate the appropriate number of significant digits from AMV repeatability precision data. The standard deviation (s) of a minimum of n = 6 results is used to determine the rounding unit, which should fall between 0.05s and 0.5s:
0.5s = 0.5000 x 0.3713 units = 0.1857 units.
0.05s = 0.05000 x 0.3713 units = 0.01857 units.
The power of ten falling in this interval, 0.1 units, is the rounding unit. A test result of, for example, 31.248 units should therefore be reported as 31.2 units.16 Release specifications should reflect the same uncertainty in results (for example, 28.0 to 34.0 units).
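One simple reading of this rule, in which the rounding unit is taken as the power of ten lying between 0.05s and 0.5s, can be sketched as follows (s and the test result are the values from the worked example; E29 also permits other rounding intervals within that range):

```python
import math

def rounding_unit(s):
    """Largest power of ten not exceeding 0.5*s (this always lies
    above 0.05*s as well, so it falls in the required interval)."""
    return 10.0 ** math.floor(math.log10(0.5 * s))

def report(result, s):
    """Round a test result to the rounding unit derived from the
    AMV repeatability standard deviation s."""
    unit = rounding_unit(s)
    decimals = max(0, -int(math.floor(math.log10(unit))))
    return round(round(result / unit) * unit, decimals)

s = 0.3713                # repeatability standard deviation, in units
print(rounding_unit(s))   # 0.1 units
print(report(31.248, s))  # reported as 31.2 units
```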
I would like to thank Patricia Bonaz and the editorial board for their helpful review of this article.
1. ICH. Validation of Analytical Procedures. Q2A. Federal Register 1995; 60.
2. ICH. Validation of Analytical Procedures: Methodology. Q2B. Federal Register 1996; 62.
3. CDER. Guidance for Industry. Bioanalytical Method Validation. Bethesda MD: FDA; 2001.
4. CBER. Draft Guidance for Industry. Analytical Procedures and Methods Validation. Bethesda MD: FDA; 2000.
5. Krause SO. Development and validation of analytical methods for biopharmaceuticals, part I: development and optimization. BioPharm International 2004; 16(10):52-61.
6. Krause SO. Good analytical method validation practice, part II: deriving acceptance criteria for the AMV protocol. Journal of Validation Technology 2003; 9(2):162-178.
7. Krause SO. Good analytical method validation practice, part III: data analysis and the AMV report. Journal of Validation Technology 2003; 10(1):21-36.
8. Green C. A step-by-step approach to establishing a method validation program. Analytical Method Validation, special edition published by the Institute of Validation Technology (IVT). Royal Palm Beach FL; 2001.
9. Krause SO. Good analytical method validation practice, part I: setting-up for compliance and efficiency. Journal of Validation Technology 2002; 9(1):23-32.
10. ICH. Specifications: test procedures and acceptance criteria for biotechnological/biological products. Q6B. ICH Harmonized Tripartite Guideline. Geneva, Switzerland: ICH; 1999.
11. Krause SO. Qualifying release laboratories in Europe and the United States. BioPharm International 2004; 17(3):28-36.
12. Krause SO. Analytical method transfer. Presented at European Validation Week; 2004 Feb 2-5; Amsterdam, Netherlands.
13. ISPE. Good practice guide: technology transfer. Tampa (FL): International Society for Pharmaceutical Engineering; 2003.
14. ASTM. Standard practice for using significant digits in test data to determine conformance with specifications. ASTM E29-02. West Conshohocken (PA): ASTM; 2002.
15. ASTM. Standard terminology relating to quality and statistics. ASTM E456-96. West Conshohocken (PA): ASTM; 1996.
16. Krause SO. How to prepare validation reports that will avoid regulatory citations in quality control. Presented at European Validation Week; 2004 Feb 2-5; Amsterdam, Netherlands.