Qualifying Release Laboratories in Europe and the United States

Published on: March 1, 2004
BioPharm International, Volume 17, Issue 3, Pages 40–45

The European Union requires final container testing of US-manufactured biopharmaceutical products to be performed in Europe for release into the European market. Similarly, but less strictly enforced, the US requires final container testing in the US for European-manufactured biopharmaceutical products before release.

Final container release tests only need to be validated at the sending laboratory with the exception of microbiological and animal test methods, which are not transferable and must be validated at each testing site (US and Europe). The receiving laboratory must be qualified to release product using validated procedures, available personnel, and equipment, but validation does not need to be repeated at the point of product release.

The manufacturing company should validate all its analytical methods at the production site and sending unit according to ICH Q2A and Q2B guidelines.1-3 In addition to complying with regulatory requirements, this ensures that the production process and product quality are fully supported by testing results for in-process material and final containers using identical testing conditions. It is not required to ICH-validate analytical methods in Europe as long as they are identical - or at least equivalent - to those of the sending laboratory. However, it is necessary to demonstrate that all analytical test results are reproducible by both testing laboratories. Therefore, the objective of analytical method transfer (AMT) is to demonstrate that the release laboratory is qualified and suitable to release US-manufactured biopharmaceuticals into the European market (and vice versa).

This article - a practical guide to transferring analytical methods - uses the International Society for Pharmaceutical Engineering's Good Practice Guide: Technology Transfer (regulatory agencies accept the methods in this guide).4 Through a practical case study, this article shows how to perform and document the transfer of an analytical method so that it is compliant (that is, qualified) for release into Europe and, furthermore, is fully integrated into the control of the production process and product quality.

Transfer Strategy

Qualifying a release laboratory requires demonstrating inter-laboratory reproducibility of test results. The method transfer strategy should be captured under a method transfer master plan or protocol. This document should cover the general AMT execution matrix (Table 1), the reported transfer results, calculations and statistics used, and the general source for the acceptance criteria.

Table 1: AMT execution matrix

A typical AMT is accomplished by round-robin testing at the sending and receiving laboratories. Testing is performed on three different lots over three days using two operators and two instruments at each testing site.5 (See Table 1. Partial or full factorial designs are not applicable here because the data cannot be pooled; moreover, switching or rotating operators or instruments would amount to a partial or fractional factorial design.)
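
To make the round-robin layout concrete, the sketch below enumerates one possible execution matrix in Python. Because Table 1 is not reproduced here, the day-by-day operator/instrument pairing is a hypothetical assumption; the point is only that each site tests every lot on three days with both operators and both instruments represented.

```python
# Illustrative round-robin AMT execution matrix (pairings are hypothetical,
# not taken from the article's Table 1).
from itertools import product

sites = ["sending lab (US)", "receiving lab (EU)"]
lots = ["Lot A", "Lot B", "Lot C"]
days = [1, 2, 3]
# Hypothetical pairing: each testing day uses one operator/instrument
# combination; the design is deliberately not a full factorial.
pairings = {
    1: ("Operator 1", "Instrument 1"),
    2: ("Operator 2", "Instrument 2"),
    3: ("Operator 1", "Instrument 2"),
}

matrix = []
for site, lot, day in product(sites, lots, days):
    operator, instrument = pairings[day]
    matrix.append({
        "site": site, "lot": lot, "day": day,
        "operator": operator, "instrument": instrument,
        "replicates": 2,  # 3 days x 2 replicates = n = 6 per lot per laboratory
    })

for row in matrix:
    print(row)
```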

Reproducibility of test results, within and between both laboratories, is demonstrated by evaluating intermediate precision (different operators, instruments, days, and product lots at each site) and the differences in mean results for each lot between both sites.6 The results of both laboratories are statistically compared by an analysis of variance (ANOVA). Pre-set acceptance criteria for intermediate precision and the absolute differences between sites are derived and justified from the validation at the sending laboratory for each method transferred.7,8 Reports will include descriptive statistics (means, standard deviations, and coefficients of variation), comparative statistics (ANOVA p-values) for comparison of inter-laboratory results, and the differences-of-mean values for both laboratories.
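
As a rough illustration of these calculations, the following Python sketch computes the descriptive statistics and a single-factor ANOVA for one lot tested at both laboratories. The purity values are hypothetical, and the numpy/scipy tooling is simply one convenient choice.

```python
# Hypothetical purity results (% purity, n = 6 per laboratory) for one lot,
# used to illustrate the descriptive and comparative statistics.
import numpy as np
from scipy.stats import f_oneway

sending = np.array([95.2, 94.8, 95.5, 95.0, 94.9, 95.3])    # sending laboratory
receiving = np.array([94.7, 95.1, 94.6, 95.0, 94.8, 94.5])  # receiving laboratory

def describe(x):
    """Return mean, sample standard deviation, and coefficient of variation (%)."""
    mean, sd = x.mean(), x.std(ddof=1)
    return mean, sd, 100.0 * sd / mean

for name, data in [("Sending", sending), ("Receiving", receiving)]:
    mean, sd, cv = describe(data)
    print(f"{name}: mean = {mean:.2f}%, SD = {sd:.2f}%, CV = {cv:.2f}%")

# Single-factor ANOVA comparing the two laboratories for this lot, plus the
# absolute difference of the lot means used as a secondary criterion.
f_stat, p_value = f_oneway(sending, receiving)
diff_of_means = abs(sending.mean() - receiving.mean())
print(f"ANOVA p-value = {p_value:.3f}, |difference of means| = {diff_of_means:.2f}%")
```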

Example of an MTP Section

Individual method transfer protocols (MTPs) must describe in detail how to execute each AMT, indicating the samples to be tested and the spiking experiments to be performed to demonstrate accuracy using appropriate biological or chemical reference materials (that is, accepted European standards, if applicable). MTPs should provide detailed information on the testing conditions and acceptance criteria to be met. If spiking experiments are performed with European standards, the accuracy and precision of the results of the receiving unit should be similar to those of the sending unit.4 All pre-set acceptance criteria - derived and justified from any combination of method validation, historical data, or product specifications - should be given in each MTP.7 Data and results for each AMT are summarized in individual method transfer reports (MTRs). Each MTR documents evidence that a particular test method is suitable (qualified) for product release into the designated market.
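
Where spiking experiments are included, accuracy is typically expressed as percent recovery of the spiked reference material at each site. The sketch below shows the basic calculation; the sample values and the 90-110% range used in the comments are illustrative assumptions, not figures from the article.

```python
# Hypothetical spike-recovery calculation for the accuracy portion of an MTP.
# The values and the 90-110% acceptance range are illustrative assumptions.
def percent_recovery(measured_spiked, measured_unspiked, amount_spiked):
    """Percent recovery of a spiked reference standard."""
    return 100.0 * (measured_spiked - measured_unspiked) / amount_spiked

sending_recovery = percent_recovery(14.8, 10.1, 5.0)    # sending laboratory
receiving_recovery = percent_recovery(14.6, 10.0, 5.0)  # receiving laboratory

for name, rec in [("Sending", sending_recovery), ("Receiving", receiving_recovery)]:
    within_range = 90.0 <= rec <= 110.0
    print(f"{name}: recovery = {rec:.0f}% (within 90-110%: {within_range})")
```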

All AMTs must demonstrate reproducibility between test results from the sending and receiving laboratories, as indicated by single-factor ANOVA at the 95% confidence level (p > 0.05). In cases where p ≤ 0.05, acceptance criteria must be established for the comparison of means and the variability of the results in order to demonstrate the overall lab-to-lab reproducibility of test results. It is advisable to include a numerical limit (or percentage) because the likelihood of observing statistical differences increases with the precision of the test method. In addition, some differences (bias) between instruments, operator performances, and days are normal.6
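
This two-tier logic can be summarized as a small decision function. In the sketch below, the 2.0% fallback limits for the difference of means and the CV are placeholders of the kind derived later from the method validation, not prescribed values.

```python
# Two-tier acceptance logic: the ANOVA p-value is the primary criterion; if
# p <= 0.05, pre-set numerical limits decide whether the observed difference
# is practically relevant. The 2.0% limits are illustrative placeholders.
def amt_acceptable(p_value, diff_of_means, cv_each_lab,
                   p_limit=0.05, max_abs_diff=2.0, max_cv=2.0):
    """Return True if lab-to-lab reproducibility is demonstrated."""
    if p_value > p_limit:
        return True  # no statistically significant lab-to-lab difference
    # A statistical difference exists; check it against the numerical limits.
    return diff_of_means <= max_abs_diff and all(cv <= max_cv for cv in cv_each_lab)

print(amt_acceptable(p_value=0.03, diff_of_means=0.4, cv_each_lab=[1.1, 1.3]))  # True
print(amt_acceptable(p_value=0.03, diff_of_means=2.5, cv_each_lab=[1.1, 1.3]))  # False
```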

Table 2: Historical CZE assay performance for the assay control and BioProduct

The absolute difference of the observed mean percentages between instruments can be related to the product specifications and historical mean results; the corresponding acceptance limit should be derived (and justified) from the method validation.8 We should tailor our acceptance criteria for overall (intermediate) precision and for the maximum tolerated difference between laboratory results in order to minimize the likelihood of obtaining out-of-specification (OOS) results.8 Setting and justifying all acceptance criteria requires striking a balance and is a critical part of each MTP. The AMT case study that follows compares results from sending and receiving laboratories to illustrate how reproducibility acceptance criteria are derived and justified in the MTPs.

An AMT Case Study

In this case study, a biopharmaceutical product, referred to here as "BioProduct," is manufactured in the US. The production process and all analytical methods for in-process material and final containers are ICH-validated. BioProduct comes in one fill size and is to be sold in Europe. All analytical release methods must be transferred from the US production and testing facilities to the European release laboratory. The European testing laboratory has identical, qualified instrumentation and trained laboratory technicians for all methods to be transferred. Six final container samples from each of three lots of BioProduct will be assayed in a round-robin matrix at both laboratories for each test procedure to be transferred (Table 1).

The AMT for any particular analytical method follows the general round-robin execution matrix. Because the results to be generated and the acceptance criteria to be derived and justified are similar for all methods, one hypothetical method transfer - for capillary zone electrophoresis (CZE) - is illustrated in detail. CZE is used to test for product purity in final containers of BioProduct.

The acceptance criteria for MTPs should include limits for the intermediate precision among each set of replicate measurements of each product lot and laboratory (n = 6). The variability within and between laboratories should be analyzed with ANOVA (p > 0.05). In addition, limits for the maximum difference of means (n = 3) for each of the three lots between laboratories should be established. All acceptance criteria should be derived from the method validation.4,5 These, in turn, should be derived from the product specifications, method development, or historical assay performance data for each test procedure.7

A set of pass-fail limits that simultaneously apply to both testing sites makes technical sense. After all, identical procedures, materials, and instruments are used. Acceptance criteria derived from the method validation protocols for all MTPs have the advantage of being consistent with the overall transfer of product release testing, facilitating regulatory approval. Qualified methods based on systematically and consistently derived acceptance criteria will be easier and faster to license. In addition, production is not as likely to send marginally acceptable material because the pre-set transfer limits will serve as a filter.

We need to set reasonable transfer criteria because we must demonstrate reproducibility for our license application and also to prevent potential OOS results. Future OOS results may be generated if both laboratories are not truly comparable. This could be difficult to investigate as we may have accepted a certain difference (bias) within the approved AMT. It is in everyone's interest that both laboratories produce similar test results; it is very costly to a company when product lots fail specifications and cannot be sold.

Transfer of the CZE Method

Using Historical Data

To verify that the MTP acceptance criteria can be taken directly from the method validation protocol, we should start by evaluating the historical performance of the CZE final container test procedure used to determine the purity of BioProduct (Table 2).

We can immediately determine that the assay performance has not changed significantly over the last 12 months by comparing last year's control data to the historical control data - 1.1% versus 1.0% for the standard deviations (SDs) and 91.4% versus 91.2% for the means. This is sufficient evidence that the test system has maintained acceptable precision (SD = 1.1%) and accuracy (only 0.2% drift from the previous mean) over the last 12 months. In addition, we can compare the variability of the last 90 control data points with the variability of the product purity results over the same period. Because the lot-to-lot coefficient of variation (CV) of 1.6% is not significantly higher than the control CV (1.2%) during that time, this is good evidence that our production process is under control and that lot-to-lot variability in product purity is relatively low. We can also establish that the observed mean of BioProduct purity (95.1%) is more than 3 SDs (3 x 1.5% = 4.5%) above the lower product specification (no less than 90%).
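
These checks reduce to a few simple comparisons. The sketch below restates them in Python using the figures quoted above from Table 2; it is only a numerical cross-check, not additional data.

```python
# Numerical cross-check of the historical-performance argument, using the
# figures quoted from Table 2 (all values in % purity).
control_mean_last_year, control_mean_historical = 91.4, 91.2
control_sd_last_year, control_sd_historical = 1.1, 1.0
control_cv, lot_to_lot_cv = 1.2, 1.6
product_mean, product_sd, lower_spec = 95.1, 1.5, 90.0

drift = abs(control_mean_last_year - control_mean_historical)  # 0.2% mean drift
spec_margin = product_mean - lower_spec                         # 5.1% above the spec

print(f"Control mean drift over 12 months: {drift:.1f}%")
print(f"Lot-to-lot CV {lot_to_lot_cv}% vs. control CV {control_cv}%")
print(f"Spec margin {spec_margin:.1f}% vs. 3 SD = {3 * product_sd:.1f}%")
```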

The acceptance criteria for the ANOVA p-values (from intermediate precision), the differences in means (from intermediate precision), and the CVs (from repeatability precision) in this MTP are taken from the method validation protocol of the sending laboratory. These acceptance criteria, carried over from the method validation protocol into the MTP, are listed in Table 3.

The CZE method validation passed all pre-set acceptance criteria, including those of Table 3. Comparing the release data (Table 2) to the acceptable limits (Table 3) provides the desired verification that these data can also be used for the AMT of the product purity test. The ANOVA p-value limit of 0.05 is certainly acceptable, as 95% confidence levels are generally used to evaluate statistical equalities (or inequalities). The limit for the acceptable difference in mean purity between both laboratories (absolute 2.0%) also serves as a secondary limit when the ANOVA p-value is below 0.05. This appears to be a reasonable choice. The absolute difference limit of 2.0% is not too tight when compared to the data of Table 2 (control SD = 1.1%, determined under intermediate precision conditions), so an expected minor testing bias can be accommodated, nor is it so wide that it would permit potential future OOS results at the release site. The 2.0% limit, when compared to the difference between the product specification and the historical product purity (95.1% - 90% = 5.1%), should keep undesired assay results to a minimum at the release laboratory. The limit for acceptable intermediate precision of 2.0% (relative %) also appears to be reasonable for data sets of n = 6 when compared to the historical assay control data (CV = 1.2%).
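
The reasoning behind the 2.0% limits can likewise be expressed as explicit comparisons against the historical figures. The following sketch is an illustrative cross-check of the argument above, not an authoritative derivation.

```python
# Illustrative cross-check of the argument for the 2.0% limits (figures as
# quoted in the text).
abs_diff_limit = 2.0       # maximum tolerated |difference of means|, % purity
ip_cv_limit = 2.0          # intermediate precision limit, relative %
control_sd = 1.1           # assay control SD under intermediate precision conditions
control_cv = 1.2           # historical assay control CV
spec_margin = 95.1 - 90.0  # historical product mean minus lower specification

print("Not too tight:", abs_diff_limit > control_sd)   # room for a minor lab-to-lab bias
print("Not too wide:", abs_diff_limit < spec_margin)   # well inside the spec margin
print("Achievable CV:", ip_cv_limit > control_cv)      # consistent with control history
```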

Both laboratories should be capable of generating purity results with similar precision and accuracy, as both laboratories are testing identical lots with similar test systems. Because there is always a chance that acceptance criteria will not be met, the MTP should state that if the acceptance criteria for any of the reported results are not satisfied, the overall results must demonstrate that the test systems are comparable and suitable to release BioProduct into the European market. However, relying on this statement should be avoided - by passing all acceptance criteria - because invoking it gives the overall impression that the test systems are not in control.

References

1. ICH. Validation of analytical procedures. Q2A. Federal Register 1995; 60.

2. Scypinsky S, et al. Pharmaceutical Research and Manufacturers Association acceptable analytical practice for analytical method transfer. Pharm. Tech. 2002; 26(3):84-88.

3. ICH. Validation of analytical procedures: methodology. Q2B. Federal Register 1996; 62.

4. International Society for Pharmaceutical Engineering. Good practice guide: technology transfer. Tampa (FL): ISPE; 2003.

5. Krause SO. How to ensure efficient execution and auditor approval for validation protocols. Workshop at International Validation Week; 2003 Oct 27-30; Philadelphia, PA. Sponsored by the Institute of Validation Technology (IVT).

6. Krause SO. How to prepare validation reports that will avoid regulatory citations in quality control. Presentation at International Validation Week; 2003 Oct 27-30; Philadelphia, PA. Sponsored by the Institute of Validation Technology (IVT).

7. Krause SO. Good analytical method validation practice, part II: deriving acceptance criteria for the AMV protocol. Journal of Validation Technology 2003; 9(2):162-78.

8. Krause SO. Good analytical method validation practice, part III: data analysis and the AMV report. Journal of Validation Technology 2003; 10(1):21-36.