Several gaps in current regulatory guidelines that govern the analytical method life cycle for the testing of biopharmaceuticals
are identified. Strategic guidance on how to monitor and control the life cycle of an analytical test method is provided.
Analytical method transfer, analytical method component equivalency, and analytical method comparability protocols are discussed
in light of risk-based strategies for validation extensions. The use of an analytical method maintenance program is illustrated
in relation to the predictable risk to patients and the firm.
The first part of this article, published in the September 2006 issue, discussed general strategies for validation extensions
to other test method components, laboratories and even different test methods.1 This second part provides practical tips on how to maintain test method suitability long after the formal completion of
analytical method validation (AMV) studies.
Case studies on how to meaningfully derive acceptance criteria for validation extensions and the validation continuum (maintenance)
are described as well as an example on how to reduce analytical variability in validated systems.
ANALYTICAL METHOD MAINTENANCE (AMM)
There are several points to consider when running an AMM programme. Usually, assay control results that are within established
limits (e.g., ±3 s.d.) will yield valid assay results for the test sample(s). Whenever possible, the assay control should
yield an assay response similar to that of the test sample. Monitoring the assay control will then indicate when unexpected
drifts or spreads in assay results may have occurred. Whenever the assay control results are close to or over their established limits,
there is a high probability that test sample results are trending in the same direction. This should not
be ignored because it causes predictable, although not exactly measurable, errors in test results (cases 1B and 2B).1
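The monitoring logic described above can be sketched in a few lines of Python. This is a minimal illustration, not a validated procedure: the function names (`control_limits`, `check_control`), the 90% warning threshold, and the historical potency values are all hypothetical, and real limits would be established per the firm's SOPs.

```python
import statistics

def control_limits(historical, k=3.0):
    """Centre line and +/- k s.d. limits from historical assay-control results."""
    mean = statistics.mean(historical)
    sd = statistics.stdev(historical)
    return mean - k * sd, mean, mean + k * sd

def check_control(result, lower, upper, warn_fraction=0.9):
    """Classify a new assay-control result against established limits.

    Results within limits but close to them are flagged as warnings,
    because test-sample results are likely trending the same way.
    """
    center = (lower + upper) / 2
    half_width = (upper - lower) / 2
    distance = abs(result - center)
    if distance > half_width:
        return "out of limits: invalidate run, investigate"
    if distance > warn_fraction * half_width:
        return "warning: control near limit, watch for drift"
    return "in control"

# Hypothetical historical potency controls (% of target)
history = [99.8, 100.4, 99.5, 100.1, 100.6, 99.7, 100.2, 99.9, 100.3, 100.0]
lo, center, hi = control_limits(history)
print(check_control(100.1, lo, hi))  # in control
```

The warning band is the practical point: a run can pass its validity criteria while the control result signals that test-sample results are drifting in the same direction.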
Because production samples are run simultaneously with assay controls, both results should be reported and overlaid on a
single statistical process control (SPC) chart (Figure 1). This makes the "non-visible" data elements that lead to cases 1B
and 2A-B more "visible" and turns them into useful information about true production process performance. Had the process development,
robustness, and validation studies been completed properly using a variance component analysis matrix, the contribution
of the sampling effect (timing, number of samples, handling, storage conditions, hold times) to overall process variance could
be estimated. If needed, sampling could then be better controlled within the standard operating procedures (SOPs). The likelihood
of all cases (1A-B, 2A-B) occurring could then be monitored. This would also allow a much better process understanding, because
root causes for process failures could be not only more readily identified, but also more readily measured.
Figure 1. Graphical representation of SPC potencies and "invisible" assay variance.
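The variance component analysis mentioned above can be sketched with a one-way random-effects ANOVA that splits total variance into a between-sampling-point component and a within-group (assay) component. The function name, the balanced 3 × 4 design, and the potency values below are illustrative assumptions, not data from the article.

```python
import statistics

def variance_components(groups):
    """One-way random-effects ANOVA estimates of between-group
    (e.g., sampling) and within-group (e.g., assay) variance components.
    Assumes a balanced design: equal replicates in every group.
    """
    n = len(groups[0])   # replicates per group
    k = len(groups)      # number of groups (e.g., sampling points)
    group_means = [statistics.mean(g) for g in groups]
    grand_mean = statistics.mean(group_means)  # valid for balanced designs
    # Mean square within groups: residual (assay) variance
    ms_within = sum(
        sum((x - m) ** 2 for x in g) for g, m in zip(groups, group_means)
    ) / (k * (n - 1))
    # Mean square between groups
    ms_between = n * sum((m - grand_mean) ** 2 for m in group_means) / (k - 1)
    var_within = ms_within
    var_between = max((ms_between - ms_within) / n, 0.0)  # truncate at zero
    return var_between, var_within

# Hypothetical potency results: 3 sampling points x 4 assay replicates
data = [
    [98.9, 99.4, 99.1, 99.2],
    [100.8, 101.1, 100.6, 100.9],
    [99.9, 100.2, 100.0, 100.1],
]
sampling_var, assay_var = variance_components(data)
total = sampling_var + assay_var
print(f"sampling: {sampling_var / total:.0%}, assay: {assay_var / total:.0%}")
```

When the sampling component dominates, tightening the sampling instructions in the SOP (timing, handling, hold times) is the step most likely to reduce apparent process variability.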
Furthermore, controlling and fixing process problems could save more batches from rejection. These steps would also be very
much in line with the recently published principles for quality and process analytical technology, leading to faster licence
approvals and reduced inspections.2–6
The robustness studies performed during analytical method development (AMD), together with the intermediate precision results
from AMV, clarify which method component is likely to change test results when it is replaced.