Formal Method Validation - The development and optimization process can improve a method, but validation does not. Validation is the final proof that regulations and expectations are met. - BioPharm


Intermediate precision is often the most important of all evaluated test method performance characteristics because it represents
the laboratory reliability to be expected on any given day. It usually reflects the true relationship between process capability
and analytical capability.^{6}

Table 2. AMV execution matrix

It is only necessary to evaluate one product batch to determine intermediate precision in the AMV protocol. Because we are not
evaluating production process variability, controllable factors should be held constant to obtain meaningful results for the
variable factors (for example, different operators). AMD is the proper time to evaluate several product batches to provide an
overall estimate of the method's capability to remain unaffected by variations in the sample matrix. Batch-to-batch variation
can be readily estimated once analytical (and sampling) variation is known.
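As a sketch of that last point: because independent variances add, the batch-to-batch standard deviation can be estimated by subtracting the known analytical (and sampling) variance from the total observed variance. A minimal illustration in Python; the function name and the numbers in the usage line are hypothetical:

```python
import math

def batch_to_batch_sd(total_sd, analytical_sd):
    """Estimate batch-to-batch standard deviation by removing the known
    analytical (and sampling) variance from the total observed variance.
    Variances subtract; standard deviations do not."""
    diff = total_sd ** 2 - analytical_sd ** 2
    # Sampling noise can make the point estimate negative; truncate at zero.
    return math.sqrt(diff) if diff > 0.0 else 0.0

# e.g., total SD of 5.0 with analytical SD of 3.0 leaves a batch SD of 4.0
print(batch_to_batch_sd(5.0, 3.0))
```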

Figure 1

The overall intermediate precision validation result (expressed as a percent coefficient of variation, % CV) can be provided to the
production process control unit for production process monitoring. This estimate reflects the expected analytical variability
contributed by the test system over time. However, the best estimate for analytical capability is provided by the assay control
performance during routine QC operations. If the control material and its testing are similar to the sample (and reference
standard, if applicable) in each assay run, the variation of this control point is the most reliable laboratory performance
indicator and should be used as such in all post-validation activities.
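To illustrate this monitoring idea, the control point's % CV across routine runs can be tracked with a few lines of Python; the control values in the usage line are hypothetical:

```python
import statistics

def control_cv(control_results):
    """Percent coefficient of variation of the assay control across
    routine QC runs, using the sample standard deviation."""
    mean = statistics.fmean(control_results)
    return 100.0 * statistics.stdev(control_results) / mean

# e.g., three runs of a hypothetical control material
print(control_cv([98.0, 100.0, 102.0]))
```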

Intermediate precision results can be generated as a full or partial factorial design by rotating operators, days, and instruments,
and possibly other critical factors identified during the AMD studies.^{6} A simple execution matrix design is illustrated in Table 2. An analysis of variance (ANOVA), in which results are grouped
by operator, day, and instrument and analyzed in one table, can be used to evaluate this method performance characteristic.
It is advisable to include a numerical secondary limit because the likelihood of observing statistical differences increases
with the precision of the test method, and some differences (bias) are normal and should be expected.^{6} An alternative statistical tool is a mixed linear model analysis,^{7} which is often more practical and useful than ANOVA. Both statistical methods are accepted by
most regulatory agencies, although the mixed linear model analysis appears to be preferred in Europe.^{7} Example mixed linear model results obtained with the execution matrix of Table 2 are shown in Table 3.^{6}
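The ANOVA approach can be sketched for a single random factor (for example, day-to-day variation) using the expected-mean-squares method. This is a simplified one-factor illustration, not the full three-factor analysis of Table 2, and the function name and data layout are assumptions for the example:

```python
import numpy as np

def between_within_components(data):
    """One-way random-effects ANOVA via expected mean squares.

    data: 2-D array with one row per level of the random factor
    (e.g., per day) and one column per replicate result.
    Returns (between-level variance, within-level variance) estimates.
    """
    k, n = data.shape
    group_means = data.mean(axis=1)
    ms_between = n * ((group_means - data.mean()) ** 2).sum() / (k - 1)
    ms_within = ((data - group_means[:, None]) ** 2).sum() / (k * (n - 1))
    # A negative point estimate is conventionally truncated at zero.
    var_between = max((ms_between - ms_within) / n, 0.0)
    return var_between, ms_within
```

With the between- and within-day variances in hand, the day-to-day contribution can be reported as a % CV relative to the overall mean.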

Following Tables 2 and 3, the mixed linear model partitions the intermediate precision variance for each of the components
as follows: Output variable = Mean + Instrument + Operator + Day + Residual, where:

The mean is a fixed effect indicator variable and denotes the overall mean of all results.

The component Instrument is a random effect variable denoting the effect due to instrument-to-instrument variation.

Like Instrument, the components Operator and Day are random effect variables denoting the effects due to operator-to-operator and day-to-day variation, respectively.

Residual denotes a measurement variation not due to any of the above stated effects.^{6}
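The model above can be fitted as crossed random effects, assuming a tool such as Python's statsmodels is available; the data below are simulated, and the column names and effect sizes are illustrative only:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a balanced execution matrix: 3 instruments x 3 operators x 3 days,
# with 2 replicate results per cell (all values here are made up).
rng = np.random.default_rng(42)
inst_eff = rng.normal(0.0, 5.0, 3)   # instrument-to-instrument effects
op_eff = rng.normal(0.0, 1.0, 3)     # operator-to-operator effects
day_eff = rng.normal(0.0, 1.0, 3)    # day-to-day effects
rows = []
for i in range(3):
    for o in range(3):
        for d in range(3):
            for _ in range(2):
                rows.append({
                    "result": 100.0 + inst_eff[i] + op_eff[o] + day_eff[d]
                              + rng.normal(0.0, 3.0),
                    "Instrument": i, "Operator": o, "Day": d,
                    "one": 1,  # single dummy group spanning all data
                })
df = pd.DataFrame(rows)

# Crossed random effects enter through vc_formula; re_formula="0" suppresses
# the default per-group random intercept (there is only one group).
model = smf.mixedlm(
    "result ~ 1", df, groups="one", re_formula="0",
    vc_formula={
        "Instrument": "0 + C(Instrument)",
        "Operator": "0 + C(Operator)",
        "Day": "0 + C(Day)",
    },
)
res = model.fit()
print(res.vcomp)   # variance components: Instrument, Operator, Day
print(res.scale)   # residual variance
```

Each fitted variance component can then be converted to a % CV relative to the overall mean for comparison against protocol acceptance criteria.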

Table 3. Mixed linear model results for intermediate precision matrix^{6}

An example for an automated ELISA test method is given in Table 3.^{6} The protocol acceptance criteria were an overall CV of no more than 15.0% and individual component CVs (e.g., Instrument)
of no more than 10.0%. The results suggest that operators and days are likely not critical method components. This is desirable
because it lowers the expectations for demonstrating operator proficiency: there is less risk that test results will be
affected when new operators are used in the future. There is, however, a significant component effect observed
among the three instruments used during the AMV studies (CV = 11.6%). Although such an effect is somewhat typical of an automated
procedure with minimal operator involvement, it still contributes significantly to overall assay variability.
Because this protocol acceptance criterion was not met, we may want to evaluate which particular
automation steps contribute to this variability by dissecting the process into smaller steps. Once "loose"
steps have been identified, we should be able to correct them, decrease instrument-to-instrument variability, and
then pass all protocol acceptance criteria. Similarly, the unidentified residual variation (CV = 11.6%) should
be dissected, first by reviewing all AMD data thoroughly. If needed, a more detailed component analysis could be performed if the
target is to improve the overall precision of 14.6%.^{6}
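If the component estimates are treated as independent, their % CVs combine in quadrature into the overall % CV, which gives a quick way to gauge how much reducing one component (such as Instrument) would improve the overall figure. A sketch with hypothetical component CVs, not the values from Table 3:

```python
import math

def overall_cv(component_cvs):
    """Combine independent variance-component %CVs into an overall %CV.
    Because independent variances add, the CVs add in quadrature."""
    return math.sqrt(sum(cv ** 2 for cv in component_cvs))

# hypothetical Instrument, Operator, Day, and Residual CVs
print(round(overall_cv([10.0, 3.0, 2.0, 9.0]), 2))
```

Note that halving the largest component CV reduces the overall CV by less than half, because the components add in quadrature rather than linearly.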