Specificity is usually defined as the ability to detect the analyte of interest in the presence of interfering substances.
Specificity can be shown by spiking known levels of impurities or degradants into a sample with a known amount of the analyte
of interest. A typical testing scheme would be to test a neat sample and a minimum of three different levels of interfering
substances. Several different analysis methods have been proposed to determine specificity; these include percent recovery,
minimum difference from baseline, and analysis of variance. Currently, there are differences in opinion regarding the appropriateness
of using analysis of variance (ANOVA) for showing a difference between baseline and a spiked sample. The goal is not to find
statistically significant differences that have no practical value, but to find statistical differences that have meaningful
implications for assay performance. It is common in clinical diagnostics to use a t-test to assess sensitivity (minimum detected dose or concentration), specifically using a method by Rodbard.5
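As an illustration, a two-sample comparison of baseline replicates against a spiked sample can be run as a pooled-variance t-test. The sketch below uses hypothetical replicate values and the standard two-sided 95% critical value for 4 degrees of freedom:

```python
import math

def pooled_t(a, b):
    """Two-sample pooled-variance t statistic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (mb - ma) / math.sqrt(sp2 * (1 / na + 1 / nb))

baseline = [100.1, 99.8, 100.3]   # neat-sample replicates (hypothetical)
spiked   = [102.0, 101.7, 102.4]  # replicates spiked with impurity (hypothetical)

t = pooled_t(baseline, spiked)
t_crit = 2.776  # two-sided 95% critical value of t with df = 4
print(f"t = {t:.2f}; statistically different: {abs(t) > t_crit}")
```

A t statistic beyond the critical value flags a statistical difference from baseline; whether that difference matters practically is the question the equivocal-zone approach below addresses.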
One proposed method, which combines the statistical rigor of analysis of variance with the practical relevance of meaningful
differences from baseline, is to use equivalence tests or a method similar to the one used to assess parallelism.6 In this method, the sample is judged similar to the target when the comparison falls within the equivocal zone, even if it is statistically different. Figure 1 shows four
scenarios; in each of these, the equivocal zone is determined by the distance between –λ and +λ, the predetermined
difference that is judged scientifically not meaningful relative to the target.
1. In scenario 1, the 95% confidence interval (denoted by the horizontal line) contains the target, and the entire 95% confidence
interval is contained in the equivocal zone. In this case, both statistical significance and scientific judgment agree.
2. In scenario 2, the 95% confidence interval does not contain the target; therefore, it would be considered statistically
different, although the 95% confidence interval is fully contained in the equivocal zone. In this case, one would judge the
sample to be scientifically similar to the target.
3. In scenario 3, the 95% confidence interval would lead one to conclude there is no statistical significance, but the 95%
confidence interval is not fully contained in the equivocal zone. Because the variability is larger here, one cannot conclude
there is a statistical difference, but scientifically the difference may be too large to accept.
4. In scenario 4, neither the 95% confidence interval nor the equivocal zone shows that the sample is equivalent to the target.
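The four scenarios reduce to a simple decision rule on two conditions: whether the 95% confidence interval contains the target, and whether it lies entirely inside the equivocal zone. The sketch below assumes the interval and λ have already been computed; the function name and numbers are illustrative:

```python
def classify(ci_low, ci_high, target, lam):
    """Map a 95% CI against an equivocal zone (target ± lam) to scenarios 1-4."""
    contains_target = ci_low <= target <= ci_high  # no statistical difference
    inside_zone = (target - lam <= ci_low) and (ci_high <= target + lam)  # scientifically similar
    if contains_target and inside_zone:
        return 1  # statistics and scientific judgment agree: similar
    if not contains_target and inside_zone:
        return 2  # statistically different, yet scientifically similar
    if contains_target and not inside_zone:
        return 3  # no statistical difference, but possibly too large scientifically
    return 4      # both approaches reject equivalence

# Hypothetical intervals around a target of 100 with lam = 5:
print(classify(98, 102, 100, 5))   # scenario 1
print(classify(101, 103, 100, 5))  # scenario 2
print(classify(96, 107, 100, 5))   # scenario 3
print(classify(106, 112, 100, 5))  # scenario 4
```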
In scenarios 1 and 4, both methods agree, whereas scenarios 2 and 3 show some discrepancy. In scenario 2, the precision is
so good that the statistical test flags a difference, although in a practical sense the sample is similar to the target. Scenario 3 gives the most
confusing conclusion; because the confidence interval is not fully contained in the equivocal zone, one might increase sampling
or perform a retest to attempt to reduce the variability. There is no clear answer for this scenario. Unfortunately, the selection
of the equivocal zone and associated λ value is still being debated in the statistical and scientific community. A potential
compromise is to use some percentage, such as 75% of the specification width, as the equivocal zone for specificity testing.
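For instance, under this compromise a hypothetical assay with specification limits of 90-110% of target would set λ as 75% of the specification half-width:

```python
spec_low, spec_high = 90.0, 110.0        # hypothetical specification limits
target = 100.0
lam = 0.75 * (spec_high - spec_low) / 2  # 75% of the specification half-width
zone = (target - lam, target + lam)
print(lam, zone)  # 7.5 (92.5, 107.5)
```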
A minimum of three repeat readings should be taken for each sample; the ideal would be six repeats. Increased repeat readings
of a sample give the analysis of variance more power to detect a difference, if a difference exists.
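The effect of replication on power can be illustrated with a small simulation: for a fixed true difference, the rejection rate of a two-group comparison (equivalent to a one-way ANOVA with two groups) rises as repeats increase from three to six. The distribution parameters and simulation settings here are hypothetical:

```python
import math
import random

def t_stat(a, b):
    # pooled-variance two-sample t statistic
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (mb - ma) / math.sqrt(sp2 * (1 / na + 1 / nb))

def power(n, shift=1.0, sd=1.0, sims=2000):
    """Estimated probability of detecting a true shift with n repeats per group."""
    crit = {3: 2.776, 6: 2.228}[n]  # two-sided 95% t critical value, df = 2n - 2
    random.seed(1)
    hits = 0
    for _ in range(sims):
        base = [random.gauss(0.0, sd) for _ in range(n)]
        spike = [random.gauss(shift, sd) for _ in range(n)]
        if abs(t_stat(base, spike)) > crit:
            hits += 1
    return hits / sims

print(power(3), power(6))  # more repeats -> higher estimated power
```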