LINEARITY AND RANGE
The linearity of an analytical procedure is its ability, within a given range, to obtain test results that are directly proportional
to the concentration of analyte in the sample. In practice, the range is defined by the smallest and largest concentrations that maintain a linear relationship between concentration and the response of the method. The ICH guidelines do not require any proof
of precision, though it is clear that without sufficient precision, the linear relationship cannot be guaranteed. The most
common method used for demonstrating linearity is least squares regression. Sometimes it is necessary to transform the data
to get a linear fit. The guidelines recommend a minimum of five dose levels throughout the range, with each tested for a minimum
of three independent readings. These same samples can also be used to test the accuracy of the method. Accuracy here means the lack
of bias at each level; as long as any bias is consistent across the range of the assay, the method is considered linear.
Residual analysis, that is, examining the difference between each observed value and the value predicted by the linear equation, can help
to assess whether the data are sufficiently linear. Daniel and Wood give an excellent explanation of residual analysis.7
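The least squares fit and residual check described above can be sketched as follows; the five-level, three-reading calibration data here are invented for illustration, matching the ICH-recommended minimum design:

```python
import numpy as np

# Hypothetical calibration data: five dose levels, three readings each
conc = np.repeat([10.0, 25.0, 50.0, 75.0, 100.0], 3)   # concentration
resp = np.array([0.101, 0.099, 0.103,
                 0.249, 0.252, 0.247,
                 0.502, 0.498, 0.501,
                 0.748, 0.753, 0.749,
                 1.002, 0.997, 1.001])                   # response

# Ordinary least squares fit: resp = slope * conc + intercept
slope, intercept = np.polyfit(conc, resp, deg=1)

# Residuals = observed minus predicted; a pattern-free scatter of
# residuals around zero supports the claim of linearity
predicted = slope * conc + intercept
residuals = resp - predicted

r = np.corrcoef(conc, resp)[0, 1]
print(f"slope={slope:.5f}, intercept={intercept:.5f}, r={r:.5f}")
print("max |residual| =", float(np.max(np.abs(residuals))))
```

Plotting the residuals against concentration, rather than only inspecting r, is what reveals curvature or other systematic departures from linearity.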
ACCURACY
Accuracy is the difference between the measured value and the true value. This differs from trueness, which is the difference
between the mean of a set of measured values and the true mean value. Accuracy is usually presented as a percent of nominal,
although absolute bias is also acceptable. Accuracy in the absence of precision has little meaning. Accuracy claims should
be made with acceptable precision. ICH guidelines suggest testing three replicates at a minimum of three concentrations. If
the same data from the linearity experiment are used, then there would be five levels. Precision of the data should be compared
to observed precision from previous studies or development runs, to confirm the observed precision and the validity of the accuracy results.
ICH guidelines recommend using confidence intervals for reporting accuracy results. Confidence intervals are used for probability
statements about the population mean—for example, that the average percentage recovery should be 95–105%.
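A minimal sketch of such a confidence interval, using invented percent-recovery data and a t critical value from standard tables:

```python
import math
import statistics

# Hypothetical percent-recovery results from an accuracy study
recoveries = [98.7, 101.2, 99.5, 100.4, 98.9, 100.8, 99.9, 100.1, 99.3]

n = len(recoveries)
mean = statistics.mean(recoveries)
s = statistics.stdev(recoveries)

# 95% two-sided confidence interval for the population mean recovery:
# mean +/- t(0.975, n-1) * s / sqrt(n); t for 8 degrees of freedom is 2.306
t_crit = 2.306
half_width = t_crit * s / math.sqrt(n)
lower, upper = mean - half_width, mean + half_width
print(f"95% CI for mean recovery: ({lower:.2f}%, {upper:.2f}%)")
```

If the resulting interval falls within the acceptance window (for example, 95-105%), the mean recovery claim is supported.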
Tolerance intervals can be used to set appropriate accuracy specifications. These say, for example, that no individual percentage
recovery should be less than 80% or greater than 120%.
Tolerance intervals make a statement about the proportion of the population values with a fixed confidence. Therefore, one
would say that x% of the population will be contained in the tolerance limits with y% confidence. Tolerance intervals are computed from the sample mean and sample standard deviation. A constant k is chosen such that the interval covers p percent of the population with a stated confidence level. The general formula for a tolerance interval is:

x̄ ± kS

where x̄ is the sample mean and S is the sample standard deviation.
Values of the k factor as a function of p and percent confidence are tabulated in Dixon and Massey.8
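The x̄ ± kS formula can be sketched the same way; the k used below (for n = 9, 95% population coverage, 95% confidence) is an approximate value of the kind tabulated in Dixon and Massey, and the exact factor should be looked up for real work:

```python
import statistics

# Invented percent-recovery data (illustration only)
recoveries = [98.7, 101.2, 99.5, 100.4, 98.9, 100.8, 99.9, 100.1, 99.3]

mean = statistics.mean(recoveries)
s = statistics.stdev(recoveries)

# Two-sided tolerance factor for n = 9, 95% coverage, 95% confidence.
# Approximate tabulated value; consult Dixon and Massey (or an
# equivalent table) for the exact k.
k = 3.53
lower, upper = mean - k * s, mean + k * s
print(f"95%/95% tolerance interval: ({lower:.2f}%, {upper:.2f}%)")
```

Note how much wider this is than the confidence interval on the mean: the tolerance interval must bound individual recoveries, not the average.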
PRECISION
The most important part of any analytical method validation is precision analysis. The ICH guidelines break precision into
two parts: repeatability and intermediate precision. Repeatability expresses the precision under the same operating conditions
over a short interval of time. Repeatability is also termed intra-assay precision. Intermediate precision expresses within-laboratory variations: different days, different analysts, different equipment,
etc. Additionally, the ICH Q2A guideline defines reproducibility as the precision among laboratories (collaborative studies,
usually applied to standardization of methodology).4 This lab-to-lab precision could be combined into the estimate of intermediate precision, because it is possible that a particular
test method could be run in more than one laboratory. The suggested testing consists of a minimum of two analysts on two different
days with three replicates at a minimum of three concentrations. If lab-to-lab variability is to be estimated, the experimental
design should be performed in each lab. The analyst and day variability combine to give the intermediate precision (lab-to-lab,
if estimated, is added here), whereas the variation after accounting for the analyst and day is the repeatability.
Variance components, or decomposition of variance, is a statistical method to partition the different sources of variation
into their respective components. Statistical programs such as Minitab are commonly used to calculate variance components.
In Minitab, the option to calculate variance components is contained in the analysis of variance (ANOVA) menu option. Box,
Hunter, and Hunter provide an excellent source for additional information on how to calculate variance components.9 It is important to remember that variances can be added or averaged, but standard deviations cannot.
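As an illustration of partitioning variance, the following sketch estimates repeatability and a day-to-day component from a hypothetical balanced design with days as a single random factor, using one-way ANOVA expected mean squares. This is a simplification of the two-analyst, two-day design described above (which Minitab's ANOVA option handles directly), but it shows the mechanics, including why variances, not standard deviations, are added:

```python
import numpy as np

# Hypothetical balanced design: 3 days (random factor), 4 replicates per day
data = np.array([
    [ 99.8, 100.1,  99.9, 100.2],   # day 1
    [100.6, 100.9, 100.7, 101.0],   # day 2
    [ 99.2,  99.5,  99.1,  99.4],   # day 3
])
k, n = data.shape                   # k groups (days), n replicates per group

grand_mean = data.mean()
day_means = data.mean(axis=1)

# One-way ANOVA mean squares
ms_between = n * np.sum((day_means - grand_mean) ** 2) / (k - 1)
ms_within = np.sum((data - day_means[:, None]) ** 2) / (k * (n - 1))

# Variance components from the expected mean squares
var_repeat = ms_within                            # repeatability
var_day = max((ms_between - ms_within) / n, 0.0)  # day-to-day component
var_intermediate = var_repeat + var_day           # variances add ...
sd_intermediate = np.sqrt(var_intermediate)       # ... then take the root
print(f"repeatability SD = {np.sqrt(var_repeat):.3f}, "
      f"intermediate precision SD = {sd_intermediate:.3f}")
```

The `max(..., 0.0)` guard reflects standard practice: when the between-day mean square falls below the within-day mean square, the negative component estimate is set to zero.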