Therefore, when confronted with an OOS test result—whether it is a single number or an average—the logical next step is to
perform a retest.
The reason is that, in some situations, sample limitations or questions about the validity of the original sample create a
need to perform the retest on a new sample from the lot. In such cases, the analyst should conduct the retest under conditions
that make it a legitimate test of the lot, so that there is no reason to believe the retest is any less valid than the original
test. If the result of the retest confirms the initial OOS conclusion, the analyst should accept the failure and reject the
lot.
If the retest shows a passing test result, however, the analyst is in a quandary, because both results are equally valid.
The situation in which the analyst is faced with opposite conclusions from equally valid results is common when the initial
OOS result is caused by extreme statistical variation. The experienced analyst knows that it is necessary to conduct additional
retests to reach a high level of confidence that the lot is really acceptable.
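The interplay between random analytical variation and retesting can be illustrated with a short simulation. The numbers below (true lot potency, method standard deviation, specification limit) are purely hypothetical and are not taken from any regulation or case; they simply show how a conforming lot can occasionally yield a single OOS result, and how averaging independent retests reduces that chance:

```python
import random

random.seed(1)

# Hypothetical example: a lot whose true potency is 99.0% of label claim,
# measured by a method with a standard deviation of 1.0%, against a lower
# specification limit of 97.0%. All numbers are illustrative only.
TRUE_MEAN = 99.0
METHOD_SD = 1.0
LOWER_SPEC = 97.0

def single_assay():
    """One simulated assay result: true lot value plus random analytical error."""
    return random.gauss(TRUE_MEAN, METHOD_SD)

def mean_of(n):
    """Mean of n independent simulated assay results."""
    return sum(single_assay() for _ in range(n)) / n

N = 100_000

# How often a single assay of this conforming lot falls OOS.
oos = sum(1 for _ in range(N) if single_assay() < LOWER_SPEC)
print(f"Single-assay OOS rate: {oos / N:.2%}")

# How often the mean of three independent retests falls OOS.
oos3 = sum(1 for _ in range(N) if mean_of(3) < LOWER_SPEC)
print(f"Mean-of-three OOS rate: {oos3 / N:.2%}")
```

Under these assumptions a single assay falls OOS a few percent of the time even though the lot truly conforms, while the mean of three retests almost never does, which is why additional retests raise confidence in the lot.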
The OOS problem arose because some companies' managers would immediately accept a passing retest result and discard the previous
failing result when there was no scientifically defensible reason for doing so. This practice ultimately ended up in court, in the case of United States v. Barr Laboratories.
THE BARR DECISION
In 1993, Barr Laboratories was sued by the US government (i.e., the US Food and Drug Administration) over a broad set
of issues, including the way the company handled OOS results.3–5 Barr lost, and the judge who heard the case, Judge Wolin, issued a ruling commonly referred to as the Barr Decision. The Barr Decision turned the OOS problem into a major burden for the QC laboratory by creating a regulatory requirement that,
following an OOS result, an investigation must be initiated before any retesting can be done. Consequently, OOS results arising
from random variation must themselves be investigated before any action (i.e., retesting) can be taken to determine whether
they are random events. This creates additional work for QC laboratories, and an intense incentive to simplify investigations
by blaming OOS results on laboratory error. That practice can create situations in which retesting amounts to testing a product
into compliance under repeated claims of laboratory error. An outside observer might conclude that such a laboratory is not
competent to be performing the tests at all.
Most QC supervisors who have received basic statistical training know that statistical formulas can be used to calculate the
proper number of replicates needed to overcome a single failing result. The number of replicates is based on previous data
concerning the variability of the product and test method. In the Barr Decision, however, the judge offered the opinion that
seven passing results are needed to overcome one OOS result. This caused a number of companies to adopt a "seven replicate
rule" when confronted with an OOS test result. This procedure and the testimony that originally led to the judge's conclusion
were completely without scientific foundation.
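As a sketch of the kind of calculation such supervisors have in mind, the standard sample-size formula n = (z·σ/E)² gives the number of replicates whose mean estimates the true lot value to within a margin E at a chosen confidence level. The function name and the numeric inputs below are illustrative assumptions only, not anything prescribed by the Barr Decision or by FDA:

```python
import math

def replicates_needed(sigma: float, margin: float, z: float = 1.96) -> int:
    """Minimum number of replicates so that the mean of n results estimates
    the true lot value to within +/- margin, at the confidence level implied
    by z (1.96 corresponds to ~95%). Standard formula: n = (z * sigma / margin)**2,
    rounded up. sigma is the known variability of the product and test method.
    """
    return math.ceil((z * sigma / margin) ** 2)

# Illustrative inputs: method standard deviation of 1.0% of label claim,
# and a desire to know the lot mean to within 1.0% at ~95% confidence.
print(replicates_needed(sigma=1.0, margin=1.0))  # -> 4

# Halving the acceptable margin roughly quadruples the required replicates.
print(replicates_needed(sigma=1.0, margin=0.5))  # -> 16
```

The point is that the required number of replicates follows from the measured variability and the desired precision, rather than being a fixed count such as seven.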
THE 1998 DRAFT GUIDANCE THAT FOLLOWED THE BARR DECISION
Following the 1993 Barr Decision, the OOS problem was formalized by FDA in a draft guidance document, Out of Specification (OOS) Test Results for Pharmaceutical Production, issued in September 1998.6 Although it was a draft document, for many years it was the only guidance document available on this subject.
Like the Barr Decision, the FDA's draft OOS guidance document required that any single OOS result must be investigated. The
guidance also introduced procedures for investigating OOS test results. It made clear recommendations for the actions that
should be taken during initial laboratory investigations and formal investigations (with reporting requirements) for situations
in which the OOS result cannot be attributed to laboratory error. The recommendations detailed the elements required for the
investigations and the reports that would be generated. It also described the responsibilities of the analyst who obtains an
OOS test result and of that analyst's supervisor.
In addition to the investigation requirement, the guidance document incorporated many other elements of the Barr Decision.
Thankfully, it did not perpetuate the erroneous idea that seven passing test results can overcome one failing result. Unfortunately,
other odd ideas, particularly those related to averaging, were retained.