Avoiding the Pain of Out-of-Specification Results

Failing to apply important statistical principles leads to unnecessary investigations and wastes time and resources. Control charts can help.
Jun 01, 2008
Volume 21, Issue 6


The pharmaceutical industry does not have a good track record of applying sound statistical principles to the investigation of out-of-specification (OOS) results. Recently, Steven Kuwahara presented an article on the history of the OOS problem that highlighted some statistical deficiencies.1 In this article, we present some additional statistical principles. Failure to apply these principles causes unnecessary investigations, nearly guarantees recurrence, and wastes valuable time and resources. Control charts can help scientists and managers avoid these pitfalls.

In many factories and laboratories, the prevalent mindset is that if a test result remains within specifications, nothing more needs to be said about that result, and indeed that nothing more should be said about it. This fundamental error causes statistically significant signals that ought to warn of impending trouble to be ignored. These overlooked or disregarded signals and disturbances eventually produce out-of-specification (OOS) results that could have been avoided. They also allow an unnecessary increase in variation that, in accordance with Little's Law, damages not only quality but also productivity.2 Any approach that blinds people to statistically valid signals and warnings is poor science. Control charts, which are a method of separating random variation from non-random variation, can help scientists and managers avoid these pitfalls.
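To make the idea of separating random from non-random variation concrete, here is a minimal sketch of the limit calculation behind an individuals (XmR) control chart of the kind shown later in Figure 1. The batch values and the function name are hypothetical; the 2.66 and 3.27 scaling factors are the standard Shewhart constants for moving ranges of two consecutive points.

```python
def xmr_limits(values):
    """Return (center, lcl, ucl, mr_ucl) for an individuals (XmR) chart.

    The natural process limits come from the average moving range, not
    from the specification limits, which is what lets the chart flag
    non-random behavior while results are still within specification.
    """
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    center = sum(values) / len(values)
    return (center,
            center - 2.66 * mr_bar,   # lower natural process limit
            center + 2.66 * mr_bar,   # upper natural process limit
            3.27 * mr_bar)            # upper limit for the mR chart

# Hypothetical batch-release results (e.g., % of label claim):
batches = [99.8, 100.1, 99.6, 100.4, 99.9, 100.2, 99.7, 100.0]
center, lcl, ucl, mr_ucl = xmr_limits(batches)
```

A point outside `lcl`/`ucl`, or a non-random pattern within them, is a statistically valid signal worth investigating even when every result passes release testing.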


When a product batch is tested and found to be out of specification, the prevailing practice in the pharmaceutical industry is to study the faulty batch, independent of previous batches, to determine the cause of the OOS result. This often leads to erroneous causal relationships, an incorrect determination of the root cause of the OOS result, and corrective action that almost guarantees future recurrences. The result is an increase in operational and compliance costs.

Driving this approach to OOS testing is the "go, no-go" mindset so prevalent in the industry. One real problem with the "go, no-go" mindset is that no action is taken until an OOS result is recorded; in the meantime, scientists tend to ignore the statistical signals that precede it.

Figure 1. Two views of batch release results: batch release charts with specification and control limits. The top chart is the batch release chart versus specifications. The bottom chart is the same data as a Shewhart control chart.
The top chart in Figure 1 shows the batch results against specifications and the lower chart is a Shewhart control chart (the moving ranges chart is omitted for clarity).

On two occasions, batches have tested as OOS. Because the first OOS batch (batch 18) is an isolated event, that batch and the circumstances leading to its production should be studied to determine the root cause of the problem and to take corrective action.

However, to take similar action on the second OOS batch (batch 61) would be a mistake. Certainly, it is OOS, and an investigation must take place. But although batch 61 reported as OOS, the actual change to the process occurred at batch 41. The control chart shows two distinct systems, with an upward shift in the process mean at batch 41. The second OOS result is merely a random point in a stable, shifted system. Studying that batch alone will not reveal the true cause of the failure.

To understand what changed and when, it is better to use a control chart. In this case, after the rise in the process mean at batch 41, it was only a matter of time until a random point fell beyond specifications. There are no statistically significant differences for any of the data after batch 41, including batch 61, which failed release testing.
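The kind of signal that flags the shift at batch 41 long before batch 61 fails release can be sketched with a simple run rule (eight consecutive points on one side of the center line, one of the classic Western Electric rules). The data and the function name here are hypothetical, constructed to mimic the pattern in Figure 1; the center line is assumed to have been fixed from an earlier in-control baseline.

```python
def first_run_signal(values, center, run=8):
    """Return the index at which a run of `run` consecutive points on
    the same side of the center line completes, or None if no run occurs.
    """
    streak, side = 0, 0
    for i, v in enumerate(values):
        s = 1 if v > center else (-1 if v < center else 0)
        if s != 0 and s == side:
            streak += 1
        else:
            side, streak = s, (1 if s != 0 else 0)
        if streak >= run:
            return i
    return None

# Hypothetical results: 40 batches alternating around a center of 100.0,
# then an upward mean shift (mimicking the shift at batch 41 in Figure 1).
results = [100.1, 99.9] * 20 + [100.5] * 10
signal_at = first_run_signal(results, center=100.0)
```

With this data the rule fires at index 47 (the eighth consecutive shifted batch), well before any individual result need fall outside specifications, which is exactly why the shift, not the later OOS batch, is the event to investigate.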
