The pharmaceutical industry does not have a good track record of applying sound statistical principles to the investigation of out-of-specification (OOS) results. Recently, Steven Kuwahara published an article on the history of the OOS problem that highlights some of these statistical deficiencies.1 In this article, we present some additional statistical principles. Failure to apply them causes unnecessary investigations, nearly guarantees recurrence, and wastes valuable time and resources. Control charts can help scientists and managers avoid these pitfalls.
When a product batch is tested and found to be out of specification, the prevailing practice in the pharmaceutical industry is to study the faulty batch, independent of previous batches, to determine the cause of the OOS result. This often leads to erroneous causal relationships, an incorrect determination of the root cause, and corrective action that almost guarantees future recurrences. The result is an increase in both operational and compliance costs.
Driving this approach to OOS testing is the "go, no-go" mindset so prevalent in the industry. The real problem with this mindset is that an OOS result must be recorded before any action is taken; until that point, scientists tend to ignore the statistical signals the process is giving them.
Suppose that on two occasions batches have tested OOS. Because the first OOS result (batch 18) is an isolated event, that batch and the circumstances leading to its production should be studied to determine the root cause of the problem and to take corrective action.
However, to take similar action on the second OOS batch (batch 61) would be a mistake. Certainly it is OOS, and an investigation must take place. But although batch 61 was the batch reported as OOS, the actual change to the process occurred at batch 41. The control chart shows two systems, with a shift upward in the process mean at batch 41. The second OOS result is simply a random point in a stable, shifted system; studying that batch alone will not reveal the true cause of the failure.
To understand what changed and when, it is better to use a control chart. In this case, after the rise in the process mean at batch 41, it was only a matter of time until a random point fell beyond specifications. There are no statistically significant differences among any of the data after batch 41, including batch 61, which failed release testing.
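The scenario above can be sketched with a minimal individuals (I) control chart on simulated data. All batch values below are hypothetical: we simulate a stable process for batches 1-40, shift the mean upward at batch 41, and estimate the center line and 3-sigma limits from the baseline period using the average moving range and the standard 2.66 constant.

```python
import random

random.seed(7)

# Hypothetical assay values (simulated, arbitrary units): batches 1-40
# come from a stable process; the process mean shifts upward at batch 41,
# mirroring the scenario described above.
values = ([random.gauss(100.0, 1.0) for _ in range(40)]
          + [random.gauss(104.0, 1.0) for _ in range(40)])

# Individuals (I) chart: center line and 3-sigma limits estimated from
# the baseline period via the average moving range and the standard
# 2.66 constant (3 / d2, with d2 = 1.128 for subgroups of size 2).
baseline = values[:40]
center = sum(baseline) / len(baseline)
moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar

# The first batch beyond the control limits flags the shift itself,
# typically well before any single batch drifts outside the (wider)
# specification limits and triggers an OOS investigation.
signal = next(i + 1 for i, v in enumerate(values) if not (lcl <= v <= ucl))
print(f"center={center:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}  "
      f"first out-of-control signal at batch {signal}")
```

In practice, the control limits would be established from an in-control baseline and recalculated only after a confirmed, intentional process change; the point is that the chart signals the shift at its onset rather than at the eventual specification failure.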