TRADITIONAL PROCESS MONITORING: USE OF CONTROL CHARTS
Statistical process control is a methodology that uses graphical and statistical tools to identify, analyze,
control, and reduce variability in a process.6 The most basic tool is the run chart (or running record), a simple graphical tool used to record
and display process data over time. A more advanced version of the run chart is the control (or Shewhart) chart, which is
a run chart that also includes upper and lower control limits (UCL and LCL, respectively) and a centerline calculated from
the process data.7 These upper and lower control limits are derived statistically, and provide bounds for the natural variability of the process.
Upper and lower control limits are typically set at three standard deviations above and below an established process
mean; individual points are evaluated against these limits. Thus, control charts use the variability inherent in the process
to help determine whether observed variability is likely due to chance or the result of some (perhaps as
yet unidentified) process shift or change.
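As a minimal sketch of how such limits might be computed (assuming an individuals-type chart and estimating sigma with the sample standard deviation; real implementations often use the average moving range instead, and the function names here are illustrative only):

```python
import statistics

def control_limits(data, k=3):
    """Compute centerline and +/-k-sigma control limits from process data.

    Sketch only: sigma is the sample standard deviation; individuals
    charts in practice often estimate sigma from the moving range.
    """
    center = statistics.mean(data)
    sigma = statistics.stdev(data)
    return center - k * sigma, center, center + k * sigma

def out_of_control(data, lcl, ucl):
    """Return the points falling beyond either control limit."""
    return [x for x in data if x < lcl or x > ucl]
```

A point returned by `out_of_control` is the kind of observation that would trigger investigation for a special cause.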
After creating a control chart, a distinction needs to be made between common and special causes of variation. Common causes
refer to the many unknown sources of variation that go into producing a natural variation that is predictable within limits.
Special causes (also known as assignable causes) refer to the sources of variation that are not part of the natural variation.
Special causes of variation should be examined to determine if any data point should be excluded from the analysis because
of a known cause.
Statistical process control is a three-step process. The first step is process trending, in which the first 15 data
points are examined for inclusion and the series of data points is analyzed for trends. The second step is preliminary process
control, in which those first 15 data points are used to calculate a centerline representing the mean of the data, along with the UCL
and LCL representing three standard deviations above and below the mean. All three are displayed on the chart. The 16th to 30th data
points are then plotted to see whether each point falls within the control limits, and the series of data points is analyzed for trends.
The third and final step is called statistical process control. It is the same procedure as preliminary process control, except
that the centerline and control limits are computed from enough data to be considered statistically significant.
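The second step above might be sketched as follows (illustrative only; the function and variable names are assumptions, and at least 30 observations are assumed):

```python
import statistics

def preliminary_control(points):
    """Step two sketch: limits from the first 15 points, check points 16-30.

    Returns the centerline, LCL, UCL, and a list of (point number, value)
    pairs for monitored points that fall outside the limits.
    """
    baseline, monitored = points[:15], points[15:30]
    center = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    lcl, ucl = center - 3 * sigma, center + 3 * sigma
    violations = [(i + 16, x) for i, x in enumerate(monitored)
                  if x < lcl or x > ucl]
    return center, lcl, ucl, violations
```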
Two different approaches are used to monitor biopharmaceutical manufacturing processes. The first approach
focuses on data from a single lot. Such data associated with a particular lot are reviewed before releasing that lot. A data
point beyond a control limit for a given process would cause a nonconformance that would have to be accounted for as part
of the lot disposition process. The second monitoring approach analyzes process performance data across lots looking for trends.
When looking for trends, process monitoring uses another statistical process control tool called run rules, also known as
control rules or run tests. In 1956, the Western Electric Company published the first set of run rules in the Western Electric
Handbook, which is today called the AT&T Statistical Quality Control Handbook.8 The rules were based on dividing the control chart into six segments using one, two, and three standard deviations above
and below the centerline. The first four rules defined conditions that indicated that the process was not in statistical control.
The remaining three rules identified deviations and trends within the bounds of the UCL and LCL. In 1984, Lloyd Nelson published
a similar set of run rules in the American Society for Quality's Journal of Quality Technology.9 The Nelson run rules included in-control trends that identified potential problems. These trend rules, particularly
Nelson rules 1 through 4, are very useful in process monitoring. The ability of these rules to spot a trend or a deviation is illustrated
in Figure 1.
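As an illustration, the first two Nelson rules could be sketched as follows (a hypothetical implementation; the functions return the indices of flagged points, not chart coordinates):

```python
def nelson_rule_1(points, center, sigma):
    """Rule 1: any point more than three standard deviations from the centerline."""
    return [i for i, x in enumerate(points) if abs(x - center) > 3 * sigma]

def nelson_rule_2(points, center, run=9):
    """Rule 2: `run` consecutive points on the same side of the centerline.

    Returns the index at which each qualifying run is first completed.
    """
    hits = []
    streak, side = 0, 0
    for i, x in enumerate(points):
        s = (x > center) - (x < center)  # +1 above, -1 below, 0 on the line
        streak = streak + 1 if s == side and s != 0 else (1 if s != 0 else 0)
        side = s
        if streak >= run:
            hits.append(i)
    return hits
```

Rule 1 catches a single large deviation, while rule 2 catches a sustained shift that never crosses a control limit, which is why the trend rules matter for cross-lot monitoring.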
Hotelling's T² and the Multivariate Exponentially Weighted Moving Average (MEWMA) are two other approaches that are widely used in the biotech
industry for process monitoring.10 Hotelling's T²
monitors individual process observations, while MEWMA monitors shifts and drifts in a process. Annamalai et al. have presented
a case study involving monitoring of eight parameters (harvest volume, harvest amount, cSulf RP–HPLC, B. sepharose recovery, overall recovery, specific activity, peptide map sub-unit percent, and DS rapid acidic C4 RP–HPLC) for a protein
purification process.11 In their case study, Hotelling's T²
identified an unusual production batch during monitoring that would have otherwise gone unnoticed, and MEWMA revealed small process
drifts that were previously hidden. Understanding the origins of these drifts provided opportunities to improve the process.
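A minimal sketch of the T² computation (assuming NumPy is available; in practice the mean and covariance come from a phase I reference data set, and control limits follow an F distribution, neither of which is shown here):

```python
import numpy as np

def hotelling_t2(X):
    """Compute the T-squared statistic for each row (observation) of X.

    X has shape (n observations, p parameters). Sketch only: the mean
    and covariance are estimated from the same data being monitored.
    """
    mean = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    centered = X - mean
    # Quadratic form (x - mean)' S^-1 (x - mean) for every row at once
    return np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
```

An observation with a T² value above the chart's control limit is a multivariate outlier, the kind of unusual batch flagged in the cited case study even when every individual parameter stays within its own univariate limits.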