Process Characterization Essentials, Model Optimization, and Controlling the Process

This article examines how process characterization builds process understanding and how that understanding is then applied to controlling a process.

The goal of process understanding and characterization is to build a clear line-of-sight from the critical quality attributes (CQAs) to the detailed unit operations and their associated control strategies. Furthermore, it is intended to aid in the understanding and definition of the process. This article is the second part of a two-part series covering in detail the topic of process characterization. In part one (1), the following topics were discussed:

  • CQA selection

  • High-level risk assessment to identify unit operations with risk 

  • Defined process with risk (upstream, downstream, fill/finish)

  • Low-level risk assessment (factors, responses, ranges, and model terms)

  • Design of experiments (DoE) or retrospective analysis 

  • Model building and statistical significance

  • Effect size and critical parameter identification

  • Single model equation generation and documentation.

In part two, the following topics are discussed:

  • Combined model optimization and selection of set point

  • Model verification (confirmatory experimental runs at the optimum)

  • Model calibration to correct for scale or equipment change

  • Design space (documented and filed) 

  • Edge of failure analysis

  • Tolerance design and operational limits

  • Control strategy 

  • Capability and quality metrics.

Combined model optimization and selection of a set point

Figure 1 shows the combined model profiler that is used for the optimization and selection of the set point. The profiler integrates the individually created models and their associated confidence intervals. It also allows for the definition of the CQAs, key responses or in-process controls, goals, limits, and importance prior to optimization. If robust optimization is of interest (the lowest transmitted variation), the partial derivatives of the model equation will need to be included in the optimization process.

Figure 1: Integrated model profiler. (Figures courtesy of the author)

In the purification figure, a single set point was selected to maximize the titer/recovery, minimize the impurity levels and achieve the defined pH. To communicate process understanding, it is helpful to visualize the interactions present in the process. An interaction profiler is recommended per International Council for Harmonization (ICH) Q8 Appendix B, Depiction of Interactions (2) and is shown for the purification example in Figure 2. Interaction profilers help to visualize interactions for each response and multiple factor combinations.

Figure 2: Interaction profiler.
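The multi-response optimization behind such profilers is commonly done with desirability functions: each response is scaled to 0–1 against its goal, and the overall desirability (a geometric mean) is maximized. A minimal sketch, with hypothetical model coefficients and limits (not taken from the article), might look like:

```python
import numpy as np

# Hypothetical fitted models for a purification step (illustrative
# coefficients only): responses as functions of coded factors x1, x2.
def titer(x1, x2):    return 95 + 3*x1 - 2*x1**2 + 1.5*x1*x2
def impurity(x1, x2): return 1.2 - 0.4*x1 + 0.6*x2 + 0.3*x2**2

def desirability_max(y, lo, hi):   # larger-is-better, scaled to 0..1
    return np.clip((y - lo) / (hi - lo), 0, 1)

def desirability_min(y, lo, hi):   # smaller-is-better, scaled to 0..1
    return np.clip((hi - y) / (hi - lo), 0, 1)

# Grid search over the characterized (coded) ranges; the overall
# desirability is the geometric mean of the individual desirabilities.
g = np.linspace(-1, 1, 201)
X1, X2 = np.meshgrid(g, g)
D = np.sqrt(desirability_max(titer(X1, X2), 90, 100) *
            desirability_min(impurity(X1, X2), 0.2, 2.0))
i = np.unravel_index(np.argmax(D), D.shape)
print(f"set point: x1={X1[i]:.2f}, x2={X2[i]:.2f}, D={D[i]:.3f}")
```

A weighted geometric mean could incorporate the "importance" settings mentioned above; gradient-based optimizers can replace the grid search for higher-dimensional factor spaces.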


Model verification 


In many cases, the optimized models sit in a regime within the characterized design space that has never been run or tested. Therefore, it is crucial to run confirmation tests at the target condition to verify the model and its associated prediction. It is important to understand that the prediction from the profiler is a mean prediction, not a unit (batch) prediction. Model verification tests are confirmatory experimental runs performed at the optimized set point, at the scale at which the experiments were performed. Typically, one to three verification runs are used, and all must be within the acceptance criteria.



To set the limits for model verification, a simulation (see Figures 3 and 4) is run from the profiler with n ≥ 100,000 draws; this accounts for the transmitted variation from the model, the root mean squared error (the residual variation after fitting the model), and the variation at the set point. A 99% limit (parametric or non-parametric) is then used to set the acceptance criterion for each CQA/response. If every verification run falls within the 99% limit, the model has been verified; if any run falls outside, it has not. The same approach can be used to determine whether there is a scale effect during scale-up.

Figure 3: Profiler with simulator.

Figure 4: 99% acceptance limits for model verification.
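The simulation step above can be sketched directly; the means and standard deviations below are illustrative assumptions, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # n >= 100,000, as recommended above

# Illustrative assumptions for one response (e.g., recovery, %):
mean_pred   = 95.0   # mean prediction from the profiler at the set point
transmitted = 0.8    # SD transmitted through the model from factor variation
rmse        = 1.5    # residual SD after fitting the model

# Unit (batch) prediction = mean prediction + both sources of variation
sim = mean_pred + rng.normal(0, transmitted, n) + rng.normal(0, rmse, n)

# Non-parametric 99% limits become the verification acceptance criteria
lo, hi = np.percentile(sim, [0.5, 99.5])
print(f"99% acceptance limits: [{lo:.2f}, {hi:.2f}]")

verification_runs = [94.2, 96.1, 95.5]   # hypothetical confirmatory runs
verified = all(lo <= y <= hi for y in verification_runs)
```

Each CQA/response gets its own simulated limits; a parametric alternative would use mean ± z(0.995) times the combined standard deviation.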

Model calibration to correct for scale


Once the set point has been selected and verified, the model may need further calibration to account for a scale effect due to a change in scale, equipment, or location (3). After such a change, the model verification check is used to detect a scale effect; if one is present, the model needs calibration.

Calibration may be a single-point calibration (mean shift), a two-point calibration (slope and intercept), or even a three-point calibration (slope, intercept, and quadratic). The model was developed at small scale and then calibrated to the at-scale process so that predictions and communication of process knowledge are meaningful at the good manufacturing practice (GMP) scale to be used. It is not useful to have a 2-L reactor model that does not match the 2000-L reactor's performance.
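The three calibration forms can be fit from paired small-scale predictions and at-scale observations; the numbers below are purely illustrative:

```python
import numpy as np

# Hypothetical paired data: small-scale model predictions vs. observed
# at-scale results for the same conditions (illustrative values only).
pred_small = np.array([88.0, 91.0, 94.0, 97.0])
obs_large  = np.array([86.5, 89.8, 93.1, 96.4])

# Single-point calibration: mean shift only
shift = np.mean(obs_large - pred_small)

# Two-point (linear) calibration: slope and intercept
slope, intercept = np.polyfit(pred_small, obs_large, 1)

# Three-point (quadratic) calibration: slope, intercept, and curvature
quad_coeffs = np.polyfit(pred_small, obs_large, 2)

def calibrated(pred):
    """Apply the linear calibration to a small-scale prediction."""
    return slope * pred + intercept
```

The simplest form that removes the scale effect should be preferred; the verification check described above can confirm the calibrated model at scale.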


Design space (documented and filed)


There are two kinds of design space: documented, and filed with the health authorities in licensure submissions. ICH Q8 defines the design space as follows (2):

“The multidimensional combination and interaction of input variables (e.g., material attributes) and process parameters that have been demonstrated to provide assurance of quality. Working within the design space is not considered as a change. Movement out of the design space is considered to be a change and would normally initiate a regulatory post-approval change process. Design space is proposed by the applicant and is subject to regulatory assessment and approval (ICH Q8).”

Every study (DoE or retrospective) generates a design space that should be documented in a development report. The design space visually shows how changes in the mean impact quality attributes and may indicate a safe operating regime. What is often unclear is the purpose of the design space: to clearly define limits within which the process may be adjusted to achieve improved control. A filed design space requires the following: a transfer function; adjustable limits to be used for process controls; and control logic that explains how the design space will be used to regulate/control the process. The design space does not reflect unit variation at the set point (see Figure 5).

Figure 5: Design space.


Edge of failure analysis


Edge of failure analysis is used to visualize and to set operational ranges on factors that influence CQAs, in-process controls, or productivity parameters. The examples of edge of failure analysis shown here were created using SAS/JMP (4). Edge of failure analysis also helps in understanding the batch/unit variation, rather than just the average, within the design space (see Figure 6).

Figure 6: Edge of failure plots.
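The distinction between the mean-based design-space boundary and the unit-based edge of failure can be sketched numerically. In this hypothetical one-factor example (illustrative model and numbers), the edge moves inward once batch-to-batch variation is included:

```python
import numpy as np

# Illustrative model: impurity as a function of one coded factor x,
# with assumed unit (batch) variation around the mean.
def impurity_mean(x): return 0.8 + 0.9 * x + 0.4 * x**2
unit_sd = 0.15        # assumed batch-to-batch SD at a fixed set point
spec_limit = 2.0      # assumed upper spec limit for the impurity

x = np.linspace(-1, 1, 2001)
mean_fail = x[impurity_mean(x) > spec_limit]                # mean exceeds spec
unit_fail = x[impurity_mean(x) + 3*unit_sd > spec_limit]    # units exceed spec

edge_mean = mean_fail.min() if mean_fail.size else None
edge_unit = unit_fail.min() if unit_fail.size else None
print(f"edge of failure (mean): x = {edge_mean:.3f}")
print(f"edge of failure (units, mean + 3 SD): x = {edge_unit:.3f}")
```

The unit-based edge is the one that matters for setting operational ranges, because individual batches, not averages, must meet the specification.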


Tolerance design and operational ranges


ICH Q6B provides guidance on tolerance design and setting specification limits (5), and ICH Q8 discusses the need for defining normal operating ranges (NORs) and proven acceptable ranges (PARs) for unit operations. From the DoE and simulation, it is possible to define the NOR and PAR ranges of the process without running additional tests. Generally, as long as the operational range of the characterized design space is used in setting NORs and PARs, the simulation will provide sufficient assurance of quality without additional NOR and PAR studies. Care should be taken not to extrapolate the sensitivities from the model; it is a more risk-averse strategy to interpolate within the characterized design space. Minor extrapolation (less than 50% of the characterized range) may be possible; however, it should be the exception, and verification testing is recommended.

Tolerance design sets the limits from the characterized process to ensure a safe operating regime for the process. It considers all relevant CQAs and in-process controls (see Figure 7).

Figure 7: Edge of failure with tolerances.


Control strategy 


Another direct benefit of a well-characterized process is the ability to define a clear control strategy. ICH Q8 states the following regarding control strategy (2):

“The elements of the control strategy discussed in Section P.2 of the dossier should describe and justify how in-process controls and the controls of input materials (drug substance and excipients), intermediates (in-process materials), container closure system, and drug products contribute to the final product quality. These controls should be based on product, formulation and process understanding and should include, at a minimum, control of the critical process parameters and material attributes.”



The following are methods of control that need to be addressed in Section P.2:

  • Drug product/drug substance release tests for product control

  • In-process controls/monitoring 

  • NOR and PAR limits for input parameter control

  • Qualification testing and limits for raw materials and intermediates

  • Closed loop process controls with adjustable design space

    o Feedforward controls
    o Feedback controls
    o In-situ monitoring/control
    o In-control action plan to detail how to adjust the process
    o Control charts and monitoring for the method and process.

The four elements of a closed loop control plan are:

  • Sensor design using a control chart, analytical method or probe, and sampling plan

  • Alarms to indicate the process is off target and trending

  • Control logic extracted from the process model/DoE: an in-control action plan that details how and how much to adjust, with all adjustments remaining within the filed design space

  • Verification that the process is back in control.
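The four elements above can be sketched as a simple feedback loop; all names, limits, and numbers here are hypothetical, and the key point is that any corrective adjustment is clamped to the filed design space:

```python
# Hypothetical filed adjustable range and targets for a pH set point.
FILED_LOW, FILED_HIGH = 6.8, 7.4     # filed design-space limits for the set point
TARGET, ALARM_SD = 7.1, 0.05         # process target and control-limit width

def control_action(measured_ph, current_setpoint):
    """Return a new set point if the process is off target, else None."""
    deviation = measured_ph - TARGET
    if abs(deviation) <= 3 * ALARM_SD:   # inside control limits: no action
        return None
    # Control logic from the process model/DoE: shift the set point to
    # compensate, but never move outside the filed design space.
    proposed = current_setpoint - deviation
    return min(max(proposed, FILED_LOW), FILED_HIGH)

print(control_action(7.12, 7.1))   # in control: no adjustment
print(control_action(7.35, 7.1))   # off target: adjusted within the filed range
```

In practice the sensor would be a control-charted measurement with a defined sampling plan, and a subsequent in-control check (element four) would verify the adjustment worked.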

Capability analysis and quality metrics

The final measures of process characterization and control are the quality metrics. FDA plans to use a risk-based approach built on quality metrics to monitor and audit drug makers' manufacturing facilities and processes (6). Four primary metrics should be used for continuous process verification: the mean (deviation from target); the standard deviation, to measure variation over time; the actual lot failure rate, for each CQA and combined across CQAs; and the predicted lot failure rate in parts per million (PPM), for each CQA and aggregated across all CQAs. Cpk is not recommended: it has no direct conversion to a failure rate, is not additive, and can be misleading when distributions are not normal. All CQAs should have PPM levels below 100 to be considered performing well.
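Predicted PPM per CQA and its aggregation across CQAs can be computed directly from the fitted distributions; the CQA names, means, standard deviations, and spec limits below are illustrative assumptions:

```python
from statistics import NormalDist

# Hypothetical CQAs: name -> (mean, SD, lower spec limit, upper spec limit)
cqas = {
    "titer":    (95.0, 1.2, 90.0, None),
    "impurity": (0.9, 0.15, None, 1.5),
}

def ppm_out_of_spec(mean, sd, lsl, usl):
    """Predicted lot failure rate in PPM under a normal assumption."""
    nd = NormalDist(mean, sd)
    p = 0.0
    if lsl is not None: p += nd.cdf(lsl)          # fraction below lower limit
    if usl is not None: p += 1 - nd.cdf(usl)      # fraction above upper limit
    return p * 1e6

ppm = {name: ppm_out_of_spec(*v) for name, v in cqas.items()}

# Aggregate across CQAs: probability that at least one CQA fails a lot.
p_pass_all = 1.0
for v in ppm.values():
    p_pass_all *= 1 - v / 1e6
total_ppm = (1 - p_pass_all) * 1e6
print(ppm, total_ppm)
```

Unlike Cpk, these PPM values convert directly to expected failure rates and combine naturally across CQAs; for non-normal CQAs, the simulated distribution from the profiler can replace the normal assumption.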

During process validation (7), the design space, NOR/PAR ranges, and all process controls need to be defined and demonstrated to control key sources of variation in the process, achieving batch consistency with high batch-release success rates.


Process characterization, model building, design space, and control strategies are essential, required skills for modern drug development. Linking CQAs, risk assessments, analytical methods, DoE designs, and process understanding are skills that must be nurtured and applied within the development team. Process characterization is critical to generating and communicating process understanding and then applying that understanding to controlling the process. Understanding how to build process controls into a biological process is the essence of modern drug development and, subsequently, manufacturing.


References

1. T.A. Little, BioPharm International 30 (3) 46–52.
2. ICH, Q8(R2) Pharmaceutical Development (ICH, 2009).
3. T. Little, BioPharm International 28 (10) (October 2015).
4. SAS/JMP 13.2.
5. ICH, Q6B Specifications: Test Procedures and Acceptance Criteria for Biotechnological/Biological Products (ICH, March 1999).
6. FDA, Quality Metrics.
7. FDA, Process Validation: General Principles and Practices (FDA, January 2011).