The pharmaceutical industry's recent emphasis on continuous improvement, operational excellence, and process analytical technology has motivated us to evaluate the basic tenets of our approach to quality. Historically, the ability to ensure that a drug meets its intended form, fit, and function has been achieved through the quality infrastructure (standard operating procedures, policies, and specifications); qualification or validation (commissioning, installation qualification [IQ], operational qualification [OQ], performance qualification [PQ], and process validation); and testing (in-process and final release). Despite these processes, however, the number of drug recalls continues to rise, escalating from 176 in 1998 to 354 in 2002, according to the US Center for Drug Evaluation and Research.1
The use of regulations as a primary means of ensuring product quality began to decline in early 2000, when industry pushed back on FDA's Part 11 compliance requirements for electronic signatures and electronic data exchange, challenging the cost and effort associated with implementation, versus the actual benefit to product quality. Today, however, industry and regulatory agencies are moving toward a more scientific approach to ensuring product quality.
The International Conference on Harmonization (ICH) Q8 and Q9 guidance documents,2,3 for example, define a scientific approach to process characterization, advocating a quality-by-design framework. Risk management is an integral part of this approach.
Similarly, the US FDA's "GMPs for the Twenty-First Century" initiative focused on quality by design, risk management, continuous process improvement, and quality systems. Rolled out in 2004, this initiative challenged industry's traditional approaches to ensuring product quality by encouraging employees to look beyond traditional inspection methodologies for ensuring product performance. The early process and product characterization emphasized in the quality-by-design and risk-management approaches do not inherently conflict with validation. On the contrary, by deepening the level of scientific understanding of a manufacturing process, the approaches ensure that a process is well understood before it is considered "validated." Methods that involve continuous improvement and real-time control, however, do pose a significant question: Are these quality methods inconsistent with the basic tenets of validation that have served as the backbone of the industry's quality structure for so many years? Once you have "validated" a manufacturing process, how much can you improve it—through real-time control or any sort of continuous improvement step incorporated into Lean, Six Sigma, etc.—without having to file manufacturing supplements with FDA? How much of an impediment are those filing requirements?
The challenge of validation is that it has been viewed as a necessary evil—a regulatory activity that cannot be avoided when manufacturing regulated products. The effort and cost associated with validation continue to escalate as industry and regulatory groups increase their understanding of pharmaceutical processes and identify an increasing number of process variables that must be controlled. Biotech adds another layer of complexity by introducing the qualified pilot or intermediate-scale model as an integral component of the validation equation.4
The prohibitive cost of characterization studies at full scale requires us to establish clear, scientific arguments to show how process development studies relate to full-scale validation lots. The complexity of biotech processes demands an even higher level of scientific argument. As we increase our understanding of biopharmaceutical processing, the value associated with traditional validation diminishes, and industry responds accordingly.
The integration of equipment validation and process validation provided incentive to measure the capability of our processes and analytical methods. However, somewhere along the way, the incentive for validation shifted from a need to measure processes, to a need to satisfy a regulatory requirement as quickly and as cheaply as possible.
Over time, industry came to believe that validation had to include a broader range of equipment and processes and a greater level of detail, and validation costs rose accordingly. In response, the industry attempted to distribute the responsibility for validation among participants in the quality process. For example, when industry decided that validation had to encompass commissioning and the engineering precursor activities to equipment qualification, companies began requiring contractors and subcontractors to test and document various aspects of IQ. This push for greater vendor involvement also extended to factory acceptance tests. Such tests—which have ranged from simple vendor testing and certification to constructing simulator panels that mimic the actuation of automated components—have also varied widely in their true relevance to the validation process.
Market drivers completely unrelated to the field of validation often have determined the amount of effort put into validation. For example, when equity markets dried up in the late 1990s, emerging biotech companies shifted their emphasis from scientific investigation to bringing product to market as quickly as possible. The industry looked for cheaper and faster ways to push through the validation process to move programs forward quickly. The result was simpler process validation studies that focused on building three validation lots to demonstrate process predictability, rather than focusing on true process understanding. Likewise, companies began buying more equipment from suppliers who offered "canned" validation protocols that could be purchased and implemented, rather than developing their own protocols to challenge the equipment and thus increase the probability the equipment would meet the needs of the process. The implication of these shifts was that validation was necessary, but not essential to sound process development.
This short-cut approach to validation resulted in processes that were less stable at the commercial scale. FDA's recent revelations about high-profile, approved products that may be unsafe, such as Vioxx and Serevent, and Congress's pressure on industry to find ways to reduce the cost of drugs to the general public, have impacted both Big Pharma and biotech. In response, the industry has recognized the need for a better way to reduce process and product risk.
The answer was a shift to a more scientifically driven development approach, often referred to as "Operational Excellence," or "Process Excellence." This approach integrates process, quality, and business requirements to promote the science of development.
These quality initiatives integrate Six Sigma, Lean Manufacturing, Kepner-Tregoe, Theory of Constraints, Design of Experiments, and Balanced Scorecards to establish process understanding. These methodologies emphasize the need to objectively define, measure, and characterize critical variables that affect a process. While testing and data collection are integral components, verification is the final culmination of the quality assessment—not the basis of quality.
Looking closely at these approaches, however, reveals that they are based in large part upon an approach that has been integral to our quality systems for over 70 years—Walter Shewhart's cycle of Plan, Do, Check, Act (PDCA).
Walter Shewhart, an enterprising statistician who worked at Bell Laboratories in the US during the 1930s, developed the science of Statistical Process Control. An offshoot was the PDCA Cycle, often referred to as "the Shewhart Cycle." The tool was adopted and promoted from the 1950s onward by W. Edwards Deming, the renowned quality management authority, and as a result it also became known as "the Deming Wheel" (Figure 1).
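Shewhart's central idea can be sketched in a few lines: establish control limits at the baseline mean plus or minus three standard deviations, then flag new observations that fall outside them. The fill-weight data below are hypothetical, and this is a minimal sketch—real SPC individuals charts usually estimate sigma from the average moving range rather than the raw standard deviation.

```python
import statistics

def control_limits(baseline):
    """Shewhart individuals-chart limits: mean +/- 3 standard deviations.

    The baseline should come from a period when the process is believed
    to be in control; later points outside the limits signal
    special-cause variation.
    """
    mean = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return mean - 3 * sigma, mean + 3 * sigma

def special_causes(baseline, new_points):
    """Return the new points that fall outside the baseline control limits."""
    lcl, ucl = control_limits(baseline)
    return [x for x in new_points if not lcl <= x <= ucl]

# Hypothetical fill weights (g): a stable baseline, then routine monitoring.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.1, 10.0]
print(special_causes(baseline, [10.0, 12.5, 9.9]))  # flags the 12.5 point
```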
Figure 1. PDCA "The Shewhart Cycle"
The PDCA Cycle was the first tool broadly adopted as a framework for continuous improvement. PDCA is a four-step quality improvement cycle that promotes continuous improvement based on the method of design (plan), execution (do), analysis (check), and evaluation (act). Sometimes referred to as plan/do/study/act, the cycle emphasizes the constant attention and reaction to factors that affect quality.
The chief advantage of the PDCA cycle—flexibility in moving through each phase of the cycle—is also its biggest challenge, because it leaves the door open to subjectivity. Subjectivity has long been the downfall of our industry. Without a clear vision for success or a defined method of evaluation, the potential exists to rely on unscientific process development and characterization activities, which can lead to incorrect or incomplete conclusions. For example, univariate analysis methods—often called One-Factor-at-a-Time (OFAT) studies5—have been the backbone of the small-molecule pharmaceutical industry, as well as the biopharmaceutical industry. Such studies, however, do not possess the power to fully characterize a process. The result is a false sense of security that the process characteristics are understood.
An analogy would be that of trying to solve the popular "Rubik's Cube" puzzle. It may be relatively simple to get one side of the cube all one color, thus providing the impression of progress towards your goal. However, the reality is that you are actually further from success than when you started the exercise (Figure 2). Because of these limitations, other industries abandoned the OFAT approach 30 years ago, deeming it ineffective for process characterization and verification.
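The OFAT weakness is easiest to see with a hypothetical two-factor response that contains an interaction term. In the sketch below, the `recovery` function and its coefficients are invented for illustration: varying one coded factor at a time settles on a local best, while a four-run full factorial reveals a substantially better operating point.

```python
import itertools

def recovery(temp, ph):
    """Hypothetical protein recovery (%) with factors coded at -1/+1 levels.

    The +15*temp*ph interaction term means the best setting of one
    factor depends on the setting of the other.
    """
    return 60 + 5 * temp + 5 * ph + 15 * temp * ph

# OFAT: vary temperature at the baseline pH (-1), then pH at the "best" temp.
best_temp = max([-1, 1], key=lambda t: recovery(t, -1))
best_ph = max([-1, 1], key=lambda p: recovery(best_temp, p))
ofat_best = recovery(best_temp, best_ph)

# Full 2x2 factorial: evaluate all four corner combinations.
factorial_best = max(recovery(t, p)
                     for t, p in itertools.product([-1, 1], repeat=2))

print(ofat_best, factorial_best)  # OFAT stops at 65; the factorial finds 85
```

Because the interaction dominates, the OFAT path never visits the high-temperature/high-pH corner where recovery peaks—the Rubik's Cube situation described above.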
Figure 2. Cube Plot for Protein Recovery
The biopharmaceutical industry, too, has come to recognize that the OFAT approach is insufficient. The industry has also realized that to be successful in combining quality, technical, and business requirements in the drug development lifecycle, it must realign not only its scientific approach to process understanding, but also its thinking within the organization. As a result, Operational Excellence initiatives have moved to frameworks such as Six Sigma to provide a roadmap that can meet this need.
In 1986, Motorola established a framework designed to integrate quality, process, and business requirements into the product development lifecycle. Motorola recognized that variation is the death knell of any process, so the company set out to establish a methodology to identify and eliminate variation. They called this approach Six Sigma6 (Figure 3).
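The "six sigma" target is conventionally quoted as about 3.4 defects per million opportunities, which corresponds to the one-sided normal tail probability at 4.5 sigma once the customary 1.5-sigma allowance for long-term mean drift is applied. A short sketch of that arithmetic:

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities at a given sigma level.

    Uses the conventional 1.5-sigma long-term mean shift and the
    one-sided standard-normal tail probability P(Z > sigma_level - shift).
    """
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # standard normal upper tail
    return tail * 1_000_000

for level in (3, 4, 5, 6):
    print(level, round(dpmo(level), 1))
# A 3-sigma process yields roughly 66,807 DPMO; 6 sigma yields about 3.4.
```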
Figure 3. Six Sigma as an organizational development and leadership tool
In the late 1990s, CEOs Jack Welch of GE and Larry Bossidy of AlliedSignal adapted the Motorola model into a structured methodology called the DMAIC roadmap. DMAIC is an acronym for Define, Measure, Analyze, Improve, and Control. These are the five phases necessary to measure, characterize, and control a process (Figure 4).
Figure 4. The DMAIC Roadmap
Within each step of the roadmap, a defined set of tools is applied. Each phase in the DMAIC process is intended to guide the members of an improvement team through the project in a manner that provides relevant data and in-depth process understanding. The DMAIC project management approach allows businesses to make the best possible decisions with the available data and resources. The five steps of the DMAIC process are as follows:
1. Define: Clearly define the problem and relate it to the customer's needs (generally, with a cost benefit to the organization identified).
2. Measure: Measure what is key to the customer and know that the measurement is good.
3. Analyze: Search for and identify the most likely root causes.
4. Improve: Determine the root causes and establish methods to control them.
5. Control: Monitor and make sure the problem does not come back.
Within each DMAIC phase, there is a set of deliverables that must be completed to ensure all project requirements are met. A summary of the deliverables and typical activities for each phase of the DMAIC process is shown in Table 1.
Looking closely at the tools within the DMAIC methodology reveals elements that have been part of the quality toolkit since its inception. Cause and effect diagrams, Failure Mode and Effects Analysis (FMEA), and process capability analysis, among others, have been used broadly by process and quality engineers in multiple industries for years. What separates the DMAIC roadmap from the isolated application of these individual tools is the methodology around the application of the tools. In DMAIC, the process evaluation is based on the objective acquisition and analysis of data, in lieu of representative testing and inference.
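Of the tools named above, process capability analysis reduces to a simple calculation: Cpk compares the distance from the process mean to the nearer specification limit against three standard deviations. A minimal sketch, in which the assay data and the 90–110 specification limits are hypothetical:

```python
import statistics

def cpk(samples, lsl, usl):
    """Process capability index:

        Cpk = min(USL - mean, mean - LSL) / (3 * sigma)

    Cpk >= 1.33 is a common minimum expectation; a centered six-sigma
    process corresponds to Cpk = 2.0.
    """
    mean = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Hypothetical assay results (% label claim) against 90-110 spec limits.
data = [99.2, 100.5, 101.1, 98.7, 100.0, 99.8, 100.9, 99.5]
print(round(cpk(data, lsl=90, usl=110), 2))
```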
Although Six Sigma and the DMAIC toolkit focused on eliminating process variability, there still remained the need to bring products to market faster and more cheaply. As a result, the biopharmaceutical industry has turned to the principles of Lean Manufacturing to increase the efficiency of our processes. The ideas of Lean Manufacturing are based on the Toyota Production System approach of eliminating waste in every aspect of a company's operation. Lean focuses on time variability, in contrast to Six Sigma's focus on process variability. In their book Lean Thinking, Jim Womack and Daniel Jones7 recast the principles of Lean into five principles:
1. Value: Every company needs to understand the value customers place on their products and services. It is this value that determines how much money the customer is willing to pay for them. This analysis leads to a top-down, target-costing approach that has been used by Toyota and others for many years. Target costing focuses on what the customer is willing to pay for certain products, features, and services. From this, the required cost of these products and services can be determined. It is the company's job to eliminate waste and cost from the business processes so that the customer's price can be achieved at great profit to the company. In the biopharmaceutical and pharmaceutical world, value is often associated with quality and data, rather than with standard cost.
2. Value Stream: The value stream is the entire flow of a product's lifecycle, from the origin of the raw materials used to make the product through to the customer's cost of using, and ultimately disposing of, the product. Only by studying and obtaining a clear understanding of the value stream (including its value-added and waste) can a company truly understand the waste associated with the manufacture and delivery of a product or service.
3. Flow: One significant key to the elimination of waste is flow. If the value chain stops moving forward for any reason, then waste occurs. The trick is to create a value stream in which the product (or its raw materials, components, or sub-assemblies) never stops in the production process, because each aspect of production and delivery is in harmony with the other elements. Carefully designed flow across the entire value chain will minimize waste and increase value to the customer. Achieving this kind of flow is a challenge in our industry because many of our processes are batch processes. Even so, within the context of the total value stream, there are significant opportunities to move towards continuous flow.
4. Pull: A traditional Western manufacturer uses a style of production planning and control whereby production is "pushed" through the factory based upon a forecast and a schedule. A pull approach dictates that we do not make anything until the customer orders it. To achieve this requires great flexibility and very short cycle times of design, production, and delivery of the products and services. It also requires a mechanism for informing each step in the value chain what is required of them today, based on customers' needs.
5. Perfection: A lean manufacturer sets perfection as a target. The idea of total quality management is to systematically and continuously remove the root causes of poor quality from the production processes so that the plant and its products move toward perfection. This relentless pursuit of the perfect is the key attitude of an organization that is "going for lean."
Lean has been enthusiastically embraced by our industry because the tools are simple and improvement can be realized quickly. Although Lean is often initiated because of cost or efficiency reasons, there is another perspective to Lean that is often overlooked: quality.
Our industry should think of Lean as a quality initiative—not a business-driven one. While it is true that the basis for Lean is to eliminate waste and maximize the value-added activities of a process, another benefit of Lean is the way it simplifies and standardizes the process. The result is improved predictability. If you map the DMAIC and Lean tools together against the Shewhart PDCA Cycle, you find they follow the same framework; the tools within both toolkits are designed to address the same basic requirements of the PDCA cycle (Figure 5).
Figure 5. DMAIC and Lean tools deployed in the Shewhart Cycle
Validation, as applied by the biopharmaceutical industry today, may seem inconsistent with the principles of the Shewhart PDCA Cycle, DMAIC, and Lean. The basis of traditional validation is verification against predetermined acceptance criteria. However, if we divide the validation process into its components, there is more similarity than difference between validation and these improvement methods. The steps of the validation lifecycle map well to the Control, Measure, and Analyze phases of the DMAIC roadmap. What is missing is the Improve stage.
Table 1a. Summary of DMAIC phase deliverables (continued)
Six Sigma and Lean principles are predicated on the absolute requirement of demonstrating that the process is in control. By building on an efficient and objective framework for characterizing, measuring, and optimizing a process, it is possible to achieve a level of confidence that the process will be predictable and reproducible. No amount of testing will ever approach this level of confidence; heightened testing and large sampling can still only infer that the process is in control. (As many have said, you cannot test quality into the product.) The irony in applying validation to the PDCA model is that its efficacy is only as good as one's understanding of the key process input variables that steer the process. In the absence of this, validation degenerates to a paper exercise.
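The limits of inference by testing can be made concrete. If n units are tested and every one passes, an exact binomial calculation (whose approximate form is the well-known "rule of three," 3/n) still leaves a non-trivial 95% upper bound on the lot defect rate:

```python
def upper_bound_defect_rate(n, confidence=0.95):
    """Upper confidence bound on the defect rate after n units pass
    with zero defects.

    Solves (1 - p)^n = 1 - confidence exactly; for 95% confidence this
    is approximately 3/n (the "rule of three").
    """
    return 1 - (1 - confidence) ** (1 / n)

# Even 300 defect-free samples only bound the defect rate near 1%.
for n in (30, 300, 3000):
    print(n, round(upper_bound_defect_rate(n) * 100, 2), "%")
```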
Table 1b. DMAIC phase deliverables
The twenty-first century GMP initiative advocates the need for building process understanding throughout the process development lifecycle. Tools such as Six Sigma, DMAIC, and Lean Manufacturing provide a framework for objective characterization and analysis of a process's key parameters. This knowledge, coupled with a quality system framework for specification, in-process, and release testing, can significantly elevate the level of quality built into the final product or process. While at first glance validation might appear to be inconsistent with these improvement initiatives, the elements of the validation lifecycle map to the control, measure, and analysis phases of the PDCA lifecycle. The most effective application of validation is achieved by using these optimization tools in the process characterization and development phases of a process long before validation. Until characterization and evaluation frameworks are more fully integrated into the drug development lifecycle, validation will remain a costly and time-consuming exercise capable only of providing limited assurance of process and product stability.
Bikash Chatterjee is the chief operating officer of Pharmatech Associates, 1098 Foster City Blvd., Foster City, CA 94404; tel 650.227.0177; fax 650.227.0176; email@example.com
1. US Food and Drug Administration, Center for Drug Evaluation and Research, http://fda.gov/CDER.
2. International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use, ICH Harmonized Tripartite Guideline, Pharmaceutical Development Q8, November 2005.
3. International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use, ICH Harmonized Tripartite Guideline, Quality Risk Management Q9, November 2005.
4. Gibson M. Technology Transfer: An International Good Practice Guide for Pharmaceutical and Allied Industries. Illinois: DHI Publishing, 2005.
5. Schmidt SR, Launsby RG. Understanding Industrial Designed Experiments, 4th Ed. Colorado: Air Academy Press, 2000.
6. Breyfogle FW III. Implementing Six Sigma. New Jersey: Wiley and Sons, 1999.
7. Womack JP, Jones DT. Lean Thinking. New York: Simon & Schuster, 1996.