Making Design Validation Effective

Published in: BioPharm International, Volume 2005 Supplement, Issue 1 (March 10, 2005), pages 40–45.

The purpose of design validation is to demonstrate that a product performs as intended. The usual route to this goal is showing that every item on the specification has been achieved, but it is not an easy path. The specification itself can create difficulty if it includes statements like "as long as possible" or the real horror "to be decided." Verification tests can reveal so many problems that the design must change to such an extent that earlier tests are no longer relevant. And there is also the practical difficulty of obtaining sufficient samples to test when the manufacturing engineers have not completed their standard operating procedures, the product design is not fixed yet, the component suppliers are late, and the marketing department has taken all the samples to show to prospective customers.

Design validation is not just a test tacked on to the end of development. It is most successful when it is an integral part of an effective design and development process. Thorough design validation combines effective testing with a well-planned development strategy. As with other types of validation, design validation is associated with jargon and technical terms that have different meanings for different people. This introduction explores general principles that must be adapted to the needs of a company, product, or team.

Product Specification

The product specification is the foundation of design validation. It is vital that it is clear and well-structured since the validation must show that everything it contains has been achieved. A hierarchical, top-down specification that begins with the needs of the users and ends with process tolerances is recommended.

Benefits to the user. The specification should contain a statement that the product delivers the right amount of drug in the right form to the right place. It also often includes statements about ease of use, environmental conditions, labeling, and cleaning. The critical issue is quantifying these statements; otherwise it is not possible to validate them. For example, "easy to use" is no help at all. Strict validation requires that customers (or at least representative people) attempt to use the product and their success and opinions are documented.

Performance. Product performance specifications begin to convert the user needs into engineering values. For example, specifications for a self-injection device would include the depth of penetration of a needle, the toughness and hardness of the skin, and the delivery time for the drug.

Table 1. Design Validation and the Development Process

Reliability. Reliability is a huge topic in its own right and a difficult one. A product that is used over a long period will have an expected lifetime, a failure rate during use, and a failure proportion for early-life defects. A one-shot device is characterized by a success probability after a given storage time. Environmental conditions for use, storage, and transport affect reliability. Some products can use the "Martini specification" — any time, any place, anywhere.
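These reliability figures only become testable once they are stated numerically. The short sketch below is purely illustrative and not taken from the article: it assumes a constant failure rate (exponential model) for a multi-use device and a simple per-month degradation probability for a one-shot device, with invented parameter values.

import math

# Multi-use device: assume a constant failure rate (exponential model).
# failure_rate is failures per hour of use; both values are illustrative.
failure_rate = 1e-5          # 1 failure per 100,000 hours
expected_lifetime = 1 / failure_rate
print(f"Expected lifetime: {expected_lifetime:,.0f} hours")

# Probability the device survives a 2,000-hour service life.
service_life = 2000
p_survive = math.exp(-failure_rate * service_life)
print(f"P(survives {service_life} h) = {p_survive:.4f}")

# One-shot device: success probability after storage, assuming a small,
# constant degradation probability per month of storage (again illustrative).
monthly_degradation = 0.0005
storage_months = 24
p_success = (1 - monthly_degradation) ** storage_months
print(f"P(fires correctly after {storage_months} months) = {p_success:.4f}")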

How it works. This part of the specification describes the product's specific characteristics and defines the engineering parameters that ensure the product meets its performance specification. During the design and development process, this part of the specification grows to include details of critical components, dimensional tolerances, and process conditions.

The Design Validation Process

Prevention of problems. Failure modes and effects analysis (FMEA) is a standard tool for risk assessment. It should be used early in the development process at the system level to try to foresee problems such as those a user might experience.

FMEA should also be used during development to anticipate design errors that could have serious consequences but which are unlikely to be discovered before manufacture commences. Critical factors could be the choice of plastic for a component, corrosion resistance of a spring, or estimates of mechanical loads on a specific part.
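FMEA is normally documented as a worksheet. One common convention, assumed here rather than stated in the article, is to score each failure mode for severity, occurrence, and detection and multiply the scores into a risk priority number (RPN) used to rank mitigation work. The failure modes and scores below are invented placeholders.

# Minimal FMEA worksheet: each failure mode scored 1-10 for severity (S),
# occurrence (O), and detection (D); RPN = S * O * D is one common way to
# rank which design risks need mitigation first. Entries are illustrative.
failure_modes = [
    # (failure mode,                      S,  O, D)
    ("plastic lever cracks under load",  8,  3, 4),
    ("spring corrodes in humid storage", 7,  2, 6),
    ("user misreads dose scale",         9,  4, 3),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)

for mode, s, o, d in ranked:
    rpn = s * o * d
    flag = "  <- mitigate" if rpn >= 100 else ""
    print(f"RPN {rpn:4d}  {mode}{flag}")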

Development trials. Exploratory development trials used to improve a designer's understanding of how the product works are an essential part of the development process but have no value for validation. However, development trials that follow a protocol and are written up in a report may save work later. When early drafts of a specification include phrases like "to be decided," feasibility trials may show what performance levels can be expected — for example, the lifetime of a hinge, the number of operations before cleaning is needed, and the fluid pressure at the tip of a needle.

Design reviews. Formal design reviews are conducted by the design team with assistance from others who can bring a fresh view and challenge assumptions. One meeting may be enough for a simple product, but reviews for mechanical, electrical, software, and system design can take many days. The design review considers the FMEA, development trials, design calculations, and decisions that have been made, all with a particularly critical eye.

Verification tests. These are the principal element of design validation. Some of these tests are performed on complete product, but others may be done on components or sub-assemblies. This is especially relevant to long-term reliability where a pump or motor may be tested in isolation to demonstrate that its lifetime is sufficient. The tests should include the effects of variation on performance, including variation which comes from tolerances in manufacture and from the product's environment. The variation can be allowed to occur naturally by using many people as test subjects, or it may be simulated by deliberate control of key characteristics, such as the viscosity of a drug or the storage temperature.
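One way to explore the effect of variation before running physical tests is a Monte Carlo tolerance study. The sketch below is a simplified illustration, not a method from the article: it assumes delivery time depends on drug viscosity and a critical orifice diameter that carries a manufacturing tolerance, with a made-up relationship and made-up numbers.

import random

# Illustrative Monte Carlo tolerance study: how much does delivery time
# vary when drug viscosity and a critical orifice diameter vary within
# their expected ranges? The model and numbers are invented placeholders.
random.seed(0)

NOMINAL_DIAMETER_MM = 0.30     # drawing nominal
DIAMETER_SD_MM = 0.01          # manufacturing variation (treated as std dev)
VISCOSITY_MEAN = 1.2           # relative viscosity of the drug
VISCOSITY_SD = 0.15            # batch-to-batch and temperature variation

def delivery_time(diameter_mm: float, viscosity: float) -> float:
    """Toy model: flow scales roughly with diameter^4 (Poiseuille-like),
    so delivery time scales with viscosity / diameter^4."""
    return 5.0 * viscosity * (NOMINAL_DIAMETER_MM / diameter_mm) ** 4

times = []
for _ in range(10_000):
    d = random.gauss(NOMINAL_DIAMETER_MM, DIAMETER_SD_MM)
    v = random.gauss(VISCOSITY_MEAN, VISCOSITY_SD)
    times.append(delivery_time(d, v))

times.sort()
print(f"median delivery time : {times[len(times)//2]:.2f} s")
print(f"95th percentile      : {times[int(0.95*len(times))]:.2f} s")
print(f"worst of 10,000 runs : {times[-1]:.2f} s  (compare with spec limit)")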

If development tests have already established that the product's performance is satisfactory, it may be possible to write a simple "substantial equivalence report." This justifies using the results in lieu of a verification test. A substantial equivalence report can be used if the protocol for the verification test would be little different from the development tests and if the design has not changed significantly.

The test protocols and pass limits are approved before the tests commence. Any deviations from the protocol must be agreed upon, and a list of discrepancies or failures must be maintained. If the development work was thorough, there should be few discrepancies; the purpose of the tests is to confirm that performance is within specification.
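Pass limits for attribute-type tests are often tied to a sample-size calculation agreed in the protocol. A common rule of thumb, assumed here rather than taken from the article, is the "success-run" formula: to demonstrate reliability R at confidence C with zero failures, test n = ln(1 - C) / ln(R) units. The targets below are illustrative.

import math

def success_run_sample_size(reliability: float, confidence: float) -> int:
    """Units required for a zero-failure (success-run) demonstration that
    reliability >= `reliability` at the stated confidence level."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Illustrative targets a protocol might state up front.
for r, c in [(0.95, 0.90), (0.99, 0.90), (0.99, 0.95)]:
    n = success_run_sample_size(r, c)
    print(f"R = {r:.2f}, C = {c:.2f}  ->  test {n} units with zero failures")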

Discrepancies can be resolved with design changes, but this must be accompanied by an analysis showing how other elements of the performance specification might be affected. In many cases, this analysis must be followed by repeating some verification tests. If the number of discrepancies and design changes increases, it may be necessary to redefine the verification tests as development trials and commence a new verification.

Some discrepancies can be resolved by simply amending the specification. In theory this should not happen if a top-down approach was taken in creating the specification. In reality, part of the performance specification often is written after feasibility studies, which were performed under ideal conditions on a laboratory bench. A test unit using production parts under "worst case" conditions can fail if the effects of the tolerances are not considered.

A third way to resolve discrepancies is identifying assignable causes of failure — the protocol was not followed, there was a power failure, a test lead broke. However, there is a risk that these causes become a series of excuses that are applied until a passing result is obtained.

Validation tests. Validation tests are performed when it is not possible to objectively measure performance. They are applied, for example, to the ease of assembly of a device, the legibility of labels, and the instruction manual. A team of about 10 people is selected with the requirement that they are representative of the intended users and are not familiar with the product. This excludes the design team and many engineers and managers in the quality, production, and marketing departments. The team follows the validation protocols and their subjective assessments are recorded and compared with the pass limits. The criteria for deviations and discrepancies that apply to verification tests also apply to validation.
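Even subjective assessments end up compared against a numeric pass limit. As a simple illustration (the acceptance criterion and task results below are assumed, not taken from the article), a protocol might require that at least 9 of 10 representative users complete each task without assistance.

# Illustrative tally of a usability validation session: each task must be
# completed unaided by at least PASS_LIMIT of the N_USERS recruited.
N_USERS = 10
PASS_LIMIT = 9   # assumed acceptance criterion from the protocol

# Number of users who completed each task without assistance (made-up data).
results = {
    "remove cap and attach needle": 10,
    "set the prescribed dose":       9,
    "read the dose window":          8,
}

for task, successes in results.items():
    verdict = "PASS" if successes >= PASS_LIMIT else "FAIL - log discrepancy"
    print(f"{task:32s} {successes:2d}/{N_USERS}  {verdict}")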

Post-market surveillance. Information from real users also must be collected, but it is not part of the formal design validation for most products. (Clinical trials are a different matter entirely.) Questionnaires and interviews provide the best feedback about ease of use and operability and are used to confirm that the verification and validation tests were an effective surrogate for actual use.

With some products, post-market surveillance includes reclaiming and testing units from users. These tests reveal early information about potential reliability problems that could have been missed in laboratory trials. Post-market surveillance is not a required part of design validation, but it reduces commercial risks and engineering costs.

The Development Process

Product development can be divided into stages: concept feasibility, prototype development, pre-production, and pilot production. The time and relative effort expended at each stage depend on the type of product, but the design proceeds in parallel with the development of the manufacturing processes. In particular, some components, such as plastic injection moldings, often force the alignment of product, process, and validation schedules. Table 1 shows how design and process validation fit together.

Development trials may be part of the design validation, especially for reliability and when defining certain aspects of the performance specification. This is the time to finish writing the validation master plan and to perform the FMEA.

Development samples are used to check performance and set up assembly processes. These differ from the previous prototypes in that molded components are available and allow more realistic trials to be performed.

A pre-production batch is assembled from "final" components by operators under the guidance of engineers. Verification tests at this stage include the effects of natural variation from component tolerances and differences between operators. The batch is also used to complete design validation tests. After testing, the units often are used as demonstration samples, but they should not be used by customers as they were not produced using a fully validated process.

Later pre-production and pilot production batches can be sold and used by customers. Design validation is complete by this stage, but process validation is still underway. Final release tests may be needed if the units go to customers.

Components and Parameters

The various types of components and manufacturing processes must be treated differently at each stage of development.

Different numbers of units are needed at each stage for a typical assembled product. A simpler product, such as a skin patch, could have many more samples at each stage. During early development there will be many components and sub-assemblies but often only one complete unit.

It typically takes several months before injection-molded parts are available in volume from a validated process. Hand-crafted samples or computer-made prototypes can be used in the beginning, followed by initial samples from a tool.

Mechanical parts include the clips, springs, and fixtures that are either custom built or standard, off-the-shelf items. Many of them do not change during design and development, and the initial parts are "substantially equivalent" to the final design.

Most electronic designs have standard components on a printed circuit board. As with mechanical parts, the initial design is often similar to the final version.

Software embedded in a product may be simple to change, but verification tests are long and need to be repeated each time, since changes to complex logic paths can have unexpected consequences.

Engineers who assemble the first prototypes know what they are doing, but the variation that arises from operators following their instructions can contribute to poor performance and must be considered in the design validation.

In the early stages, the product is tested by engineers using laboratory equipment. Units used in design verification trials should be tested using production equipment, especially if the equipment is used to adjust or calibrate the product and can affect performance. The equipment itself does not need to be validated at this stage. Typically, process engineers will not validate the tests until the design is complete and they have sufficient units to test.

Final release tests are needed if there is insufficient data to show that the standard production tests can guarantee the performance of the product. This occurs when the standard tests measure a feature of the product, such as a running speed, pressure, or force, that predicts the amount of drug delivered, the delivery time, or other important factors. Development trials may have demonstrated a strong relationship, but variation in the product or the test may weaken the link. These tests show the form of the relationship and protect customers who receive early production units. As with production tests, this is not part of design validation, but it does show how design and process cannot always be cleanly separated.
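Whether a production test can stand in for a final release test comes down to how well the measured feature predicts the clinically important quantity. The sketch below is an illustrative check with invented data: it fits a least-squares line between a measured plunger force and the delivered dose and reports the residual scatter, which is what decides whether the surrogate measurement is good enough.

# Illustrative check of a surrogate production test: does measured plunger
# force predict delivered dose tightly enough to release on force alone?
# Data points are invented for illustration.
force = [10.1, 10.4, 10.8, 11.0, 11.3, 11.7, 12.0, 12.4]   # N
dose  = [0.98, 1.00, 1.01, 1.02, 1.03, 1.05, 1.06, 1.08]   # mL

n = len(force)
mean_f = sum(force) / n
mean_d = sum(dose) / n

# Least-squares slope and intercept.
sxx = sum((f - mean_f) ** 2 for f in force)
sxy = sum((f - mean_f) * (d - mean_d) for f, d in zip(force, dose))
slope = sxy / sxx
intercept = mean_d - slope * mean_f

# Residual scatter around the fitted line.
residuals = [d - (intercept + slope * f) for f, d in zip(force, dose)]
rms_residual = (sum(r * r for r in residuals) / (n - 2)) ** 0.5

print(f"dose ~= {intercept:.3f} + {slope:.3f} * force")
print(f"residual scatter: {rms_residual:.4f} mL "
      "(compare with the dose tolerance before relying on force alone)")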

George R Bandurek, Ph.D., is principal of GRB Solutions Ltd, 9 Cissbury Road, Worthing, West Sussex BN14 9LD, England, 44.1903.215175, george@grb.co.uk.