Client/server data systems are proliferating in regulated laboratories and manage large amounts of critical data. It is obvious
that the operation and qualification of the network infrastructure must be an integral part of a company's validation strategy.
By their nature, networks are heterogeneous and comprise a variety of hardware components employing diverse communication
protocols. A change to a network component can affect other components and applications. Also, FDA is taking a closer look
at networks and has been citing companies for violations (see www.fdawarningletter.com).
Figure 1: Network topology of a distributed, networked chromatography data system
It has become clear that FDA is aware of CDS issues when operating within a network. A warning letter from FDA deals with
network programs with functions of a laboratory management system. The letter states:
"The network program lacked adequate validation and/or documentation controls. For example:
- System design documentation has not been maintained or updated throughout the software dating back to 1985 despite significant
changes and modifications that have taken place. These include program code, functional/structural design, diagrams, specifications,
and text descriptions of other programs that interface with [this program].
- Validation documentation failed to include complete and updated design documentation, and complete wiring/network diagrams
to identify all computers and devices connected to the system.
- The Quality Control Unit failed to ensure that adequate procedures were put in place to define and control computerized production
operations, equipment qualifications, document review and laboratory operations.
- The software validation documentation failed to adequately define, update, and control significant elements customized to
configure the system for specific needs of operation."7
It is clear from letters like this that compliance issues extend not only to the CDS itself but also to the network infrastructure
within which it operates. Many IT departments have applications to monitor the configuration, health, and status of the network
from a network operations center (NOC). These applications may extend from local workgroups throughout the entire enterprise.
Many of these applications provide graphical presentations of the network and are able to store configurations from a point
in time so that changes to the network may be monitored and documented as part of an overall network validation plan.
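The snapshot-and-compare approach described above can be sketched in a few lines. The following is a minimal illustration, not any particular NOC product's API: device names and configuration strings are hypothetical, and in practice the configurations would be pulled from the devices by a monitoring tool (e.g., via SNMP or SSH). The idea is simply to fingerprint each device's configuration at a point in time and later report what was added, removed, or changed.

```python
import hashlib
import json

def snapshot(devices):
    """Capture a point-in-time record of device configurations.

    `devices` maps a device name to its configuration text; here the
    data is supplied inline, but an NOC tool would collect it live.
    """
    return {name: hashlib.sha256(cfg.encode()).hexdigest()
            for name, cfg in devices.items()}

def diff_snapshots(baseline, current):
    """Report devices added, removed, or reconfigured since the baseline."""
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(n for n in set(baseline) & set(current)
                     if baseline[n] != current[n])
    return {"added": added, "removed": removed, "changed": changed}

# Hypothetical example: a switch gains a VLAN and a new PC appears.
baseline = snapshot({"core-switch": "vlan 10\nvlan 20",
                     "cds-server": "ip 10.0.0.5"})
current = snapshot({"core-switch": "vlan 10\nvlan 20\nvlan 30",
                    "cds-server": "ip 10.0.0.5",
                    "lab-pc-07": "ip 10.0.0.22"})
report = diff_snapshots(baseline, current)
print(json.dumps(report, indent=2))
```

A report like this, generated at each change, is the raw material for the retrospective change log that an overall network validation plan calls for.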
Additionally, personnel who are not GxP trained may have access to the network as part of their normal business responsibilities.
It is a paradox that the network infrastructure must be compliant, but many components (cabling, utilities, and other devices)
do not have validation plans. Networks require frequent changes, additions, and repairs, but can never be taken out of service.
A risk assessment, in combination with a sound risk management plan, can help address these problems.
A striking example of a computer network infrastructure failure made headlines in April 2003. A recently installed laboratory
computer system in a medical center became overloaded, resulting in a severe backlog of blood-testing samples.8
In such situations, several questions have to be asked:
- Did formal requirements include specifications for the anticipated load of the system?
- Was the system installed according to the supplier's specifications?
- Did it pass the test suite defined for the installation qualification (IQ) and the operational qualification (OQ)?
- Did performance qualification tests simulate the anticipated load of the networked system in terms of number of samples and
number of concurrent users in the context of the hospital's office and laboratory network?
- What measures were in place for the prevention and early detection of severe failures and performance bottlenecks?
- Could the bottleneck have been prevented through the use of a network monitoring system?
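The last two questions above amount to asking whether a simple monitoring rule could have caught the overload early. As a hedged sketch (the threshold, window size, and latency figures are invented for illustration), a monitor can track a rolling average of response times and raise a flag when a sustained slowdown, rather than a single spike, pushes the average past a specified limit:

```python
from collections import deque
from statistics import mean

class LatencyMonitor:
    """Flag a potential bottleneck when the rolling average response
    time of a monitored node exceeds a specified threshold."""

    def __init__(self, threshold_ms, window=5):
        self.threshold_ms = threshold_ms
        self.samples = deque(maxlen=window)  # keep only the last `window` readings

    def record(self, latency_ms):
        """Add a reading; return True if the rolling average breaches the limit."""
        self.samples.append(latency_ms)
        return mean(self.samples) > self.threshold_ms

monitor = LatencyMonitor(threshold_ms=200.0, window=3)
readings = [50, 60, 55, 400, 500, 650]  # sustained slowdown begins mid-run
alerts = [monitor.record(r) for r in readings]
print(alerts)  # one spike is tolerated; the sustained degradation trips the alarm
```

Averaging over a window is a deliberate choice: it suppresses alarms on isolated spikes while still detecting the kind of growing backlog described in the medical-center incident.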
But where to start and, more importantly, where to stop? How much qualification, documentation, and testing is enough? The
following points provide a good guide:
"It is not possible to test every possible branch point in operating, network, and application software used in a typical [system].
However, we can determine the level of quality of a subset of the software very accurately, by thoroughly testing the subset.
If rigorous and consistent development standards and methods are used, it has been observed that the quality level of the
subset is representative of the quality level of the entire software system."9
However, unexpected side effects frequently occur as components of a complex environment are changed:
"Validating networked systems not only requires qualifying individual networked components (for example, applications running
on each computer), but it also means qualifying authorized access to the networked system and qualifying the data transfer
between related computers, as in qualifying the interfaces of components at both sides. The whole system (i.e., including
the network) is validated by running typical day-by-day applications under normal and high load conditions and verifying correct
functions and performance against previously specified criteria."10
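The whole-system test described in this quotation can be outlined in code. The sketch below is generic and makes several stated assumptions: `process_sample` is a hypothetical stand-in for one CDS transaction, the sample counts and time budget are invented acceptance criteria, and a real performance qualification would drive the actual networked application rather than a local function. It nevertheless shows the shape of the test: run the anticipated number of samples with the anticipated number of concurrent users, then verify both correctness and performance against pre-specified criteria.

```python
import concurrent.futures
import time

def process_sample(sample_id):
    """Hypothetical stand-in for one CDS transaction
    (e.g., acquiring a run and storing the result)."""
    time.sleep(0.001)  # simulated processing time
    return f"result-{sample_id}"

def load_test(n_samples, n_concurrent_users, max_seconds):
    """Run n_samples transactions with n_concurrent_users workers and
    verify every transaction completes correctly within the time budget."""
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_concurrent_users) as pool:
        results = list(pool.map(process_sample, range(n_samples)))
    elapsed = time.perf_counter() - start
    correct = all(results[i] == f"result-{i}" for i in range(n_samples))
    return {"passed": correct and elapsed <= max_seconds,
            "completed": len(results),
            "elapsed_s": round(elapsed, 3)}

# Acceptance criteria (invented for illustration) are fixed before the run.
print(load_test(n_samples=100, n_concurrent_users=10, max_seconds=5.0))
```

The essential discipline is that the pass/fail criteria are specified in advance, so the test verifies the system against them rather than merely observing its behavior.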
Proper network administration and operation is an area that is subject to questioning by regulatory organizations. With this
in mind, it is important to capture a snapshot of the network during validation. Whenever a change is made, this snapshot
can be compared to the current configuration to ensure proper communication among the various nodes on the network. In addition,
a retrospective document can be maintained that tracks these changes over time. Network qualification is the next frontier
in computer systems validation. The Part 11 guidance helps focus qualification activities by basing them on documented risk
assessment. Qualification of network infrastructure should focus on the following tasks:
- Design qualification (DQ): evidence that the network is suitable for the applications (the design is fit for the intended use)
- Installation qualification (IQ): verification and documentation of the static network topology and inventory (evidence that
the implementation matches the design)
- Operation qualification (OQ): dynamic topology verification and capacity testing (evidence that the system operates properly
according to the vendor specifications)
- Performance qualification (PQ): maintain the qualification status, ensure continuous performance through ongoing monitoring
during use and measurement of performance over time, and minimize the risk of failure during operation.
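The IQ task above, verifying that the static topology and inventory match the design, reduces to a strict comparison between what is documented and what is actually found on the network. The sketch below is illustrative only: the device names are hypothetical, and in practice the `discovered` list would come from a discovery scan or NOC inventory export. The key property is that qualification fails on any discrepancy in either direction, whether an undocumented device or a documented one that is missing.

```python
def verify_inventory(documented, discovered):
    """IQ-style check: devices found on the network must match the
    documented topology exactly; any discrepancy fails qualification."""
    documented, discovered = set(documented), set(discovered)
    return {"undocumented": sorted(discovered - documented),
            "missing": sorted(documented - discovered),
            "qualified": documented == discovered}

# Hypothetical inventory: one documented PC is absent, one unknown PC appears.
documented = ["cds-server", "core-switch", "lab-pc-01", "lab-pc-02"]
discovered = ["cds-server", "core-switch", "lab-pc-01", "lab-pc-03"]
print(verify_inventory(documented, discovered))
```

Run periodically, the same check also supports the PQ goal of maintaining qualification status, since any drift from the documented inventory is surfaced immediately.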