How can you be sure only authorized users are entering data into your system? Is your electronic signature yours alone? Are you sure operators can't invalidate your data? Is your company in compliance with FDA data security regulations? The second article in our series on 21 CFR Part 11 will help you answer these questions.
Although the overall scope of 21 CFR Part 11 has been narrowed and FDA announced enforcement discretion for certain requirements, most technical controls mandated by the original rule remain unchanged. Limiting system access to authorized personnel continues to be a strong requirement for compliance with Part 11. The implementation decisions must be based on a documented risk analysis. Whether Part 11 applies to a certain system or record depends on the predicate rules (such as cGMP, GLP, and GCP) and business practices.
Table 1: Mandatory technical controls for Part 11 compliant systems and examples of resulting user requirements
There are two aspects to limiting access to a computer system: physical and logical security. Physical security requires access controls to facilities, which are standard in regulated environments and accredited laboratory facilities. It is virtually impossible for an unauthorized individual to walk into a pharmaceutical manufacturing or drug development site. In contrast, logical security minimizes the chances that someone who can reach a protected area of the system can inspect or manipulate records without authorization. Dedicated logical security mechanisms built into data management systems are required to minimize the possibilities of accidental or intentional misuse, error, or fraud. After 21 CFR Part 11 became effective in 1997, FDA started citing logical security violations as unacceptable risks to product quality:
"Failure to establish and maintain procedures to control all documents that are required by 21 CFR 820.40, and failure to use authority checks to ensure that only authorized individuals can use the system and alter records, as required by 21 CFR 11.10(g). For example, engineering drawings for manufacturing equipment and devices are stored in AutoCAD form on a desktop computer. The storage device was not protected from unauthorized access and modification of the drawings."1
Secure access to information systems is of utmost concern for IT personnel in charge of the implementation, administration, management, and maintenance of those systems. Modern operating systems support many security aspects but require knowledgeable and careful management and configuration. Without appropriate service packs, configuration settings, and carefully designed security and password policies, operating environments are vulnerable.
Generally, access security in most modern IT systems is governed through a user account management system. System administrators assign every authorized user an appropriate login — typically, a user name or ID and a password. If a user ID is combined with a password known only to that individual, the combination is unique and can be used as the equivalent of a handwritten signature — provided the company has an appropriate policy and has certified to FDA that its electronic signatures are intended to be the legally binding equivalent of handwritten signatures, as required by Part 11.
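The core of such an account system is that the password is never stored in clear text, so the user ID/password combination is known solely to its owner. A minimal sketch in Python, assuming salted password hashing with PBKDF2 (function names and the iteration count are illustrative; production systems should delegate authentication to the operating system or a directory service):

```python
import hashlib
import hmac
import os

# Sketch: store only a salted hash of the password, never the password
# itself, so the user ID/password pair is usable solely by its owner.
def make_credential(password: str):
    """Create a (salt, digest) pair for a new password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Because only the salted digest is stored, even an administrator who can read the account database cannot recover or reuse the password, which supports its role as a signature component.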
In most professionally managed IT environments today, policies and conventions, along with the built-in functionality of the operating systems, ensure the uniqueness of user ID and password combinations. Furthermore, operating systems such as Windows 2000 offer application programming interface (API) functions, allowing application programmers to directly leverage the user authentication and access security functions of the underlying infrastructure, rather than duplicating such mechanisms.
When implementing access security, the data management system ideally should provide the capability to reuse existing operating system security mechanisms. This prevents the multiplication of effort involved in administering users and their access rights within and outside the regulated environments. Even in analytical laboratories, corporations frequently need to manage distributed teams collaborating across sites, cities, or even continents.
Operating systems typically employ access control lists (ACL) or permissions to grant or prohibit access to specific records on a per-user basis. With appropriately configured permissions, users can modify their own records but, at most, only read the records of other users.
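The per-user permission model can be sketched as a simple in-memory ACL, here in Python (record names, user names, and the permission labels are illustrative, not any particular operating system's API):

```python
# Minimal sketch of an access control list: each record maps user
# names to the set of permissions granted to that user.
READ, WRITE = "read", "write"

ACL = {
    "chromatogram_001": {
        "alice": {READ, WRITE},  # owner may read and modify
        "bob": {READ},           # other users may, at most, read
    },
}

def is_allowed(record: str, user: str, permission: str) -> bool:
    """Return True if the ACL grants `permission` on `record` to `user`."""
    return permission in ACL.get(record, {}).get(user, set())
```

The default is deny: a user or record absent from the list has no permissions at all, which mirrors how operating-system ACLs fail safe.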
Role-based and object-based security settings can be used to manage access and confidentiality of records. Role-based security schemes manage access rights of all application users based on their respective job roles, responsibilities and training. Examples of such roles are "chemist," "senior analyst," "study director," "technician," or "system manager."
Role-based security schemes are especially practical for structured environments where the distribution of work is strictly defined and most tasks are routine with predictable results, as is the case of manufacturing or quality control environments. A technician may be assigned to prepare instruments for analysis (for example, equilibrate a column or calibrate a detector), schedule sequence analyses on an instrument, and conduct a first-pass review of the data. A senior scientist may be assigned to develop new methods, implement custom calculations, and sign off on the second-pass review of the results.
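The technician/senior-scientist division of labor above can be sketched as a role-based scheme in which permissions attach to roles and users acquire them only through role assignment (all role, user, and task names are illustrative):

```python
# Sketch of role-based access control: tasks are granted to roles,
# and a user may perform a task only via his or her assigned role.
ROLE_PERMISSIONS = {
    "technician": {"prepare_instrument", "schedule_sequence",
                   "first_pass_review"},
    "senior_scientist": {"develop_method", "custom_calculation",
                         "second_pass_signoff"},
}

USER_ROLES = {"jsmith": "technician", "mdoe": "senior_scientist"}

def may_perform(user: str, task: str) -> bool:
    """Check a task against the permission set of the user's role."""
    role = USER_ROLES.get(user)
    return task in ROLE_PERMISSIONS.get(role, set())
```

Changing a person's duties then requires only reassigning the role, not editing per-user permission lists, which is what makes the scheme practical in structured environments.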
Figure 1: Access security combined with automatic audit trail and electronic sign-off
Object-based security governs which users (or user groups) can access specific "objects" managed by the system. For instance, data from project A may be modifiable only by users from department A but readable by some users in department B. Role-based and object-based security schemes can then be combined with automated audit trail and electronic sign-off functions (see Figure 1). Configurable, role-based access security allows the organization to decide which tasks are permissible according to users' job roles. The tasks that require an electronic signature should be configurable according to the organization's policies and business practices, not dictated by the data system supplier.
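The combination of access checks, automatic audit trail, and configurable electronic sign-off might look like the following sketch (the audit-record fields and the set of signature-requiring tasks are assumptions standing in for site policy, not a vendor implementation):

```python
from datetime import datetime, timezone

AUDIT_TRAIL = []
# Which tasks demand an e-signature is a site-configurable policy,
# not something hard-coded by the data system supplier.
SIGNATURE_TASKS = {"second_pass_signoff"}

def perform(user: str, role_ok: bool, object_ok: bool,
            task: str, record: str) -> bool:
    """Combine role- and object-based checks; log every attempt."""
    allowed = role_ok and object_ok
    AUDIT_TRAIL.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "task": task,
        "record": record,
        "allowed": allowed,
        "signature_required": allowed and task in SIGNATURE_TASKS,
    })
    return allowed
```

Note that denied attempts are logged as well; an audit trail that records only successful actions would hide exactly the events an investigator needs to see.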
As the records managed by a data system (raw data, results, and the metadata that transforms the former into the latter) have intrinsic dependencies, it is impossible to control and manage them from outside the application. Therefore, a data system itself should manage the integrity and security of its records, using its internal knowledge of how the individual pieces are linked. Otherwise, the integrity of results and raw data has to be manually "emulated" by a knowledgeable system manager (for example, by relying on a file server and trying to set file permissions appropriately).
Solutions that are merely based on standard file server functionality can be very dependent on manual or semi-manual data organization and therefore more susceptible to human errors than integrated ones.
Password Policies
Sharing passwords is a common violation of Part 11 requirements. An FDA warning letter describes a case in which an employee's user ID and password were publicly posted for other employees to use to access the data management system.
"During the injection another employee who did not have the established user name or password was observed obtaining access to the Data Management system utilizing the posted user name and password. Three previous employees, who had terminated employment in 1997 and 1998, still had access to critical and limited Data Management System functions on March 18, 1999."2
Authentication and confidentiality of passwords are void when they are shared between individuals. So, how can passwords be kept secure? In sections 11.200 (Identification mechanisms and controls) and 11.300 (Controls for identification codes/passwords), 21 CFR Part 11 states the following requirements for identification mechanisms used for the execution of an electronic signature: identification mechanisms shall be "used only by their genuine owners" and need to be "administered and executed to ensure that attempted use of an individual's electronic signature by anyone other than its genuine owner requires collaboration of two or more individuals."
One common problem with secure passwords is that they can be hard to remember. If they were easy to remember, they could be guessed by another person or a password cracking program. In the early days of secure operating systems, system administrators worked out password policies. Sometimes the policies resulted in passwords that were so secure that ordinary users had to write them down in order to remember them. In practice, a trade-off is necessary to protect an individual's password from external access while remaining at least minimally convenient for the user.
Figure 2: Password policy settings in Windows 2000. The password policy defines rules for password strength and password aging.
Modern operating systems support security and password policies. A security policy governs system behaviors such as what security events are captured in the operating system's event log and the guidelines for locking a user account as a result of invalid login attempts. A password policy establishes criteria for creating new passwords (see Figure 2). Managing these settings is typically a system administration task fulfilled by corporate IT departments. Unfortunately, many applications do not leverage the tremendously important work already done on the operating system level.
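The kinds of rules such a policy enforces can be sketched as a simple validator; the specific thresholds below (minimum length, three of four character classes, a five-password history) are illustrative assumptions, not the Windows 2000 defaults:

```python
import re

# Sketch of a password policy check mirroring typical OS settings:
# minimum length, character-class complexity, and no recent reuse.
MIN_LENGTH = 8
HISTORY_DEPTH = 5  # how many previous passwords may not be reused

def meets_policy(password: str, previous_passwords: list) -> bool:
    if len(password) < MIN_LENGTH:
        return False
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    if sum(bool(re.search(c, password)) for c in classes) < 3:
        return False  # require at least three character classes
    if password in previous_passwords[-HISTORY_DEPTH:]:
        return False  # password history: no recent reuse
    return True
```

Because the operating system can already enforce all of this centrally, an application that performs its own authentication must re-implement and re-validate the same rules — one reason to reuse the OS mechanisms instead.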
It is quite common for personnel to struggle to remember more than a dozen different accounts with separate logins and passwords for the various systems they must use: operating system, email, LIMS, CDS, ERP, requirements management system, software defect tracking database, and others. One might argue that centralizing system authentication is like putting all your eggs into one basket: if the login and password combination is compromised, it may be compromised for all systems. This is particularly true if the same login and password combination is used to access websites and web-based applications and if credentials are passed through a non-secure connection. However, many users will attempt to use the same login name and possibly the same (or similar) passwords on all their systems anyway. Few individuals will invent their own algorithms to generate unique passwords for every application and website they use.
Managing separate user accounts and passwords — one for the operating system and one for the data system — can make the maintenance of Part 11-compliant systems even more difficult. Solutions that directly tie into the operating system security scheme appear to be the most pragmatic.
In order to comply with section 11.10, it is necessary to implement access restrictions. However, it is not sufficient to merely restrict system access to a group of individuals without differentiating their responsibilities or knowledge. Users could inadvertently modify system settings in a way that affects the integrity or security of the records. This is particularly true for system administration settings. It is clear that system administration functions should be subject to written policies or other behavioral controls and should only be assigned to a limited number of users. In comment 83 of the rule, FDA explains:
"System access control is a basic security function because system integrity may be impeached even if the electronic records themselves are not directly accessed. For example, someone could access a system and change password requirements or otherwise override important security measures, enabling individuals to alter electronic records or read information that they were not authorized to see."
Part 11 uses the term "authority check." This does not mean that a system administrator must assign the access rights individually to each user. Organizations do not have to embed a list of authorized signers in every record to perform authority checks. For example, a record may be linked to an authority code that identifies the title or organizational unit of people who may sign the record. Thus, employees who have that corresponding code, or belong to that unit, would be able to sign the record.3
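The authority-code approach described above can be sketched directly: a record carries a code naming who may sign it, and the check compares that code against the codes a user holds (all record names, codes, and users are illustrative):

```python
# Sketch of an "authority check": the record is linked to an authority
# code identifying a title or organizational unit, not to an embedded
# list of individual signers.
RECORD_AUTHORITY = {"batch_record_42": "QC_SUPERVISOR"}

USER_AUTHORITY_CODES = {
    "mdoe": {"QC_SUPERVISOR"},
    "jsmith": {"QC_ANALYST"},
}

def may_sign(user: str, record: str) -> bool:
    """A user may sign if he or she holds the record's authority code."""
    required = RECORD_AUTHORITY.get(record)
    return required is not None and required in USER_AUTHORITY_CODES.get(user, set())
```

Promoting an employee into the supervisor unit then automatically extends signing authority to all records carrying that code, with no per-record maintenance.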
Despite all the technical controls discussed so far, there is a potential loophole: A user could execute actions on electronic records using the credentials of another user, either accidentally or intentionally. This could occur when the first user inadvertently leaves his or her computer session "open" during an interruption of the current task. Measures to reduce the likelihood of someone repudiating an electronic signature "as not his or her own" are described in comment 124 of the rule.
"The agency believes that, in such situations, it is vital to have stringent controls in place to prevent the impersonation. Such controls include: (1) Requiring an individual to remain in close proximity to the workstation throughout the signing session; (2) use of automatic inactivity disconnect measures that would "de-log" the first individual if no entries or actions were taken within a fixed short timeframe; and (3) requiring that the single component needed for subsequent signings be known to, and usable only by, the authorized individual."3
Figure 3: Windows 2000 audit policy defining that unsuccessful login events be tracked in the Windows Event Viewer
Measures against impersonation should be stated in the specifications for data systems. State-of-the-art implementations use a session-specific inactivity timeout in addition to the password-protected screensaver available in Windows. Session-specific timeouts will even support shared use of the same desktop computer by different users (a common model in shift-mode operations) because each session can run under the credentials of the individual user and time out independently. This specific approach has been successfully used in implementations of Cerity for Pharmaceutical QA/QC. In this particular example, the "unlock session" function requires re-authentication of the original user who locked the session. The Windows security subsystem is used to perform this re-authentication, which means the operating system's security policy settings also apply to the unlock session screen. If someone tries to unlock the wrong session, they cause the same administrative alert as a failed login attempt. Figure 3 illustrates an example security policy setting in Windows 2000. Figure 4 shows how an invalid password entered in the login or session unlock screen triggers an appropriate audit event in the Windows 2000 security event viewer. Furthermore, if configured, a series of invalid login attempts can actually disable the account.
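A session-specific timeout with re-authentication on unlock could be sketched as follows; this is a simplified model, not the Cerity or Windows implementation, and the timeout value, class, and log format are assumptions:

```python
import time

SECURITY_LOG = []      # stand-in for the OS security event log
TIMEOUT_SECONDS = 300  # "fixed short timeframe" per comment 124

class Session:
    """Sketch of a per-user session that locks itself after inactivity."""

    def __init__(self, user: str):
        self.user = user
        self.last_activity = time.monotonic()
        self.locked = False

    def touch(self):
        """Record user activity, resetting the inactivity clock."""
        self.last_activity = time.monotonic()

    def check_timeout(self) -> bool:
        """Lock the session if it has been idle too long."""
        if time.monotonic() - self.last_activity > TIMEOUT_SECONDS:
            self.locked = True
        return self.locked

    def unlock(self, user: str, authenticate) -> bool:
        """Only the original user may unlock; failures are logged
        like failed login attempts."""
        if user == self.user and authenticate(user):
            self.locked = False
            self.touch()
            return True
        SECURITY_LOG.append(("failed_unlock", user))
        return False
```

Because each `Session` keeps its own clock, several users can share one workstation in shift operation and each session times out and re-authenticates independently.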
Figure 4: Security event displayed in Windows 2000 event viewer after an unsuccessful login attempt into the Cerity application
The main steps that should be considered and evaluated to ensure access security in accordance with 21 CFR Part 11 are summarized below:
1. FDA. Compliance policy guide: 21 CFR Part 11; electronic records, electronic signatures (CPG 7153.17). [Revoked 2003 Feb 25.]
2. F-D-C Reports. The Gold Sheet 33(7).
3. FDA. Code of Federal Regulations, Title 21, Part 11 electronic records; electronic signatures; final rule. Federal Register 1997; 62(54):13429-13466.