Characterizing Biologics Using Dynamic Imaging Particle Analysis

Published on: 
BioPharm International, Volume 2011 Supplement, Issue 5 (Aug. 2, 2011)

Overcoming limitations of volumetric techniques and detecting transparent particles.

The characterization of particulates in biologics is a relatively new concern that presents unique challenges compared with the characterization of particulates in classical (i.e., nonbiologic) injectable drug formulations. Dynamic imaging particle analysis is a new and increasingly popular method for characterizing these particulates. However, as with any instrumentation, interpreting the results properly requires understanding how dynamic-imaging instruments make their measurements and which factors affect those measurements. This article discusses three of the primary factors to understand when using dynamic imaging particle analysis as a particle-characterization technique for biologics: resolution, thresholding, and image quality.

ALL FIGURES ARE COURTESY OF THE AUTHOR

Characterization of subvisible particulates in parenterals has been a concern since 1936 and was formally addressed by US Pharmacopeia <788> in 1975 (1). At the time of its implementation, <788> was primarily concerned with foreign matter, such as pieces of rubber stoppers. The concern was largely mechanical, because these hard particles might not pass easily through the bloodstream (2). Biologics raise the same concerns, but they are also subject to protein aggregation, whereby small particles combine to form larger ones. Aggregated proteins are soft particles, so they may be able to pass through restrictions that would block hard particles, and because they are transparent, they can be more difficult to detect than opaque particles. Although the light-obscuration devices specified by <788> detect hard, opaque particles well, they do not always detect or properly characterize soft, transparent particles. This deficiency is well documented (2).

Further complicating the characterization of biologics, the aggregates can be amorphous, ranging in shape from strandlike to nearly circular. Because light-obscuration devices calculate size based on an assumption of spherical particles, their size measurements can be highly inaccurate for such shapes. Finally, because biologics frequently are delivered in prefilled syringes, the presence of silicone droplets can also inflate particle counts and lead to an overall mischaracterization of the biologic.

Recognizing these limitations of light-obscuration techniques, researchers have begun to look for alternative methods of characterizing subvisible particulates in biologics. One technique that has shown great promise is dynamic imaging particle analysis (2). Dynamic imaging particle analysis systems capture digital microscope images of particles in the biologic as they pass through a flow cell. Each captured particle can be measured using standard image-analysis algorithms. Unlike light-obscuration systems, imaging systems can make many different measurements, both morphological and spectral, on each particle, even when the particle is transparent. Together, these measurements provide a detailed description of each particle that includes its shape. This description enables automatic differentiation between particle types, such as protein aggregates and silicone droplets. A block diagram of a typical system is shown in Figure 1.

Figure 1: Block diagram of FlowCAM (Fluid Imaging Technologies) dynamic imaging particle-analysis system. LED is light-emitting diode.

To demonstrate how dynamic imaging particle analysis can characterize subvisible particulates in biologics, the author analyzed a sample protein-based therapeutic using the FlowCAM (Fluid Imaging Technologies) particle imaging system. Figure 2 shows the result of the analysis, including summary statistics and representative particle images. This particular sample contained a large amount of silicone droplets. To enable the system to differentiate between various types of particles automatically, one can build a digital filter to distinguish the silicone droplets from other particulates. The filter is created by selecting a few examples of silicone droplets and instructing the software to find similar particles. The system uses a sophisticated method known as statistical pattern recognition to classify every particle automatically as being one of the class (i.e., silicone droplets) or not (3).
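
The details of the instrument's classifier are proprietary; the sketch below only illustrates the general statistical pattern-recognition idea under simplified assumptions, using per-feature z-scores in place of a full statistical model. The feature names, example values, and cutoff are all hypothetical.

```python
import numpy as np

def build_filter(examples: np.ndarray):
    """Summarize user-selected example particles by the mean and
    spread of each measured feature."""
    return examples.mean(axis=0), examples.std(axis=0) + 1e-9

def matches(particle: np.ndarray, mean, std, cutoff: float = 3.0) -> bool:
    """Accept a particle when its root-mean-square z-score relative
    to the example cluster falls below the cutoff (an assumed value)."""
    z = (particle - mean) / std
    return float(np.sqrt(np.mean(z ** 2))) < cutoff

# Hypothetical features per particle: [ESD (um), circularity, aspect ratio].
silicone_examples = np.array([
    [12.1, 0.97, 1.02],   # hand-picked silicone droplets:
    [ 9.8, 0.95, 1.05],   # nearly circular, aspect ratio ~1
    [15.3, 0.98, 1.01],
    [11.0, 0.96, 1.03],
])
mean, std = build_filter(silicone_examples)

candidate = np.array([10.5, 0.96, 1.04])   # round, droplet-like -> match
aggregate = np.array([14.0, 0.55, 2.60])   # elongated, irregular -> no match
print(matches(candidate, mean, std), matches(aggregate, mean, std))
```

Once such a filter is built from a handful of examples, it can be applied automatically to every particle image captured during a run.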

Figure 2: Results of a FlowCAM run with a protein-based biologic sample, including (left) graphs and summary statistics and (right) representative particle images, including protein aggregates and silicone droplets.

In this example, out of the 2844 original particles in the run, 1788 were identified as silicone droplets. Because USP <788> addresses particles larger than 10 µm and larger than 25 µm, the particles were sorted into two categories (see Table I). One can see a dramatic difference between the particle counts for the two categories once the silicone droplets have been removed. If the silicone droplets are considered to be a harmless byproduct of the delivery method, then excluding them from the analysis will yield different results than would be found using light obscuration according to <788>.

Table I: Number of particles found in each size category before and after silicone droplets were removed.
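
To make the bookkeeping behind Table I explicit, the following is a minimal sketch of the sorting logic, assuming each particle record carries an equivalent spherical diameter and a flag set by the silicone-droplet filter (the field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Particle:
    esd_um: float        # equivalent spherical diameter, in micrometers
    is_silicone: bool    # flag assigned by the silicone-droplet filter

def usp788_counts(particles, exclude_silicone=False):
    """Count particles in the two USP <788> size categories."""
    pool = [p for p in particles if not (exclude_silicone and p.is_silicone)]
    return {
        ">10 um": sum(1 for p in pool if p.esd_um > 10),
        ">25 um": sum(1 for p in pool if p.esd_um > 25),
    }

# Example: compare counts before and after removing silicone droplets.
run = [Particle(12.0, True), Particle(30.0, False), Particle(11.5, True)]
print(usp788_counts(run))                         # all particles
print(usp788_counts(run, exclude_silicone=True))  # silicone removed
```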

The first factor to consider when using dynamic imaging particle analysis is resolution. Unlike the human eye, which sees the world as continuous, digital images are sampled versions of the real world: any given image is divided into discrete picture elements, or pixels, that form the whole. Thus, the imaging sensor has only a discrete number of samples of the object being captured. The easiest way to visualize this is to project the sensor geometry onto the object being captured, as shown in Figure 3. In this diagram, the overall system magnification is 200%, so that the object's image is two times larger than the actual object. A more meaningful way to express this is in terms of a calibration factor equal to the size of one pixel of the sensor projected onto the object. In this case, the calibration factor would be 2.5 µm/pixel; one pixel of the image covers 2.5 µm in length or width. Because a pixel is the smallest unit of a digital image, in theory the system can measure down to 2.5 µm.

Figure 3: Conceptual diagram of object and image space in an optical system.

However, sampling prevents the system from truly measuring at that level. The Nyquist–Shannon sampling theorem states that when a continuous signal is converted to a discrete one, the signal can only be resolved if it is sampled at a rate of at least twice its highest frequency. In spatial terms, resolving a 2.5-µm feature requires a sample spacing of no more than 1.25 µm. Conversely, with a 2.5-µm pixel size, the smallest object the system can possibly resolve is 5 µm. In particle analysis, "resolve" at this limit applies only to counting a particle, not to any higher-level description of the particle, such as shape. Indeed, the higher the order of the measurement desired, the more resolution is needed (3).
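
As a quick numerical check of the example above (assuming a 5-µm physical sensor pixel, consistent with the stated 2.5-µm/pixel calibration factor at 200% magnification):

```python
sensor_pixel_um = 5.0     # physical pixel size on the sensor (assumed)
magnification = 2.0       # 200% overall system magnification

# Calibration factor: size of one sensor pixel projected onto the object.
calibration_um_per_px = sensor_pixel_um / magnification   # 2.5 um/pixel

# Nyquist: at least two samples (pixels) are needed across a feature
# just to count it; higher-order measurements such as shape need more.
min_countable_um = 2 * calibration_um_per_px              # 5.0 um

print(calibration_um_per_px, min_countable_um)
```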

As a result of the diffraction limits inherent to microscopy-based systems and the sampling considerations discussed above, dynamic particle imaging systems are limited to counting particles no smaller than about 1 µm and to differentiating shape information for particles no smaller than about 2–3 µm (4). Imaging particles below these limits requires electron microscopy, in which the sample size and the number of particles that can be imaged are so limited that statistically significant particle counts cannot be achieved. Synthetic imaging techniques, which produce an image by sampling some other characteristic (e.g., Brownian motion or atomic force microscopy), can also be used for smaller particles, but these do not produce direct optical images.

The second factor to consider when using dynamic imaging particle analysis is the effect of thresholding. Not only are digital images quantized in the spatial domain into a limited number of pixels, but each pixel is also quantized in terms of gray-scale (or, in color imaging, color) resolution. In most typical systems, the gray-scale (i.e., intensity) value of each pixel is limited to 256 levels, or eight bits. In a color system, there are eight bits each of red, green, and blue. To make rapid measurement calculations on the image data, particle image analysis reduces the gray-scale intensity to a single bit (i.e., on or off) through a thresholding process. In imaging particle analysis, this thresholding is performed by comparing each pixel of an incoming image that may contain particles with the same pixel of a background image captured when no particles were present in the system. Because most systems are "bright-field," or backlit, a particle in the optical path reduces the amount of light passing through to the camera sensor; the incoming pixel intensity is therefore darker (i.e., a smaller number) than the calibrated background value for the same pixel. For this reason, most imaging particle analysis systems define a threshold as either a delta value or a percent value darker than the background.
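
A minimal sketch of this delta-style dark threshold, assuming 8-bit gray-scale images stored as NumPy arrays:

```python
import numpy as np

def dark_threshold(frame: np.ndarray, background: np.ndarray,
                   delta: int = 20) -> np.ndarray:
    """Mark a pixel 'on' when it is at least `delta` gray levels darker
    than the calibrated background image (both assumed 8-bit)."""
    # Work in a signed type so the subtraction cannot wrap around.
    diff = background.astype(np.int16) - frame.astype(np.int16)
    return diff >= delta
```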

For opaque particles, thresholding in this fashion (i.e., detecting pixels darker than the background) works quite well. However, as previously mentioned, protein aggregates are semitransparent and amorphous. In fact, because of the way light is bent through the structure of the aggregate, many of the pixels within the aggregate will be brighter than the background. If only a dark threshold is used, a single aggregate will therefore be chopped up into multiple small particles by the thresholding process, resulting in a severe overcounting of small particles and an undercounting of large particles (5).

To avoid this problem, the imaging particle analysis system should allow for thresholding based on pixels that are either darker or lighter than the background. While this solution is certainly helpful, it still may allow the thresholding process to form some image artifacts. Fortunately, common image-processing algorithms are available to overcome this problem by using neighborhood analysis to group disparate clusters of thresholded pixels into logical whole images. Figure 4 shows the differences obtained for a thresholded image of a single protein aggregate image using a dark-only pixel threshold, a dark-and-light pixel threshold, and finally, a dark-and-light pixel threshold plus neighborhood analysis.
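
Extending the sketch above, a dark-and-light threshold followed by a simple neighborhood step might look like the following. Morphological closing before connected-component labeling is one common way to group the fragments; it is not necessarily the method a given instrument uses.

```python
import numpy as np
from scipy import ndimage

def segment_particles(frame: np.ndarray, background: np.ndarray,
                      delta: int = 15):
    """Threshold pixels that are `delta` gray levels darker OR lighter
    than the background, then merge nearby fragments into whole particles."""
    diff = frame.astype(np.int16) - background.astype(np.int16)
    binary = np.abs(diff) >= delta            # dark-and-light threshold

    # Neighborhood analysis: close small gaps so the scattered fragments
    # of a semitransparent aggregate join into one connected region.
    closed = ndimage.binary_closing(binary, structure=np.ones((5, 5)))

    labels, count = ndimage.label(closed)     # one label per whole particle
    return labels, count
```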

Figure 4: The effect of varying threshold on the binarization of a protein-aggregate image. Red pixels are particles, based on the threshold. The green boxes show enclosed particles found after thresholding. Image 1 is the original protein-aggregate image. Image 2 shows the result of a threshold of 25 gray levels darker than the background. Image 3 shows the result of a threshold of 15 darker. Image 4 shows the result of a threshold of 15 lighter. Image 5 shows the result of thresholding 15 lighter and darker. Image 6 is the same as Image 5, but with the addition of neighborhood analysis.

The third factor to consider when using dynamic imaging particle analysis is the effect of image quality. Although the overall topic of image quality is quite broad and well beyond the scope of this article, a basic tenet of the subject is that image sharpness is the most critical measure of image quality (6). Indeed, in imaging particle analysis, the accuracy of the measurements obtained depends directly on the sharpness of the particle images.

Figure 5: Thresholded images of National Institute of Standards and Technology traceable 10-µm spheres. Sphere images in sharp focus are on the left, and less sharp images are on the right. The table shows variance in measured diameter based on different threshold values.

Figure 5 demonstrates this by showing images and measurements obtained in an imaging particle analysis system for National Institute of Standards and Technology traceable size bead standards. The images at left are beads in sharp focus, and the images at right are beads in less sharp focus. The variation in size and shape caused when the bead images are less sharp is easy to see, and Table II shows the variation in size measurements that would be obtained by the system with different thresholds. For 10-µm calibrated spheres, the variation in measurement for the blurry images is more than 12 µm for a difference of 100 in threshold value, whereas the variation in measurement for the sharp images is only 1.67 µm over the same threshold range. It is clear that image sharpness greatly affects the accuracy and precision of particle measurements.

Table II: Equivalent spherical diameters (ESD) measured for the 10-µm calibrated spheres at different threshold values, for the sharp and less sharp images.
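
The sensitivity shown in Table II can be reproduced qualitatively with synthetic images: a sharp-edged dark disk and a Gaussian-blurred copy of it, thresholded at several delta values, with ESD computed from the thresholded pixel area via the standard relation ESD = 2·sqrt(area/π). The calibration factor, gray levels, blur level, and thresholds below are all illustrative.

```python
import numpy as np
from scipy import ndimage

CAL = 0.5  # assumed calibration factor, um per pixel

# Synthetic 10-um-diameter opaque sphere on a bright background.
yy, xx = np.mgrid[0:64, 0:64]
radius_px = np.hypot(yy - 32, xx - 32)
sharp = np.where(radius_px <= 10 / (2 * CAL), 50.0, 200.0)  # gray levels
blurry = ndimage.gaussian_filter(sharp, sigma=3)            # defocused copy

def esd_um(image: np.ndarray, background: float = 200.0,
           delta: float = 25.0) -> float:
    """ESD from the area of pixels at least `delta` darker than background."""
    area_px = np.count_nonzero(background - image >= delta)
    return 2.0 * np.sqrt(area_px / np.pi) * CAL

for delta in (25.0, 75.0, 125.0):
    print(f"delta={delta:>5}: sharp={esd_um(sharp, delta=delta):.2f} um, "
          f"blurry={esd_um(blurry, delta=delta):.2f} um")
```

The sharp disk's ESD barely moves as the threshold changes, while the blurred disk's ESD swings by several micrometers over the same range, mirroring the behavior reported in Table II.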

Because the filtering operations that automatically separate various particle types in imaging particle analysis are based on the image measurements, it seems to follow that less-than-sharp images with inaccurate measurements would not produce good-quality particle characterization. To demonstrate this idea with protein aggregates, the author analyzed a sample protein-based therapeutic with the FlowCAM using a deeper-than-normal flow cell to yield particle images of varying sharpness. Two image libraries were created for use with a statistical pattern-recognition algorithm. One contained eight aggregate images in sharp focus, and the other contained eight aggregate images in less sharp focus. Each library was then used to perform a statistical pattern match on the entire run of aggregate particles.

The results are shown in Figure 6. Roughly 40% fewer aggregates were found using the less focused library images (91 versus 152 particles/mL). In statistical filtering, this occurs because the higher variance in the measurements causes the library particles to form a looser, more ambiguous cluster in the n-dimensional pattern-recognition space (3). This looser cluster, in turn, yields lower statistical confidence in particle matches, and therefore a lower match rate.
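
The mechanism can be illustrated with a deliberately oversimplified one-feature toy model, not the instrument's actual algorithm: fit a Gaussian to the library feature, and accept a particle only when its density under that model clears a fixed confidence floor. With the illustrative numbers below, doubling the library's spread roughly halves the fraction of true aggregates that clear the floor.

```python
import numpy as np

def match_fraction(lib_sigma: float, population: np.ndarray,
                   mu: float = 10.0, min_conf: float = 0.19) -> float:
    """Fraction of particles whose Gaussian density under the fitted
    library model exceeds a fixed confidence floor."""
    z = (population - mu) / lib_sigma
    pdf = np.exp(-0.5 * z ** 2) / (lib_sigma * np.sqrt(2.0 * np.pi))
    return float(np.mean(pdf > min_conf))

# Hypothetical aggregate feature values present in the run.
population = np.linspace(7.0, 13.0, 601)

print(match_fraction(lib_sigma=1.0, population=population))  # sharp library
print(match_fraction(lib_sigma=2.0, population=population))  # blurry library
```

A wider library spread lowers the peak confidence the model can assign, so fewer genuine aggregates clear the same floor, which is the looser-cluster effect described above.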

Not only will the resulting match rate be lower for less sharp particle libraries, but the inaccuracy of the measurements can also lead to false positives. To summarize this discussion of image quality: Fuzzy Images = Fuzzy Measurements = Fuzzy Classifications.

Figure 6: Results of statistical pattern recognition with two libraries of various image quality. Results on left (152 particles/mL) used the sharp library particles, and results on the right (91 particles/mL) used the less sharp library particles.

For the characterization of protein aggregates and other particulates in biologics, volumetric techniques such as light obscuration, laser diffraction, and electrozone (Coulter) sensing have limited effectiveness because each must assume that particles are spherical. This assumption means that these techniques cannot distinguish protein aggregates from other particulates, such as silicone droplets, found in biologics.

Imaging particle analysis can overcome the limitations of these volumetric techniques by measuring shape and gray-scale parameters of particles. It also has the benefit of detecting transparent particles, such as protein aggregates, where techniques such as light obscuration fail or mischaracterize the particles. However, as with any other method of particle analysis, it is important to understand the limitations and factors that affect particle measurements using this method. This article has shown that three factors must be kept in mind when evaluating the efficacy of an imaging particle analysis solution for characterizing biologics: resolution, thresholding, and image quality. Once these factors are understood, one can have higher confidence in one's interpretation of results based on this technique.

LEW BROWN is the director of marketing at Fluid Imaging Technologies, 65 Forest Falls Dr., Yarmouth, ME 04096, lew@fluidimaging.com.

REFERENCES

1. S. Aldrich, US Pharmacopeia Workshop on Particle Size: Particle Detection and Measurement (Rockville, MD, 2010).

2. J.F. Carpenter et al., J. Pharm. Sci. 98 (4), 1201–1205 (2009).

3. L. Brown, "Particle Image Understanding–A Primer," www.fluidimaging.com/resource-center-whitepapers.htm, accessed July 12, 2011.

4. L. Brown, "Imaging Particle Analysis: Resolution and Sampling Considerations," www.fluidimaging.com/resource-center-whitepapers.htm, accessed July 12, 2011.

5. J.S. Pedersen, Bio-Process International European Conference and Exposition (Nice, France, 2011).

6. L. Brown, "Imaging Particle Analysis: The Importance of Image Quality," www.fluidimaging.com/resource-center-whitepapers.htm, accessed July 12, 2011.