Next-generation therapeutics and regulatory requirements create demand for complex, fit-for-purpose tests.
Bioassays for use in biologic drug development and final product release must be robust and meet an array of regulatory requirements. As the complexity of next-generation biopharmaceuticals increases, so does the complexity of the bioassays necessary for effective evaluation of their performance across a spectrum of activities. Collaboration is essential between drug developers, testing laboratories, and regulatory authorities to ensure that new bioassays consistently serve their intended purpose.
Regulators expect bioassays for any new biologic to reflect the mechanism of action (MOA) of the drug in question, have stability-indicating properties, and be validatable, as outlined in International Council for Harmonization (ICH) Q2(R1) (1) and United States Pharmacopeia (USP) General Chapter <1033> (2). “Assays need to be adequately robust, accurate, and precise to ensure consistency in batch manufacturing and to provide meaningful data to support stability claims,” asserts Sharon Young, scientific manager, analytical and formulation sciences with Thermo Fisher Scientific.
MOA is important because it provides clues, sometimes the most important clue, as to what is happening in the patient in vivo. “While the MOA rarely mimics exactly what is happening in the patient, in the best-case scenario the bioassay will mimic what is relevant for the patient (i.e., what makes the drug efficient),” explains Ulrike Herbrand, scientific director, global in-vitro bioassays at Charles River Laboratories.
Developing bioassays intended to measure the potency of biologics that truly reflect the relevant MOAs involves selecting appropriate indicator cell lines and relevant readouts that measure expected cell responses, adds Weihong Wang, technology development manager, cell and molecular biology services at Eurofins Lancaster Laboratories. “A master and/or working cell bank should be created for indicator cell lines and be well-characterized in order to assure meaningful interpretation of assay results and avoid artifacts unrelated to drug effects,” she says.
In addition, a quantitative method is generally expected for potency measurement, and results are usually expressed as percent relative potency compared with a fully characterized reference standard, according to Wang. Sponsors should avoid using a surrogate assay unless they can provide sufficient justification addressing the scientific rationale (MOA) and comparability to the traditional assays, which may be practically challenging or less robust.
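Under a parallel-curve model (as in USP <1033>), percent relative potency reduces to a ratio of effective concentrations. The following is a minimal sketch, assuming four-parameter logistic (4PL) dose-response curves and hypothetical fitted values, not any specific product's data:

```python
def four_pl(conc, lower, upper, ec50, slope):
    """Four-parameter logistic (4PL) response at a given concentration."""
    return lower + (upper - lower) / (1.0 + (ec50 / conc) ** slope)

def percent_relative_potency(ec50_ref, ec50_test):
    """For parallel curves, relative potency is the EC50 ratio:
    a left-shifted test curve means the test article is more potent."""
    return 100.0 * ec50_ref / ec50_test

# Hypothetical fitted parameters: shared asymptotes and slope (parallelism),
# with the test EC50 shifted relative to the reference standard.
rp = percent_relative_potency(ec50_ref=1.0, ec50_test=1.25)
print(f"relative potency: {rp:.0f}%")  # 80% -> test lot is less potent
```

In practice the EC50s come from constrained 4PL fits of reference and test curves run on the same plate, and the result is reported only after a parallelism check passes.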
A bioassay should also be able to determine the stability of a drug and for how long it will be stable. Stress tests are conducted over a prolonged time at moderate conditions, including the recommended storage conditions, and the results help determine how a drug should be stored (at what temperature and for how long), according to Herbrand. She also notes that accelerated stress testing at various stress conditions (e.g., temperature, oxidation, deamidation, shear-stress, repeated freeze-thaw cycles) is required to determine which types of alterations the chosen bioassay is capable of detecting by showing altered activity.
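Accelerated stress data are sometimes extrapolated to the recommended storage condition using Arrhenius kinetics, though the model often breaks down for biologics (aggregation and other degradation pathways need not follow it). A rough sketch, with a hypothetical activation energy and degradation rate:

```python
import math

R = 8.314  # J/(mol*K), gas constant

def arrhenius_rate(k_ref, t_ref_c, t_c, ea=100e3):
    """Scale a degradation rate constant measured at t_ref_c (deg C) to
    t_c, assuming Arrhenius behavior with activation energy ea (J/mol).
    The default ea is an arbitrary illustrative value."""
    t_ref, t = t_ref_c + 273.15, t_c + 273.15
    return k_ref * math.exp(-ea / R * (1.0 / t - 1.0 / t_ref))

# Hypothetical: 1% potency loss per month observed at 25 deg C; the
# predicted loss rate at a 5 deg C storage condition is far lower.
k_storage = arrhenius_rate(k_ref=0.01, t_ref_c=25.0, t_c=5.0)
print(f"predicted loss at 5 C: {k_storage:.5f} per month")
```

Any such extrapolation would still need confirmation by real-time stability data at the recommended storage condition.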
“Another key expectation for a bioassay is fulfillment of validation requirements by meeting the preset validation acceptance criteria, which is required for lot release and stability testing of the biologic at the end of clinical Phase II at the latest,” Herbrand observes.
Young adds that bioassays should employ statistical monitoring to detect any shifts in performance over time and any changes in reference standard potency in real time. The use of more advanced statistical tools such as computational simulation can raise questions from regulators because they may seem unnecessarily complex, but such approaches can provide a much larger data set for determining assay acceptance criteria (3), she notes. The use of animal models is discouraged, Young comments, but is sometimes required and is acceptable if reasonably justified.
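Statistical monitoring of this kind is often implemented as a simple control chart over historical relative potency results. A minimal sketch, assuming hypothetical %RP data, three-sigma limits, and one common run rule (all names and numbers are illustrative):

```python
from statistics import mean, stdev

def control_limits(history):
    """Three-sigma control limits from historical relative potency results."""
    m, s = mean(history), stdev(history)
    return m - 3 * s, m, m + 3 * s

def flag_shift(results, center, run_length=8):
    """Simple run rule: flag if run_length consecutive results fall on the
    same side of the center line, a possible shift in assay performance
    or in reference-standard potency."""
    run, last_side = 0, 0
    for r in results:
        side = (r > center) - (r < center)
        run = run + 1 if (side == last_side and side != 0) else (1 if side else 0)
        last_side = side
        if run >= run_length:
            return True
    return False

history = [98, 101, 99, 102, 100, 97, 103, 99, 101, 100]  # hypothetical %RP
lcl, center, ucl = control_limits(history)
print(f"limits: {lcl:.1f} / {center:.1f} / {ucl:.1f}")
print(flag_shift([101, 102, 103, 101, 104, 102, 103, 101], center))  # True
```

Each in-limit result is appended to the history, so a drifting reference standard shows up as a sustained run on one side of the center line rather than a single out-of-limit point.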
Ensuring the bioassay is suitable for transfer to other labs is not exactly a key requirement from a regulatory perspective, but nevertheless, Herbrand says it is an important point to consider. “A bioassay that is ridiculously difficult for other labs to use can delay programs since it takes too long to establish the method in a GMP-compliant [quality control] QC lab,” she explains.
Reporter-gene assays are a category of surrogate assays, notes Wang, that have been used more often in the past several years, especially in place of assays that require primary cells, where performance can be quite variable and lack the robustness desired for a QC testing method. They are becoming particularly common in early development and are increasingly accepted by regulatory authorities, adds Young.
As an example, Wang points to antibody-dependent cell-mediated cytotoxicity (ADCC) reporter-gene assays, which have been used to measure Fc receptor-mediated responses for monoclonal antibodies (mAbs) in place of traditional peripheral blood mononuclear cell (PBMC)/primary natural killer (NK) cell-based cytotoxicity assays. “Instead of looking at direct cell killing (by primary NK cells), the assay measures activation of signal transduction associated with the cellular event,” she explains. The resulting reporter-gene assay is easy to perform, has a much shorter duration, and in general delivers excellent performance.
The key to successful use of reporter-gene assays, adds Young, is to have additional bioassays that truly reflect the biological mechanism(s) of action and then show that results from the reporter-gene assay are comparable or superior to those from bioassays that measure the cellular response(s) directly. “Bridging results need to include a large number of lot release samples as well as stability and force-degraded samples. Bridging studies should also include isolated impurities in these comparisons to directly demonstrate that all potential changes in the molecule can be detected in the reporter-gene assay comparably to, or better than, in the direct assay(s),” she observes.
Wang highlights that the increasing use of reporter-gene assays demonstrates that mechanism of action is the foremost characteristic; once that link is established, solid method performance, including robustness, is the other key element of a successful bioassay.
The top trends in bioassay development observed at Thermo Fisher Scientific include the implementation of ready-to-use cell banks and automation. “Thaw-and-use cell banks can be thawed and plated directly in the bioassay, eliminating the need to spend a large amount of resources on maintaining actively growing cultures. It also simplifies monitoring of assay performance because impact from the age (passage number) of the culture need not be tracked and can reduce method variability attributable to variations in culture conditions between testing occasions,” Young explains.
Laboratory/assay automation, from simple pipetting robots to end-to-end automation, is helping to increase throughput and reduce the risk of laboratory error, according to Herbrand. It also allows for miniaturization, which she says enables six to seven tests to be performed simultaneously, affording higher throughput, lower reagent consumption, and lower cost.
For instance, the use of automated liquid handlers is becoming increasingly common in bioassays because it significantly reduces assay variability attributable to pipetting, Young says. “Automation also mitigates variability attributable to positional bias by enabling randomization of plate layouts within and between plates without concern for analyst error, which significantly mitigates plate position and sequence (order of addition of reagents) sources of variability,” she remarks.
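Layout randomization of the kind Young describes amounts to shuffling the sample-to-well assignment before the liquid-handler worklist is generated. A minimal sketch for a 96-well plate, with hypothetical sample names:

```python
import random

def randomized_layout(samples, rows="ABCDEFGH", cols=12, seed=None):
    """Assign samples to wells of a 96-well plate in random order to
    mitigate positional bias (edge effects, drift along the fill order)."""
    wells = [f"{r}{c}" for r in rows for c in range(1, cols + 1)]
    rng = random.Random(seed)
    rng.shuffle(wells)
    return dict(zip(wells, samples))

# Hypothetical reference and test dilution series, interleaved at random.
series = [f"ref_d{i}" for i in range(1, 9)] + [f"test_d{i}" for i in range(1, 9)]
layout = randomized_layout(series, seed=42)
print(len(layout), "wells assigned")
```

Fixing the seed makes a given plate map reproducible and auditable while still varying the layout between runs when the seed is changed.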
With the fast growth of the gene- and cell-therapy pipeline, more potency assays for this category of product are being submitted to regulatory agencies, according to Wang. Potency assays for cell- and gene-therapy products are generally more complicated than those for recombinant therapeutic proteins, as they need to demonstrate several characteristics of the product.
These products, according to Herbrand, require bioactivity testing, often in a multistep approach. For the first step, molecular methods such as a quantitative polymerase chain reaction (PCR) or droplet digital PCR are often the methods of choice to confirm the desired re-expression or suppression effect in vitro at the RNA level. In addition, she says the re-expression or suppression of a gene of interest can be shown on the protein level using enzyme-linked immunosorbent assay (ELISA) testing, or in the case of receptors, via flow cytometry. In some cases, a reporter-based method might be applicable as an MOA-reflecting, cell-based bioactivity assay to confirm the effect on the functional level.
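At the RNA level, qPCR results are commonly reduced to a relative expression value via the comparative Ct (2^-ΔΔCt) method. A minimal sketch with hypothetical Ct values for a transgene normalized to a housekeeping gene (the numbers are illustrative, not from any cited study):

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Comparative Ct (2^-ddCt) relative expression: target gene
    normalized to a reference gene, treated vs. control cells."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Cts: transduced cells cross the threshold ~3 cycles
# earlier for the transgene than untreated cells -> ~8-fold re-expression.
print(fold_change(22.0, 18.0, 25.0, 18.0))  # 8.0
```

For a suppression product the same calculation would return a value below 1, so a single readout covers both re-expression and knockdown confirmation.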
“An assay for a gene therapy usually involves transfecting/infecting a cell line followed by measurement of the expression of the intended target gene at the nucleic acid or protein level. In addition, whenever possible, the function of the expressed protein should be confirmed, for example, by enzymatic assays, or receptor binding, etc., depending on the MOA. Therefore, often times several assays are needed to fully address all aspects of the product characteristics,” explains Wang.
Potency assays for cell therapy products also have their unique challenges. “First of all,” states Wang, “it may be more difficult to correlate in-vitro potency measurements to clinical efficacy compared to typical recombinant protein therapeutics.” Second, depending on the product, she notes that it is sometimes difficult to truly establish a “reference standard” to which subsequent lots may be compared. Third, performance of functional assays, in general, tends to be less robust.
Because of these issues, setting specifications can also be challenging, according to Wang. “Last but not least,” she observes, “some cell-therapy products require fast delivery to patients; therefore, the balance between thorough characterization and short assay turn-around time may need to be carefully considered.”
“Because multiple assays may need to be deployed, from phenotypic (e.g., flow cytometry) to functional confirmation (e.g., cytokine release and/or cytotoxicity for chimeric antigen receptor T-cell therapies), these assays need to be developed as part of a ‘matrix approach’ in order to fully characterize the product, assure efficacy, and gain regulatory acceptance,” Wang says.
A recent approach noted by Young that has gained approval is flow cytometry detection of biomarkers that correlate with function, such as cell survival or differentiation. “Analysis of the selected biomarkers must be able to detect unacceptable behavior of the cells, and not simply rely on cell identity markers that are unlikely to do so under conditions that affect cell function,” she notes. The rationale for selection of the biomarkers must be clear, particularly when a complex assay matrix is proposed, and the assay must be shown to be capable of rejecting a sub-par batch, she adds.
One of the main challenges, asserts Herbrand, is that there currently are no clear guidelines on how to evaluate bioactivity for cell-therapy and gene-therapy products. “For now,” she says, “these therapies are being considered on a case-by-case basis by regulators. Many of the gene therapy companies are currently running tests that more or less reflect what would be required for the development phase, but they haven’t nailed down what they need for QC release of a marketed drug.”
What isn’t in question is the complexity of cell and gene therapies. “What is understood is that the safest course to take is to make sure we reflect that the drug is doing its job on all relevant levels. Twenty years ago, we pondered the same questions with mAbs and what was required for lot release. Today, at least we have imprecise guidelines,” Herbrand comments.
Many molecules have more than one mode of action, particularly bispecific antibodies, and regulators increasingly want a single assay that can reliably detect changes in potency through those multiple mechanisms, according to Young. “Combined approaches measuring the activities of both epitopes within the same assay rather than reporting two independent results for the activities of the individual arms of the bispecific have an excellent chance of being accepted by regulators,” Herbrand agrees. The reason: it is important that bioassays for potency of bispecific molecules reflect any additive or synergistic effects of the bispecific mechanisms of action.
“This need,” Young notes, “can create challenges in developing the method, particularly in cell-line selection, as well as in interpreting the results. Results for a bispecific with a synergistic effect will show changes in EC50 as well as in asymptotes, for which tests of parallelism are not appropriate. Differences in asymptotes are traditionally unacceptable per USP guidance, as sample curves must be similar to reference curves for calculation of relative potency.”
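The parallelism requirement Young refers to is often implemented as an equivalence test: ratios of the non-EC50 curve parameters must fall within preset bounds before a relative potency is reported. The sketch below uses hypothetical bounds and 4PL parameter values to show why a synergy-driven asymptote shift fails such a check; it is a simplified illustration, not any specific regulatory procedure:

```python
def parallel_within_bounds(params_ref, params_test, bounds=(0.8, 1.25)):
    """Equivalence-style parallelism check on a 4PL fit: ratios of the
    lower asymptote, upper asymptote, and slope (test vs. reference)
    must fall within preset bounds. Bounds here are hypothetical."""
    lo, hi = bounds
    keys = ("lower", "upper", "slope")
    return all(lo <= params_test[k] / params_ref[k] <= hi for k in keys)

ref = {"lower": 5.0, "upper": 100.0, "slope": 1.0, "ec50": 1.0}
similar = {"lower": 5.2, "upper": 98.0, "slope": 1.05, "ec50": 1.2}
synergy_shifted = {"lower": 5.0, "upper": 140.0, "slope": 1.0, "ec50": 0.6}
print(parallel_within_bounds(ref, similar))          # True
print(parallel_within_bounds(ref, synergy_shifted))  # False: asymptote moved
```

For the similar curve, only the EC50 differs and a relative potency can be reported; for the synergy-shifted curve, the changed upper asymptote fails the check even though potency has clearly increased, which is the gap the single-value approaches discussed next try to close.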
Currently, although the ideal assays from a regulatory and a sponsor’s standpoint would be dual reporter systems, it is still more common to independently test the bioactivity using two different assays to measure each arm of the bispecific antibody, Herbrand remarks. “While easier to develop, they do not allow one to fully see the synergistic effects and combined activity of the antibodies,” she notes.
Young observes that at the CASSS Bioassays 2016 conference, Bhavin Parekh presented an approach for calculating relative potency/efficacy for a bispecific product with synergistic activity that he developed based on the field of pharmacology (4). She thinks this simple approach to reach a discrete value to compare concomitant shifts in EC50 and asymptotes may be able to establish a precedent for regulatory acceptance.
Common in drug discovery, three-dimensional (3D) cell culture models have recently been attracting interest among biologics manufacturers for use in bioassay development as well. “The use of 3D cell cultures and 3D bioprinting has the potential to enable assays in more complex cellular environments, more like tissue structures,” Young explains. “Such models,” she adds, “are ideal for mimicking the complex cancer microenvironment as well as for culturing cell lines that rely extensively on the extracellular matrix environment for growth, differentiation, and function, such as skeletal muscle cells.”
“These 3D systems are often a better reflection of the in-vivo situation than what is observed for traditional two-dimensional cell-culture-based assays and provide better information regarding what is occurring in the patient,” agrees Herbrand. She does note, though, that it can be tedious to validate such tests, and they are often still time-consuming. Young also says that it can be challenging to control variability, find suitable methods of quantitation for potency results, and implement such complex techniques into a QC environment.
Protein tagging is based on peptide sequences that are attached to proteins to facilitate easy detection and purification of expressed proteins, according to Herbrand. These tags can also be used to identify potential binding partners for the protein of interest. The tags, which Herbrand notes are typically applied using CRISPR/Cas9 gene-editing technology, are of particular interest because they are highly sensitive down to the endogenous level of the protein, often with over seven logs of dynamic range.
Because human primary cell assays are closer to the situation in patients, Herbrand suggests they may be more translatable than cell lines, which are immortalized or modified in their key characteristics and thus could exhibit altered cell behaviors. “However,” she cautions, “human primary cell-based assays are not without risks.” Human primary cells are subject to lot-to-lot variability and to limited availability of suitable source material. In addition, even if it were possible to identify the perfect lot, it would not last forever. “There are no guarantees you won’t have to adjust the assay. And even if you manage to get the assay running again, you have to revalidate it, at least partially, which requires a change in the regulatory filing and months of delays,” Herbrand explains.
Caution is warranted in general when it comes to newer bioassays, according to Young. “As cell culturing techniques advance and increasingly complex multifunctional molecules are developed in the biopharmaceutical industry, bioassay development will become increasingly complex as well,” she says.
One way the industry has responded, according to Wang, is to develop bioassays to support therapeutic biologics as early as possible, particularly when more complex products are involved. “While science continues to be the driving force for bioassay development, sponsors are encouraged to engage regulatory agencies early in order to obtain guidance on their assay development approach and avoid unnecessary surprises and delays in their product development programs,” she says.
Young adds, “As an industry, we are constantly trying to find a better model to assess potency of our molecules, but we must be cautious about the complexity of bioassays. Bioassays are already complex, which results in increased variability and statistical uncertainty and can be difficult to control between labs. We must be careful to avoid unnecessary challenges in method execution from over-complicating bioassays.”
1. ICH, Q2(R1) Validation of Analytical Procedures: Text and Methodology, Step 4 version (1995).
2. USP, USP General Chapter <1033>, “Validation of Biological Assays” (Rockville, MD, 2010).
3. M. Labant, Gen. Eng. News, 36(10), (2016).
4. B. Parekh, “Bioassay Development for Complex Molecules,” presentation at CASSS Bioassays (Silver Spring, MD, 2016).
When referring to this article, please cite it as C. Challener, "Building Better Bioassays," BioPharm International 33 (5) 2020.