
The Power of Hypothesis
Although a powerful statistical method, hypothesis testing can lead to false conclusions if applied incorrectly.


BioPharm International
Volume 21, Issue 6

POWER OF A HYPOTHESIS

Most people use a p-value equal to or less than 0.05 as the criterion for rejecting the null hypothesis. The probability of rejecting the null hypothesis when it is actually true is called a Type I error. The more critical error is the Type II error, in which you accept the null hypothesis when it is actually false. Because hypothesis testing in the biopharmaceutical industry sees its greatest use in comparing a previous lot to a new lot or comparing a sample to a known value, accepting the wrong answer can be detrimental. Fortunately, there are ways to minimize the risk of making a wrong decision in hypothesis testing.
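As a sketch of the second kind of comparison (not from the article; the assay values and target below are hypothetical), the following Python snippet runs a one-sample t-test of a lot against a known value and reads the resulting p-value against the 0.05 criterion:

from scipy import stats

lot_values = [98.7, 101.2, 99.5, 100.8, 99.9, 100.3]  # hypothetical assay results (%)
target = 100.0  # known value the lot is compared against

t_stat, p_value = stats.ttest_1samp(lot_values, popmean=target)

alpha = 0.05  # criterion for rejecting the null hypothesis
if p_value <= alpha:
    print(f"p = {p_value:.3f}: reject the null (a Type I error is possible)")
else:
    print(f"p = {p_value:.3f}: fail to reject the null (a Type II error is possible)")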

For example, increasing the sample size can minimize the risk. Alternatively, increasing the difference needed to be considered statistically different will reduce the risk. The normal distribution can be used to get a rough estimate of the correct sample size. Software packages such as Minitab and JMP use a noncentral t-distribution to calculate the sample size. The following equation gives the normal approximation to the sample size calculation:

n = ((Zα + Zβ) × S / Δ)²
in which n is the number of samples required, S is the sample standard deviation, Δ is the difference to detect, Zα is the Z-value for the α error (1.645 for a one-sided α of 0.05), and Zβ is the Z-value for the β error (1.282 for a β of 0.10).

The α risk is the probability of rejecting a good lot; this is sometimes called the producer's risk. The β risk is the probability of accepting a bad lot; this is sometimes called the consumer's risk.
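A rough sketch of this calculation in Python, using the normal approximation above (the values of S, Δ, α, and β are illustrative assumptions, not from the article):

import math
from scipy.stats import norm

S = 2.0        # sample standard deviation (assumed estimate)
delta = 1.5    # smallest difference worth detecting (assumed)
alpha = 0.05   # producer's risk: probability of rejecting a good lot
beta = 0.10    # consumer's risk: probability of accepting a bad lot

z_alpha = norm.ppf(1 - alpha)  # about 1.645
z_beta = norm.ppf(1 - beta)    # about 1.282

n = ((z_alpha + z_beta) * S / delta) ** 2
print("n =", math.ceil(n), "samples (rounded up)")
# Software such as Minitab or JMP, which uses the noncentral t-distribution,
# will typically return a slightly larger sample size than this approximation.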

RISKS OF HYPOTHESIS TESTING

If the sample size is very large or the variability very small, a hypothesis test may declare a statistically significant difference even when the observed difference is not clinically relevant.

One proposed method that combines the statistical rigor of hypothesis testing with the requirement for a meaningful difference is to prespecify a minimum difference that must be exceeded before the results are considered different. The correct selection of this minimum difference is still being debated in the statistical and scientific community.
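One way such a rule could look in practice is sketched below, assuming hypothetical lot data and a hypothetical prespecified minimum difference (MIN_DIFF); a change is flagged only when it is both statistically significant and at least as large as the prespecified minimum:

import numpy as np
from scipy import stats

previous_lot = np.array([99.8, 100.1, 99.6, 100.4, 100.0, 99.9])  # hypothetical
new_lot = np.array([100.2, 100.5, 100.1, 100.7, 100.3, 100.4])    # hypothetical
MIN_DIFF = 1.0  # prespecified minimum difference considered meaningful

t_stat, p_value = stats.ttest_ind(previous_lot, new_lot)
observed_diff = abs(new_lot.mean() - previous_lot.mean())

significant = p_value <= 0.05
meaningful = observed_diff >= MIN_DIFF
print(f"p = {p_value:.3f}, observed difference = {observed_diff:.2f}")
print("Lots differ meaningfully" if (significant and meaningful)
      else "No meaningful difference established")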

SUMMARY

Although a powerful statistical method, hypothesis testing can lead to false conclusions if applied incorrectly. Whenever possible, use the t-distribution rather than the z-test and normal distribution, because the population standard deviation is never known for sure. Using the correct sample size and a power analysis leads to robust comparisons, especially when a minimum difference is prespecified.

Steven Walfish is the president of Statistical Outsourcing Services, Olney, MD, 301.325.3129,


