Multiple comparisons

Multiple comparison procedures are fundamental in experimental research. Dunnett’s test, which compares multiple treatments to a single control, is particularly common in laboratory studies. When multiple comparisons are made, proper statistical methods are essential to control false positives. This article demonstrates how the number of comparisons affects p-values in Dunnett’s test, implements the procedure in R, and discusses strategies to improve statistical power.

When it comes to confidence intervals and hypothesis testing, there are two important limitations to keep in mind.

The significance level, \(\alpha\), or the confidence interval coverage, \(1 - \alpha\),

  1. applies only to a single test or estimate, not to a series of tests or estimates.
  2. is appropriate only if the estimate or test was not suggested by the data.

Let’s illustrate both of these limitations via simulation using R.
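As a quick preview of the first limitation, here is a minimal sketch (the settings, such as 10 tests on samples of size 20, are illustrative assumptions, not from the original analysis). It repeatedly runs several independent t-tests on data where the null hypothesis is true and counts how often at least one test comes out "significant" at \(\alpha = 0.05\):

```r
# Sketch of limitation 1: the 5% error rate applies per test,
# not to a family of tests.
set.seed(1)
n_sims  <- 2000   # simulated experiments (illustrative)
n_tests <- 10     # independent t-tests per experiment (illustrative)

any_sig <- replicate(n_sims, {
  # p-values from t-tests comparing two null (identical) populations
  p <- replicate(n_tests, t.test(rnorm(20), rnorm(20))$p.value)
  any(p < 0.05)   # did at least one test reject?
})

# Familywise error rate: roughly 1 - 0.95^10, i.e. about 0.40,
# far above the nominal 0.05
mean(any_sig)
```

Even though each individual test holds its 5% error rate, the chance of at least one false positive across the family of ten tests is closer to 40%.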

Pairwise comparison means comparing all pairs of something. If I have three items, A, B, and C, that means comparing A to B, A to C, and B to C. Given \(n\) items, I can determine the number of possible pairs using the binomial coefficient: $$ \frac{n!}{2!(n - 2)!} = \binom{n}{2}$$ Using the R statistical computing environment, we can use the choose() function to quickly calculate this.
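For instance, for the three items above there are three pairs, and the count grows quickly with \(n\):

```r
# choose(n, 2) counts the pairwise comparisons among n items
choose(3, 2)    # A-B, A-C, B-C -> 3
choose(10, 2)   # ten items already yield 45 pairs
```

This rapid growth in the number of pairs is exactly why multiplicity corrections become important as the number of groups increases.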