Power for Tests of Interaction in Epidemiology
Author Information
Author(s): Stephen W. Marshall
Primary Institution: University of North Carolina at Chapel Hill
Hypothesis
How often does raising the Type I error rate for interaction tests result in a useful gain in power?
Conclusion
Raising the Type I error rate did not usefully improve the power for tests of interaction in many of the scenarios studied.
Supporting Evidence
- Raising the Type I error rate resulted in a useful power gain in only 7 of the 27 scenarios studied (26%).
- In 8 of 27 scenarios (30%), power was already adequate at the conventional 5% Type I error rate.
- In 12 of 27 scenarios (44%), raising the Type I error rate did not boost power to an acceptable level.
Takeaway
Researchers sometimes try to increase the power of interaction tests by tolerating a higher Type I error rate (that is, accepting more false positives), but this study shows that the trade-off often fails to deliver a meaningful gain in power.
Methodology
Power was computed for tests of interaction between two binary exposures across three case-control study sizes, a range of interaction scenarios, and different Type I error rates.
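The kind of calculation described above can be sketched with a standard normal approximation to the Wald test for multiplicative interaction in a case-control study. This is not the paper's exact method; the exposure prevalences, main-effect odds ratios, and interaction odds ratio below are illustrative assumptions, and the variance formula is the usual Woolf-type sum of reciprocal expected cell counts.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF by bisection (adequate for this sketch)."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def interaction_power(n_cases, n_controls, p1, p2, or1, or2, ior, alpha):
    """Approximate power of a two-sided Wald test for multiplicative
    interaction between two binary exposures in a case-control study.
    Assumes the exposures are independent among controls."""
    # Control cell probabilities for the four exposure combinations.
    ctrl = {
        (0, 0): (1 - p1) * (1 - p2),
        (1, 0): p1 * (1 - p2),
        (0, 1): (1 - p1) * p2,
        (1, 1): p1 * p2,
    }
    # Case cell probabilities: control probabilities weighted by the
    # joint odds ratios, then normalized.
    weight = {(0, 0): 1.0, (1, 0): or1, (0, 1): or2, (1, 1): or1 * or2 * ior}
    raw = {k: ctrl[k] * weight[k] for k in ctrl}
    total = sum(raw.values())
    case = {k: v / total for k, v in raw.items()}
    # Woolf-type variance of the log ratio-of-odds-ratios:
    # sum of 1 / (expected count) over all eight cells.
    var = sum(1.0 / (n_cases * case[k]) + 1.0 / (n_controls * ctrl[k])
              for k in ctrl)
    z_alpha = norm_ppf(1.0 - alpha / 2.0)
    return norm_cdf(abs(math.log(ior)) / math.sqrt(var) - z_alpha)

# The three study sizes from the paper, with assumed 30% exposure
# prevalences, main-effect ORs of 1.5, and an interaction OR of 2,
# compared at two Type I error rates.
for n_ca, n_co in [(75, 150), (300, 600), (1200, 2400)]:
    p05 = interaction_power(n_ca, n_co, 0.3, 0.3, 1.5, 1.5, 2.0, 0.05)
    p20 = interaction_power(n_ca, n_co, 0.3, 0.3, 1.5, 1.5, 2.0, 0.20)
    print(f"{n_ca} cases / {n_co} controls: "
          f"power {p05:.2f} at alpha=0.05, {p20:.2f} at alpha=0.20")
```

Under these assumed parameters, the gap between the two alpha levels shrinks as the study grows, which mirrors the pattern the study reports: in some scenarios power is already adequate at 5%, and in others no plausible alpha rescues it.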
Potential Biases
The study may not generalize to other types of studies or exposure scenarios.
Limitations
The study assumed no confounding, no missing data, and focused only on two binary exposures.
Participant Demographics
The study examined case-control designs of three sizes: 75 cases & 150 controls, 300 cases & 600 controls, and 1200 cases & 2400 controls.