By varying one parameter while holding all the others constant, you can isolate how that parameter affects statistical power.
1. What happens to power as sample size increases? Why?
2. What sample size do we need to achieve a power of 0.8 with a standard deviation of 1, a difference in means of 0.5, and an alpha of 0.05?
3. How does the required sample size in the above scenario change if we change our alpha to 0.0001? Why?
4. What happens to the sample size necessary to achieve a power of 0.80 if we switch to a one-tailed test? Why don't we always use a one-tailed test?
5. How does increasing the effect size affect the power of the test? Why is effect size an important factor?
6. How does changing the significance level (α) affect both the probability of a Type I error and the power of the test? What trade-offs are involved?
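To explore these questions numerically, here is a minimal sketch of a sample-size calculator for a two-sample test, using the standard normal approximation (exact t-based tools such as R's `power.t.test` give slightly larger answers for small samples). The function name `n_per_group` and its parameters are illustrative choices, not from the original exercise; only the Python standard library is used.

```python
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80, tails=2):
    """Approximate per-group sample size for a two-sample test.

    delta: difference in means; sd: common standard deviation.
    Uses the normal approximation
        n = 2 * ((z_{1-alpha/tails} + z_{power}) * sd / delta) ** 2,
    which slightly underestimates the exact t-based answer.
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / tails)   # critical value for the chosen alpha
    z_power = z(power)               # quantile corresponding to target power
    return 2 * ((z_alpha + z_power) * sd / delta) ** 2

# Question 2: delta = 0.5, sd = 1, alpha = 0.05, power = 0.80
print(n_per_group(0.5, 1))                 # ~62.8 per group (t-based: ~64)
# Question 3: a much stricter alpha sharply increases the required n
print(n_per_group(0.5, 1, alpha=0.0001))
# Question 4: a one-tailed test needs fewer subjects for the same power
print(n_per_group(0.5, 1, tails=1))
```

Playing with `alpha`, `power`, `delta` (effect size), and `tails` here mirrors the one-parameter-at-a-time approach described above: each change moves the required sample size in a direction you can reason about from the formula.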