Section 7.2: Basics of Hypothesis Testing

After working through this section, you should be able to:

  1. Specify the types of decision errors that can occur when performing a hypothesis test, and give their standard names.
  2. Relate the probability of a Type I error to the size of the rejection region / significance level \(\alpha\) of a hypothesis test.
  3. Define the power of a hypothesis test when a particular alternative hypothesis is assumed true.
  4. Relate the probability of a Type II error to the power of the hypothesis test (a simulation sketch illustrating both error rates and power appears after this list).
  5. Explain how the default balance between Type I and Type II errors in hypothesis testing is analogous to the presumption of innocence in the US criminal justice system.
  6. Determine which of the two error types we directly control when constructing the rejection region of a hypothesis test.
  7. Given a test statistic and its sampling distribution when the null hypothesis is true, determine the \(P\)-value corresponding to the observed value of the statistic.
  8. Give the correct interpretation of a \(P\)-value as the probability of observing a test statistic at least as extreme as the one actually observed, assuming the null hypothesis is true.
  9. Explain what small and large \(P\)-values indicate.
  10. Given a \(P\)-value for an observed test statistic and the desired size of the rejection region / significance level, determine whether the null hypothesis should be rejected (a worked computation appears after this list).
  11. State what is commonly meant by a \(P\)-value indicating ‘statistical significance.’
  12. State what, ultimately, is statistically significant.
  13. Compare and contrast practical significance and statistical significance.
  14. Relate the conclusion of a hypothesis test back to the original claim, and state the conclusion in plain English that a non-statistician could understand.
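
To make objectives 2–4 concrete, here is a minimal simulation sketch in Python. The specific test (a right-tailed one-sample z-test with known \(\sigma\)) and all of the numbers (\(\mu_0 = 50\), \(\sigma = 10\), \(n = 25\), \(\alpha = 0.05\), alternative \(\mu = 55\)) are illustrative assumptions, not examples from the text. Sampling repeatedly under the null hypothesis gives a rejection rate close to \(\alpha\); sampling under a particular alternative estimates the power, from which the Type II error probability follows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the text):
# H0: mu = 50 vs. Ha: mu > 50, known sigma, right-tailed z-test.
mu0, sigma, n, alpha = 50.0, 10.0, 25, 0.05
z_crit = stats.norm.ppf(1 - alpha)      # rejection region: z > z_crit

def rejection_rate(true_mu, reps=100_000):
    """Fraction of simulated samples whose z statistic lands in the rejection region."""
    samples = rng.normal(true_mu, sigma, size=(reps, n))
    z = (samples.mean(axis=1) - mu0) / (sigma / np.sqrt(n))
    return np.mean(z > z_crit)

# Sampling under H0: the rejection rate estimates P(Type I error) = alpha.
print("estimated Type I error rate:", rejection_rate(mu0))    # ~0.05

# Sampling under a particular alternative: the rejection rate estimates the
# power of the test; P(Type II error) = 1 - power.
power = rejection_rate(55.0)
print("estimated power at mu = 55:", power)                   # ~0.80
print("estimated P(Type II error):", 1 - power)               # ~0.20
```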
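
Likewise, for objectives 7–10, here is a short sketch of turning an observed test statistic into a \(P\)-value and a reject / fail-to-reject decision; the observed statistic \(z = 2.31\) and the level \(\alpha = 0.05\) are made-up values for illustration.

```python
from scipy import stats

# Illustrative numbers (assumed, not from the text): an observed z statistic
# and the significance level alpha chosen before seeing the data.
z_obs, alpha = 2.31, 0.05

# The P-value is the probability, computed under H0, of seeing a test
# statistic at least as extreme as the one observed.
p_right = stats.norm.sf(z_obs)            # right-tailed: P(Z >= 2.31), ~0.0104
p_two = 2 * stats.norm.sf(abs(z_obs))     # two-tailed: P(|Z| >= 2.31), ~0.0209

# Decision rule: reject H0 exactly when the P-value is at most alpha.
for label, p in [("right-tailed", p_right), ("two-tailed", p_two)]:
    decision = "reject H0" if p <= alpha else "fail to reject H0"
    print(f"{label}: P-value = {p:.4f} -> {decision} at alpha = {alpha}")
```

A small \(P\)-value means the observed statistic would be rare if the null hypothesis were true, which is why it counts as evidence against the null.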