Section 7.2: Basics of Hypothesis Testing
- Specify the types of decision errors that can occur when performing a hypothesis test, and give their standard names.
- Relate the probability of a Type I error to the size of the rejection region / significance level \(\alpha\) of a hypothesis test.
- Define the power of a hypothesis test when a particular alternative hypothesis is assumed true.
- Relate the probability of a Type II error to the power of the hypothesis test.
- Explain how the default trade-off between Type I and Type II errors in hypothesis testing is analogous to the presumption of innocence in the US criminal justice system.
- Determine which of the two error types we control in our construction of the rejection region of a hypothesis test (the first simulation sketch after this list makes this concrete).
- Given a test statistic and the sampling distribution of the test statistic when the null hypothesis is true, determine the \(P\)-value corresponding to the observed value of the test statistic.
- Give the correct interpretation of a \(P\)-value as the probability, assuming the null hypothesis is true, of observing a test statistic at least as extreme as the one actually observed.
- Explain what small and large \(P\)-values indicate.
- Given a \(P\)-value for an observed test statistic and the desired size of the rejection region / significance level, determine whether the null hypothesis should be rejected (see the second sketch after this list).
- State what is commonly meant by a \(P\)-value indicating ‘statistical significance.’
- State what, at the end of the day, actually gets declared statistically significant.
- Compare and contrast practical significance and statistical significance (see the third sketch after this list).
- Relate the conclusion of a hypothesis test back to the original claim, and state the conclusion in plain English that a non-statistician could understand.
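
The first few objectives lend themselves to simulation. The sketch below is a minimal illustration, not an example from the text: it assumes a right-tailed one-sample \(z\)-test of \(H_0: \mu = 0\) versus \(H_a: \mu > 0\) with known \(\sigma = 1\), \(n = 25\), and \(\alpha = 0.05\) (all hypothetical choices). It checks that the rejection rate when \(H_0\) is true is close to \(\alpha\), the Type I error probability we control, and it estimates the power, and hence the Type II error probability \(\beta = 1 - \text{power}\), under the particular alternative \(\mu = 0.5\).

```python
# Minimal sketch (assumed setup, not from the text): right-tailed one-sample
# z-test of H0: mu = 0 vs. Ha: mu > 0 with known sigma = 1 and n = 25.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha, n, sigma = 0.05, 25, 1.0
z_crit = norm.ppf(1 - alpha)  # rejection region: z > z_crit, chosen to have size alpha

def rejection_rate(mu_true, reps=100_000):
    """Fraction of simulated samples whose z-statistic lands in the rejection region."""
    samples = rng.normal(mu_true, sigma, size=(reps, n))
    z = samples.mean(axis=1) / (sigma / np.sqrt(n))  # z-statistic computed under H0: mu = 0
    return np.mean(z > z_crit)

print("P(Type I error)  ~", rejection_rate(mu_true=0.0))  # close to alpha = 0.05
power = rejection_rate(mu_true=0.5)                       # assumed alternative mu = 0.5
print("power            ~", power)
print("P(Type II error) ~", 1 - power)                    # beta = 1 - power
```

The asymmetry described in the objectives is visible here: \(\alpha\) is fixed by our choice of rejection region regardless of the truth, while the Type II error probability depends on which particular alternative happens to hold.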
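For the \(P\)-value objectives, here is a companion sketch under the same assumed \(z\)-test, with a hypothetical observed statistic of 2.1: the \(P\)-value is computed from the null sampling distribution \(N(0, 1)\) as the probability of a statistic at least as extreme as the one observed, and comparing it to \(\alpha\) reproduces the reject/fail-to-reject decision.

```python
# Companion sketch: P-value from the null sampling distribution N(0, 1).
# The observed statistic z_obs = 2.1 is hypothetical.
from scipy.stats import norm

z_obs = 2.1
p_right = norm.sf(z_obs)         # right-tailed: P(Z >= z_obs | H0 true), ~0.018
p_two = 2 * norm.sf(abs(z_obs))  # two-tailed: P(|Z| >= |z_obs| | H0 true), ~0.036

alpha = 0.05
print(f"one-sided P-value: {p_right:.4f}")
print(f"two-sided P-value: {p_two:.4f}")
# Small P-value: data this extreme would be rare if H0 were true.
# Decision rule: reject H0 exactly when the P-value is <= alpha.
print("reject H0 at alpha = 0.05?", p_two <= alpha)
```

Note that a small \(P\)-value says the observed statistic would be unusual if \(H_0\) were true; it is not the probability that \(H_0\) is true.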
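Finally, a hedged illustration of practical versus statistical significance, with made-up numbers: at a very large sample size, even an effect far too small to matter in practice can yield a \(P\)-value below \(\alpha\).

```python
# Hypothetical numbers: a huge sample makes a negligible effect 'significant'.
import numpy as np
from scipy.stats import norm

n, sigma = 1_000_000, 1.0
xbar = 0.003                         # observed mean: tiny relative to H0: mu = 0
z = xbar / (sigma / np.sqrt(n))      # z = 3.0
p = 2 * norm.sf(abs(z))              # two-sided P-value, ~0.0027
print(f"z = {z:.1f}, P-value = {p:.4f}")  # statistically significant at alpha = 0.05,
                                          # yet the effect may be practically negligible
```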