Chapter 7: Multivariate Distributions

  1. Explain how a random vector \(\mathbf{X}\) is a vector-valued function from a sample space to a subset of \(\mathbb{R}^{d}\).
  2. Recognize the notation I will use for vectors: either bold-faced Roman letters (in printed material) or underlined Roman letters (on the board).
  3. State the definition of a discrete random vector.
  4. Define the joint probability mass function \(p(x, y)\) for a bivariate random vector \((X, Y)\).
  5. Use the joint probability mass function \(p(x, y)\) to find the probability that \((X, Y)\) falls in some query set.
  6. State the two properties that a function \(p(x, y)\) must have to be a joint probability mass function.
  7. Define the joint cumulative distribution function \(F(x, y)\) for a bivariate random vector \((X, Y)\).
  8. Use the joint cumulative distribution function \(F(x, y)\) to find the probability that \((X, Y)\) falls in a rectangular query set.
  9. State the main properties of a joint cumulative distribution function \(F(x, y)\) for a bivariate random vector \((X, Y)\).
  10. Define the marginal probability mass functions \(p_{X}\) and \(p_{Y}\) for a bivariate random vector \((X, Y)\).
  11. Explain why marginal probability mass functions are called “marginal.”
  12. Compute marginal probability mass functions from a joint probability mass function.
  13. Compute marginal cumulative distribution functions from a joint cumulative distribution function.
  14. Define the expected value \(E[g(X, Y)]\) of a transformation \(g\) of a discrete random vector \((X, Y)\), and compute the expected value given a probability mass function for the random vector.
  15. State the main properties of expectations of transformations of discrete random vectors in terms of linearity and single-component expectations.
  16. Use a TI-30XS calculator (or equivalent) to quickly compute product expectations such as \(E[XY]\) given the probability mass function for a bivariate random vector \((X, Y)\).
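Several of the objectives above (defining a joint pmf, checking its two required properties, computing marginals, and evaluating \(E[g(X, Y)]\) such as \(E[XY]\)) can be illustrated with a short computational sketch. The joint pmf values below are made up for illustration only; they are not from the course material.

```python
# Hypothetical joint pmf p(x, y) for a discrete bivariate (X, Y),
# stored as a dict mapping (x, y) pairs to probabilities.
p = {
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}

# The two defining properties of a joint pmf:
# (1) every value is nonnegative, (2) the values sum to 1.
assert all(prob >= 0 for prob in p.values())
assert abs(sum(p.values()) - 1.0) < 1e-12

# Marginal pmfs: sum the joint pmf over the other coordinate.
p_X, p_Y = {}, {}
for (x, y), prob in p.items():
    p_X[x] = p_X.get(x, 0.0) + prob
    p_Y[y] = p_Y.get(y, 0.0) + prob

# Expected value of a transformation g of the random vector:
# E[g(X, Y)] = sum over (x, y) of g(x, y) * p(x, y).
def expect(g):
    return sum(g(x, y) * prob for (x, y), prob in p.items())

E_XY = expect(lambda x, y: x * y)  # the product expectation E[XY]
E_X = expect(lambda x, y: x)       # agrees with summing x * p_X(x)

print(p_X, p_Y, E_X, E_XY)
```

With this particular pmf, the marginals are \(p_X(0) = 0.3\), \(p_X(1) = 0.7\) and \(p_Y(0) = 0.4\), \(p_Y(1) = 0.6\), giving \(E[XY] = 0.4\); the same arithmetic is what the calculator-based routine in objective 16 automates.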