Chapter 7: Multivariate Distributions
- Explain how a random vector \(\mathbf{X}\) is a vector-valued function from a sample space to a subset of \(\mathbb{R}^{d}\).
- Recognize the notation I will use for vectors: either bold-faced Roman letters (in printed material) or underlined Roman letters (on the board).
- State the definition of a discrete random vector.
- Define the joint probability mass function \(p(x, y)\) for a bivariate random vector \((X, Y)\).
- Use the joint probability mass function \(p(x, y)\) to find the probability that \((X, Y)\) falls in some query set.
- State the two properties that a function \(p(x, y)\) must have to be a joint probability mass function.
- Define the joint cumulative distribution function \(F(x, y)\) for a bivariate random vector \((X, Y)\).
- Use the joint cumulative distribution function \(F(x, y)\) to find the probability that \((X, Y)\) falls in a rectangular query set.
- State the main properties of a joint cumulative distribution function \(F(x, y)\) for a bivariate random vector \((X, Y)\).
- Define the marginal probability mass functions \(p_{X}\) and \(p_{Y}\) for a bivariate random vector \((X, Y)\).
- Explain why marginal probability mass functions are called “marginal.”
- Compute marginal probability mass functions from a joint probability mass function.
- Compute marginal cumulative distribution functions from a joint cumulative distribution function.
- Define the expected value \(E[g(X, Y)]\) of a transformation \(g\) of a discrete random vector \((X, Y)\), and compute the expected value given a probability mass function for the random vector.
- State the main properties of expectations of transformations of discrete random vectors in terms of linearity and single-component expectations.
- Use a TI-30XS calculator (or equivalent) to quickly compute product expectations such as \(E[XY]\) given the probability mass function for a bivariate random vector \((X, Y)\).
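The rectangular-query objective rests on the standard inclusion-exclusion identity for a joint cumulative distribution function: for \(a < b\) and \(c < d\),

```latex
P(a < X \le b,\; c < Y \le d)
  = F(b, d) - F(a, d) - F(b, c) + F(a, c).
```

The two subtracted terms each remove one overlapping half-plane, and the final term adds back the corner region \(\{X \le a, Y \le c\}\) that was removed twice.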
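The joint pmf objectives above can be sketched in a few lines of Python. This is a minimal illustration with a made-up joint pmf stored as a dictionary; the values are assumptions for the example, not from the chapter:

```python
# Hypothetical joint pmf for a bivariate discrete random vector (X, Y),
# stored as a dict mapping (x, y) pairs to probabilities.
p = {
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}

# Property 1: every value is nonnegative.
assert all(v >= 0 for v in p.values())
# Property 2: the values sum to 1.
assert abs(sum(p.values()) - 1.0) < 1e-12

# P((X, Y) in A) for a query set A: sum the pmf over the points of A.
def prob(p, query_set):
    return sum(p.get(point, 0.0) for point in query_set)

# Example: P(X + Y <= 1) sums p over {(0, 0), (0, 1), (1, 0)}.
print(round(prob(p, {(0, 0), (0, 1), (1, 0)}), 10))  # 0.6
```

Points outside the support contribute zero, which is why `p.get(point, 0.0)` is used rather than direct indexing.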