Engineering Probability Class 27 Thurs 2018-04-26

1   Iclicker questions

  1. Experiment: toss two fair coins, one after the other. Observe two random variables:

    1. X is the number of heads.
    2. Y is the toss on which the first head occurred (1 or 2), with 0 meaning both coins were tails.

    What is P[X=1]?

    1. 0
    2. 1/4
    3. 1/2
    4. 3/4
    5. 1
  2. What is P[Y=1]?

    1. 0
    2. 1/4
    3. 1/2
    4. 3/4
    5. 1
  3. What is P[Y=1 & X=1]?

    1. 0
    2. 1/4
    3. 1/2
    4. 3/4
    5. 1
  4. What is P[Y=1|X=1]?

    1. 0
    2. 1/4
    3. 1/2
    4. 3/4
    5. 1
  5. What is P[X=1|Y=1]?

    1. 0
    2. 1/4
    3. 1/2
    4. 3/4
    5. 1
  6. What's the MAP estimator for X given Y=2? (A brute-force enumeration that checks all of these answers appears after the questions.)

    1. 0
    2. 1
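
These answers can be checked by brute force. Below is a minimal Python sketch (an illustration, not from the text) that simply enumerates the four equally likely outcomes:

    from itertools import product
    from fractions import Fraction

    outcomes = list(product('HT', repeat=2))   # 4 equally likely outcomes

    def X(o):                                  # number of heads
        return o.count('H')

    def Y(o):                                  # toss of the first head, 0 if both tails
        return o.index('H') + 1 if 'H' in o else 0

    def P(pred):                               # probability of an event
        return Fraction(sum(pred(o) for o in outcomes), len(outcomes))

    print(P(lambda o: X(o) == 1))                               # P[X=1]      = 1/2
    print(P(lambda o: Y(o) == 1))                               # P[Y=1]      = 1/2
    print(P(lambda o: X(o) == 1 and Y(o) == 1))                 # P[X=1, Y=1] = 1/4
    print(P(lambda o: X(o) == 1 and Y(o) == 1) / P(lambda o: X(o) == 1))  # P[Y=1|X=1] = 1/2
    print(P(lambda o: X(o) == 1 and Y(o) == 1) / P(lambda o: Y(o) == 1))  # P[X=1|Y=1] = 1/2
    # Given Y=2 the only outcome is TH, so X=1; the MAP estimate of X is 1.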

2   Material from text

2.1   Central limit theorem etc

  1. Review: Almost regardless of the distribution of the random variable X, $F_{M_n}$ quickly becomes Gaussian as n increases; n=5 already gives a good approximation (see the simulation sketch at the end of this list).
  2. nice applets:
    1. http://onlinestatbook.com/stat_sim/normal_approx/index.html This tests how good is the normal approximation to the binomial distribution.
    2. http://onlinestatbook.com/stat_sim/sampling_dist/index.html This lets you define a distribution and take repeated samples of a given size. It shows how the means of the samples are distributed. For samples with more than a few observations, the means look fairly normal.
    3. http://www.umd.umich.edu/casl/socsci/econ/StudyAids/JavaStat/CentralLimitTheorem.html This might also be interesting.
  3. Sample problems.
    1. Problem 7.1 on page 402.
    2. Problem 7.22.
    3. Problem 7.25.
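
Here is a minimal Python sketch of the point in item 1 (it assumes numpy and matplotlib are installed): histogram the means of many size-5 samples drawn from a decidedly non-Gaussian distribution, and the result already looks roughly bell-shaped.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    n, trials = 5, 100_000

    # Each row is one sample of n iid Exponential(1) observations.
    samples = rng.exponential(scale=1.0, size=(trials, n))
    means = samples.mean(axis=1)               # one M_n per trial

    plt.hist(means, bins=80, density=True)
    plt.title("Means of samples of size n = 5 from Exponential(1)")
    plt.xlabel("M_n")
    plt.show()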

2.2   Chapter 7, p 359, Sums of Random Variables

The long-term goal of this section is to summarize the information in a large group of random variables; the mean is one way to do that. We will start with the mean and go further.

The next step is to infer the true mean of a large set of variables from a small sample.

2.3   Sums of random variables ctd

  1. Let Z=X+Y.
  2. If X and Y are independent, $f_Z$ is the convolution of $f_X$ and $f_Y$: $$f_Z(z) = (f_X * f_Y)(z)$$ $$f_Z(z) = \int f_X(x) f_Y(z-x) dx$$
  3. Characteristic functions are useful. $$\Phi_X(\omega) = E[e^{j\omega X} ]$$
  4. $\Phi_Z = \Phi_X \Phi_Y$.
  5. This extends to the sum of n independent random variables: if $Z=\sum_i X_i$ then $\Phi_Z(\omega) = \prod_i \Phi_{X_i}(\omega)$.
  6. E.g. Exponential with $\lambda=1$: $\Phi_1(\omega) = 1/(1-j\omega)$ (page 164).
  7. The sum of m iid exponentials with $\lambda=1$ has $\Phi(\omega)= 1/{(1-j\omega)}^m$. That's called an m-Erlang (see the simulation sketch after this list).
  8. Example 2: sum of n iid Bernoullis. The probability generating function is more useful for discrete random variables.
  9. Example 3: sum of n iid Gaussians. $$\Phi_{X_1} = e^{j\mu\omega - \frac{1}{2} \sigma^2 \omega^2}$$ $$\Phi_{Z} = e^{jn\mu\omega - \frac{1}{2}n \sigma^2 \omega^2}$$ I.e., mean and variance sum.
  10. As the number of terms increases, no matter what distribution the individual random variables have (provided their moments are finite), $\Phi$ of the sum starts looking like that of a Gaussian.
  11. The mean $M_n$ of n random variables is itself a random variable.
  12. As $n\rightarrow\infty$ $M_n \rightarrow \mu$.
  13. That's a law of large numbers (LLN).
  14. $E[ M_n ] = \mu$. It's an unbiased estimator.
  15. $VAR[ M_n ] = \sigma^2 / n$, so the sample mean becomes more tightly concentrated around $\mu$ as n grows.
  16. Weak law of large numbers $$\forall \epsilon >0 \lim_{n\rightarrow\infty} P[ |M_n-\mu| < \epsilon] = 1$$
  17. How fast does it happen? Chebyshev's inequality gives a bound, $P[ |M_n-\mu| \ge \epsilon] \le \sigma^2/(n\epsilon^2)$, though it is very conservative.
  18. Strong law of large numbers $$P [ \lim _ {n\rightarrow\infty} M_n = \mu ] =1$$
  19. As $n\rightarrow\infty$, $F_{M_n}$ becomes Gaussian. That's the Central Limit Theorem (CLT).
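
Here is a minimal Python sketch of items 6-8 (it assumes numpy is installed): the histogram of sums of m iid Exponential($\lambda=1$) variables should match the m-Erlang density $f(z)=\frac{z^{m-1}e^{-z}}{(m-1)!}$, exactly as the characteristic-function product predicts.

    import numpy as np
    from math import factorial

    rng = np.random.default_rng(0)
    m, trials = 4, 200_000

    # Sum m iid Exponential(1) random variables, many times over.
    z = rng.exponential(scale=1.0, size=(trials, m)).sum(axis=1)

    # Compare an empirical histogram with the m-Erlang pdf at the bin centers.
    hist, edges = np.histogram(z, bins=60, density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    erlang = centers**(m - 1) * np.exp(-centers) / factorial(m - 1)
    print(np.abs(hist - erlang).max())         # small; only sampling noise remains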

2.4   Chapter 8, Statistics

  1. We have a population. (E.g., voters in next election, who will vote Democrat or Republican).

  2. We don't know the population mean. (E.g., fraction of voters who will vote Democrat).

  3. We take several samples (observations). From them we want to estimate the population mean and standard deviation. (Ask 1000 potential voters; 520 say they will vote Democrat. Sample mean is .52)

  4. We want error bounds on our estimates. (.52 plus or minus .04, 95 times out of 100)

  5. Another application: testing whether 2 populations have the same mean. (Is this batch of Guinness as good as the last one?)

  6. Observations cost money, so we want to do as few as possible.

  7. This gets beyond this course, but the biggest problems may be non-mathematical ones. E.g., how do you pick a random likely voter? In the past, phone books were used; in a famous 1936 presidential poll, that biased the sample against poorer people, who voted for Roosevelt.

  8. In probability, we know the parameters (e.g., mean and standard deviation) of a distribution and use them to compute the probability of some event.

    E.g., if we toss a fair coin 4 times, what's the probability of exactly 4 heads? Answer: 1/16.

  9. In statistics we do not know all the parameters, though we usually know what type the distribution is, e.g., normal. (We often know the standard deviation.)

    1. We make observations about some members of the distribution, i.e., draw some samples.
    2. From them we estimate the unknown parameters.
    3. We often also compute a confidence interval on that estimate.
    4. E.g., we toss an unknown coin 100 times and see 60 heads. A good estimate for the probability of that coin coming up heads is 0.6.
  10. Some estimators are better than others, though that gets beyond this course.

    1. Suppose I want to estimate the average height of an RPI student by measuring the heights of N random students.
    2. The mean of the highest and lowest heights in my sample (the midrange) would converge to the population mean as N increased.
    3. However, the median of my sample would converge faster. Technically, the variance of the sample median is smaller than the variance of the sample midrange.
    4. The mean of my whole sample would converge the fastest. Technically, for a normal population the variance of the sample mean is smaller than the variance of any other unbiased estimator of the population mean. That's why we use it.
    5. However perhaps the population's distribution is not normal. Then one of the other estimators might be better. It would be more robust.
  11. (Enrichment) How to tell if the population is normal? We can do various plots of the observations and look. We can compute the probability that the observations would be this uneven if the population were normal.

  12. An estimator may be biased. E.g., suppose the distribution is U[0,b] for unknown b and we take a sample of size n. The maximum of the sample is a biased estimator of b: its mean is $\frac{n}{n+1}b$, though it converges to b as n increases.

  13. Example 8.2, page 413: One-tailed probability. This is the probability that the mean of our sample is at least so far above the population mean. $$\alpha = P[\overline{X_n}-\mu > c] = Q\left( \frac{c}{\sigma_x / \sqrt{n} } \right)$$ Q is defined on page 169: $$Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{t^2}{2}} dt$$ (See the numerical check after this list.)

  14. Application: You sample n=100 students' verbal SAT scores, and see $ \overline{X} = 550$. You know that $\sigma=100$. If $\mu = 525$, what is the probability that $\overline{X_n} > 550$ ?

    Answer: $Q\left(\frac{550-525}{100/\sqrt{100}}\right) = Q(2.5) \approx 0.006$

  15. This means that if we take 1000 random samples of students, each with 100 students, and measure each sample's mean, then, on average, 6 of those 1000 samples will have a mean over 550.

  16. This is often worded as "the probability that the population mean is under 525 is 0.006", which is a different statement. The problem with saying that is that it presumes some probability distribution for the population mean.

  17. The formula also works for the other tail, computing the probability that our sample mean is at least so far below the population mean.

  18. The 2-tail probability is the probability that our sample mean is at least this far away from the population mean in either direction. It is twice the 1-tail probability.

  19. All this also works in reverse: given a desired probability $\alpha$, you can solve for the cutoff c.
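
Here is a minimal Python numerical check of the SAT example in items 13-15 (it assumes numpy is installed; the Q function is coded directly from its definition rather than read from a table, and the simulation draws from a normal population for simplicity):

    import numpy as np
    from math import erf, sqrt

    def Q(x):
        """Standard normal tail probability Q(x) = P[Z > x]."""
        return 0.5 * (1 - erf(x / sqrt(2)))

    mu, sigma, n, c = 525, 100, 100, 25        # c = 550 - 525
    alpha = Q(c / (sigma / sqrt(n)))
    print(alpha)                               # about 0.006

    # Simulation: fraction of size-100 samples whose mean exceeds 550 when mu = 525.
    rng = np.random.default_rng(0)
    sample_means = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)
    print((sample_means > 550).mean())         # also about 0.006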