Engineering Probability Class 24 Thu 2019-04-11

2   Material from text

2.1   Chapter 7, p 359, Sums of Random Variables

The long-term goal of this section is to summarize the information in a large group of random variables. The mean is one way; we will start with that and go further.

The next step is to infer the true mean of a large set of variables from a small sample.

2.2   Sums of random variables ctd

  1. Let Z=X+Y, where X and Y are independent.

  2. $f_Z$ is convolution of $f_X$ and $f_Y$: $$f_Z(z) = (f_X * f_Y)(z)$$ $$f_Z(z) = \int f_X(x) f_Y(z-x) dx$$

  3. Characteristic functions are useful. They are covered in Section 4.7.1 on page 184.

    $$\Phi_X(\omega) = E[e^{j\omega X} ]$$

  4. $\Phi_Z = \Phi_X \Phi_Y$.

  5. This extends to the sum of n independent random variables: if $Z=\sum_i X_i$ then $\Phi_Z (\omega) = \prod_i \Phi_{X_i} (\omega)$

  6. E.g. Exponential with $\lambda=1$: $\Phi_1(\omega) = 1/(1-j\omega)$ (page 164).

  7. The sum of m iid Exponential(1) random variables has $\Phi(\omega)= 1/{(1-j\omega)}^m$. That's called an m-Erlang.

  8. Example 2: sum of n iid Bernoulli random variables (the sum is binomial). For discrete random variables, the probability generating function is more useful.

  9. Example 3: sum of n iid Gaussians. $$\Phi_{X_1} = e^{j\mu\omega - \frac{1}{2} \sigma^2 \omega^2}$$ $$\Phi_{Z} = e^{jn\mu\omega - \frac{1}{2}n \sigma^2 \omega^2}$$ I.e., mean and variance sum.

  10. As the number of terms increases, no matter what the distribution of the individual random variables is (provided that its moments are finite), $\Phi$ for the sum starts looking like that of a Gaussian.

  11. The sample mean $M_n = \frac{1}{n}\sum_{i=1}^n X_i$ of n random variables is itself a random variable.

  12. As $n\rightarrow\infty$ $M_n \rightarrow \mu$.

  13. That's a law of large numbers (LLN).

  14. $E[ M_n ] = \mu$. It's an unbiased estimator.

  15. $VAR[ M_n ] = \sigma^2 / n$; the variance of the sample mean shrinks as n grows.

  16. Weak law of large numbers $$\forall \epsilon >0 \lim_{n\rightarrow\infty} P[ |M_n-\mu| < \epsilon] = 1$$

  17. How fast does it happen? We can use Chebyshev, though that is very conservative.

  18. Strong law of large numbers $$P [ \lim _ {n\rightarrow\infty} M_n = \mu ] =1$$

  19. As $n\rightarrow\infty$, the CDF of the standardized mean $(M_n-\mu)/(\sigma/\sqrt{n})$ becomes Gaussian. That's the Central Limit Theorem (CLT). (See the simulation sketch after this list.)
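
To make the law of large numbers and the CLT concrete, here is a minimal simulation sketch in Python with numpy (my own illustration, not from the text). It draws many samples of n iid Exponential(1) variables, checks that $E[M_n]\approx\mu$ and $VAR[M_n]\approx\sigma^2/n$, and checks that the standardized sample mean behaves like a standard Gaussian.

```python
# Minimal sketch: LLN and CLT for iid Exponential(1) variables (mu = sigma = 1).
import numpy as np

rng = np.random.default_rng(0)

n = 100          # size of each sample
trials = 10_000  # number of independent samples
mu, sigma = 1.0, 1.0

# Each row is one sample of n iid Exponential(1) variables.
x = rng.exponential(scale=1.0, size=(trials, n))
m = x.mean(axis=1)                     # M_n for each trial

print("E[M_n]   approx", m.mean())     # should be close to mu = 1
print("VAR[M_n] approx", m.var())      # should be close to sigma^2/n = 0.01

# CLT: the standardized mean is approximately N(0,1),
# so about 95% of the z values should fall within +/- 1.96.
z = (m - mu) / (sigma / np.sqrt(n))
print("P[|Z| < 1.96] approx", np.mean(np.abs(z) < 1.96))
```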

3   Counterintuitive things in statistics

Statistics has some surprising results that seem impossible at first glance. Here are some examples.

  1. Average income can increase faster in a whole country than in any part of the country.

    1. Consider a country with two parts: east and west.
    2. Each part has 100 people.
    3. Each person in the west makes \$100 per year; each person in the east \$200.
    4. The total income in the west is \$10K, in the east \$20K, and in the whole country \$30K.
    5. The average income in the west is \$100, in the east \$200, and in the whole country \$150.
    6. Assume that next year nothing changes except that one westerner moves east and gets an average eastern job, so he now makes \$200 instead of \$100.
    7. The west now has 99 people @ \$100; its average income didn't change.
    8. The east now has 101 people @ \$200; its average income didn't change.
    9. The whole country's income is \$30100 for an average of \$150.50; that went up.
  2. College acceptance rate surprise.

    1. Imagine that we have two groups of people: Albanians and Bostonians.

    2. They're applying to two programs at the university: Engineering and Humanities.

    3. Here are the numbers. The fractions are accepted/applied.

      city-major    Engin   Human   Total
      Albanians     11/15   2/5     13/20
      Bostonians    4/5     7/15    11/20
      Total         15/20   9/20    24/40

      E.g., 15 Albanians applied to Engin; 11 were accepted.

    4. Note that in Engineering, a smaller fraction of Albanian applicants were accepted than of Bostonian applicants.

    5. Ditto in Humanities.

    6. However, overall a larger fraction of Albanian applicants were accepted than of Bostonian applicants. This reversal is known as Simpson's paradox. (See the sketch after this list.)

  3. I could go on.
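
Here is a short Python sketch (my own, using only the numbers in the table above) that recomputes the acceptance rates and shows the reversal: Albanians do worse in each program separately, yet better overall.

```python
# Acceptance-rate reversal (Simpson's paradox) with the numbers from the table above.
# Each entry is (accepted, applied).
data = {
    ("Albanians", "Engin"): (11, 15),
    ("Albanians", "Human"): (2, 5),
    ("Bostonians", "Engin"): (4, 5),
    ("Bostonians", "Human"): (7, 15),
}

for city in ("Albanians", "Bostonians"):
    total_acc = total_app = 0
    for major in ("Engin", "Human"):
        acc, app = data[(city, major)]
        total_acc += acc
        total_app += app
        print(f"{city:10s} {major}: {acc}/{app} = {acc/app:.2f}")
    print(f"{city:10s} Total: {total_acc}/{total_app} = {total_acc/total_app:.2f}")
```

The per-program rates are 0.73 vs. 0.80 and 0.40 vs. 0.47 (Albanians lower in both), but the overall rates are 0.65 vs. 0.55 (Albanians higher), because the two groups apply to the two programs in different proportions.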

4   Relevant Xkcd comics

  1. Meteorologist
  2. Significant
  3. P-Values
  4. Correlation
  5. Linear Regression
  6. Cell Phones
  7. Frequentists vs. Bayesians
  8. Seashell
  9. Conditional Risk
  10. Null Hypothesis

4.1   Chapter 8, Statistics

  1. We have a population. (E.g., voters in next election, who will vote Democrat or Republican).

  2. We don't know the population mean. (E.g., fraction of voters who will vote Democrat).

  3. We take several samples (observations). From them we want to estimate the population mean and standard deviation. (Ask 1000 potential voters; 520 say they will vote Democrat. Sample mean is .52)

  4. We want error bounds on our estimates. (.52 plus or minus .04, 95 times out of 100)

  5. Another application: testing whether 2 populations have the same mean. (Is this batch of Guinness as good as the last one?)

  6. Observations cost money, so we want to do as few as possible.

  7. This gets beyond this course, but the biggest problems may be non-mathematical ones. E.g., how do you pick a random likely voter? In the past, phone books were used; in a famous 1936 Presidential poll, that biased the sample against poorer people, who largely voted for Roosevelt.

  8. In probability, we know the parameters (e.g., mean and standard deviation) of a distribution and use them to compute the probability of some event.

    E.g., if we toss a fair coin 4 times what's the probability of exactly 4 heads? Answer: 1/16.

  9. In statistics we do not know all the parameters, though we usually know what type the distribution is, e.g., normal. (We often know the standard deviation.)

    1. We make observations about some members of the distribution, i.e., draw some samples.
    2. From them we estimate the unknown parameters.
    3. We often also compute a confidence interval on that estimate.
    4. E.g., we toss an unknown coin 100 times and see 60 heads. A good estimate for the probability of that coin coming up heads is 0.6.
  10. Some estimators are better than others, though that gets beyond this course.

    1. Suppose I want to estimate the average height of an RPI student by measuring the heights of N random students.
    2. The mean of the highest and lowest heights of my N students would converge to the population mean as N increased.
    3. However the median of my sample would converge faster. Technically, the variance of the sample median is smaller than the variance of the sample hi-lo mean.
    4. The mean of my whole sample would converge the fastest. Technically, for a normal population the variance of the sample mean is smaller than the variance of any other unbiased estimator of the population mean. That's why we use it.
    5. However perhaps the population's distribution is not normal. Then one of the other estimators might be better. It would be more robust.
  11. (Enrichment) How to tell if the population is normal? We can do various plots of the observations and look. We can compute the probability that the observations would be this uneven if the population were normal.

  12. An estimator may be biased. E.g., suppose the distribution is U[0,b] for unknown b, and we take a sample of size n. The sample maximum has mean $\frac{n}{n+1}b$, so it is a biased estimator of b, though it converges to b as n increases.

  13. Example 8.2, page 413: One-tailed probability. This is the probability that the mean of our sample is at least c above the population mean: $$\alpha = P[\overline{X_n}-\mu > c] = Q\left( \frac{c}{\sigma_x / \sqrt{n} } \right)$$ Q is defined on page 169: $$Q(x) = \int_x^ { \infty} \frac{1}{\sqrt{2\pi} } e^{-\frac{t^2}{2} } dt$$

  14. Application: You sample n=100 students' verbal SAT scores, and see $ \overline{X} = 550$. You know that $\sigma=100$. If $\mu = 525$, what is the probability that $\overline{X_n} > 550$ ?

    Answer: $Q(2.5) \approx 0.006$. (See the numerical sketch after this list.)

  15. This means that if we take 1000 random samples of students, each with 100 students, and measure each sample's mean, then, on average, about 6 of those 1000 sample means will be over 550.

  16. This is often worded as "the probability that the population mean is under 525 is 0.006", which is a different statement. The problem with saying that is that it presumes some probability distribution for the population mean.

  17. The formula also works for the other tail, computing the probability that our sample mean is at least so far below the population mean.

  18. The 2-tail probability is the probability that our sample mean is at least this far away from the population mean in either direction. It is twice the 1-tail probability.

  19. All this also works when you know the probability and want to know c, the cutoff.
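
As a numerical check of the SAT example above, here is a small Python sketch. It uses scipy.stats.norm for the Q function (the choice of scipy is mine, not the text's); Q(z) is the standard normal upper-tail probability, i.e., norm.sf(z).

```python
# One-tailed probability for the SAT example: n = 100, sigma = 100, mu = 525, observed mean 550.
from math import sqrt
from scipy.stats import norm

n, sigma = 100, 100
mu, xbar = 525, 550

c = xbar - mu                  # 25 points above the assumed population mean
z = c / (sigma / sqrt(n))      # standardize by the standard error sigma/sqrt(n) = 10
alpha = norm.sf(z)             # Q(z) = P[Z > z] for Z ~ N(0,1)

print(z, alpha)                # 2.5, about 0.0062
```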

4.2   Hypothesis testing

  1. Say we want to test whether the average height of an RPI student (called the population) is 2m.
  2. We assume that the distribution is Gaussian (normal) and that the standard deviation of heights is, say, 0.2m.
  3. However we don't know the mean.
  4. We do an experiment and measure the heights of n=100 random students. Their mean height is, say, 1.9m.
  5. The question on the table is, is the population mean 2m?
  6. This is different from the earlier question that we analyzed, which was this: What is the most likely population mean? (Answer: 1.9m.)
  7. Now we have a hypothesis (that the population mean is 2m) that we're testing.
  8. The standard way that this is handled is as follows.
  9. Define a null hypothesis, called H0, that the population mean is 2m.
  10. Define an alternate hypothesis, called HA, that the population mean is not 2m.
  11. Note that we observed our sample mean to be $0.5 \sigma$ below the population mean, if H0 is true.
  12. Each time we rerun the experiment (measure 100 students) we'll observe a different number.
  13. We compute the probability that, if H0 is true, our sample mean would be this far from 2m.
  14. Depending on what our underlying model of students is, we might use a 1-tail or a 2-tail probability.
  15. Perhaps we think that the population mean might be less than 2m but it's not going to be more. Then a 1-tail probability makes sense.
  16. That is, our assumptions affect the results.
  17. That probability is Q(5), which is very small: the standard error is $\sigma/\sqrt{n} = 0.2/10 = 0.02$, and the sample mean is $0.1/0.02 = 5$ standard errors below 2m. (See the sketch after this list.)
  18. Therefore we reject H0 and accept HA.
  19. We make a type-1 error if we reject H0 and it was really true. See http://en.wikipedia.org/wiki/Type_I_and_type_II_errors
  20. We make a type-2 error if we accept H0 and it was really false.
  21. These two errors trade off: by reducing the probability of one we increase the probability of the other, for a given sample size.
  22. E.g. in a criminal trial we prefer that a guilty person go free to having an innocent person convicted.
  23. Rejecting H0 says nothing about what the population mean really is, just that it's not likely 2m.
  24. (Enrichment) Random sampling is hard. The US government got it wrong here:
    http://politics.slashdot.org/story/11/05/13/2249256/Algorithm-Glitch-Voids-Outcome-of-US-Green-Card-Lottery
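
For the height example in this section, the same kind of calculation can be scripted. This Python sketch (again using scipy.stats.norm, my choice of tool) reproduces the Q(5) figure and also shows the corresponding 2-tail probability.

```python
# z-test for the height example: H0 says mu = 2.0 m; sigma = 0.2 m is assumed known.
from math import sqrt
from scipy.stats import norm

n, sigma = 100, 0.2
mu0, xbar = 2.0, 1.9

z = (xbar - mu0) / (sigma / sqrt(n))   # -5 standard errors
p_one_tail = norm.sf(abs(z))           # Q(5), about 2.9e-7
p_two_tail = 2 * p_one_tail

print(z, p_one_tail, p_two_tail)
```

Since Q(5) is far below any conventional threshold, we reject H0, exactly as in the list above.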