Engineering Probability Class 26 Mon 2018-04-23

1   Grades

1.1   Computation

  1. Start with a total of 0; the following steps accumulate into it. (A code sketch of the whole computation follows this list.)

  2. Normalize each homework to 100 points.

    Homeworks that have not yet been graded (that's 9 and up) count for 0.

  3. Sum top 10, multiply result by 0.02, and add into total.

  4. Normalize each exam to 30 points.

  5. Add top 2 into total.

  6. Take the number of sessions in which at least one question was answered.

  7. Divide by the total number of sessions minus 2, to help students who missed up to 2 classes.

  8. Normalize that to 10 points and add into total.

  9. Piazza:

    1. Divide the semester into 3 parts: up to first test, from then to last class, and after.
    2. Require two contributions for first part, three for second, and one for last.
    3. Add up the number of contributions (max: 6), normalize to 10 points, and add to total.
  10. Add the number of knowitall points to total.

  11. Convert total to a letter grade per the syllabus.

  12. Upload total and letter grades to LMS.
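
A minimal sketch of the computation above in Python (the score lists here are made-up placeholders, not real gradebook data):

  # Hypothetical inputs standing in for the real gradebook columns.
  hw = [95, 80, 100, 70, 88, 92, 60, 85, 78, 90, 0, 0]   # homeworks, each normalized to 100
  exams = [25, 27, 22]                                    # exams, each normalized to 30
  answered, sessions = 20, 26                             # iclicker sessions answered / held
  piazza = [2, 3, 1]                                      # contributions in the 3 parts
  knowitall = 1

  total = 0.0
  total += sum(sorted(hw, reverse=True)[:10]) * 0.02      # top 10 homeworks -> up to 20 points
  total += sum(sorted(exams, reverse=True)[:2])           # top 2 exams -> up to 60 points
  total += min(answered / (sessions - 2), 1) * 10         # iclicker -> up to 10 points (capped here)
  total += min(sum(piazza), 6) / 6 * 10                   # piazza -> up to 10 points
  total += knowitall                                      # knowitall points
  print(total)                                            # then convert to a letter per the syllabus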

1.2   Notes

  1. This is guaranteed; your grade cannot be lower (absent detected cheating).
  2. You can compute how the latest homeworks would raise it.

1.3   LMS

  1. I uploaded 5 columns to LMS.

  2. There are updated iclicker, piazza, and knowitall numbers.

    They should include all updates.

  3. Your total numerical grade is in Total-423.

  4. Your letter grade is in Grade-423.

  5. Ignore other columns with names like total. They are wrong.

2   Iclicker questions

  1. X and Y are two uniform r.v. on the interval [0,1]. X and Y are independent. Z=X+Y. What is E[Z]?
    1. 0
    2. 1/2
    3. 2/3
  2. Now let W=max(X,Y). What is E[W]?
    1. 0
    2. 1/2
    3. 2/3

3   Material from text

3.1   Section 6.5, page 332: Estimation of random variables

  1. Assume that we want to know X but can only see Y, which depends on X.

  2. This is a generalization of our long-running noisy communication channel example. We'll do things a little more precisely now.

  3. Another application would be to estimate tomorrow's price of GOOG (X) given the prices to date (Y).

  4. Sometimes, but not always, we have a prior probability for X.

  5. For the communication channel we do, for GOOG, we don't.

  6. If we do, it's a ''maximum a posteriori estimator''.

  7. If we don't, it's a ''maximum likelihood estimator''. We effectively assume that the prior probability of X is uniform, even though that may not completely make sense.

  8. You toss a fair coin 3 times. X is the number of heads, from 0 to 3. Y is the position of the 1st head, from 0 to 3. If there are no heads, we'll say that the first head's position is 0.

    (X,Y) p(X,Y)
    (0,0) 1/8
    (1,1) 1/8
    (1,2) 1/8
    (1,3) 1/8
    (2,1) 2/8
    (2,2) 1/8
    (3,1) 1/8

    E.g., 1 head can occur 3 ways (out of 8): HTT, THT, TTH. In exactly one of those ways (HTT), the 1st (and only) head is in position 1, so p(1,1)=1/8.

  9. Conditional probabilities:

    p(x|y)           y=0   y=1   y=2    y=3
    x=0              1     0     0      0
    x=1              0     1/4   1/2    1
    x=2              0     1/2   1/2    0
    x=3              0     1/4   0      0

    $g_{MAP}(y)$     0     2     1 or 2 1
    $P_{error}(y)$   0     1/2   1/2    0
    p(y)             1/8   1/2   1/4    1/8

    The total probability of error is 3/8.
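
    A short Python check of these two tables (a sketch; it just enumerates the 8 equally likely toss sequences):

      from itertools import product
      from collections import defaultdict
      from fractions import Fraction

      pXY = defaultdict(Fraction)                    # joint pmf p(x,y)
      for seq in product('HT', repeat=3):
          x = seq.count('H')                         # number of heads
          y = seq.index('H') + 1 if x > 0 else 0     # position of 1st head (0 if none)
          pXY[(x, y)] += Fraction(1, 8)

      pY = defaultdict(Fraction)                     # marginal p(y)
      for (x, y), p in pXY.items():
          pY[y] += p

      perror = Fraction(0)
      for y in sorted(pY):
          cond = {x: pXY[(x, y)] / pY[y] for x in range(4) if pXY[(x, y)] > 0}
          xmap = max(cond, key=cond.get)             # a MAP estimate (ties broken arbitrarily)
          perror += (1 - cond[xmap]) * pY[y]
          print(y, cond, 'MAP:', xmap)
      print('total P[error] =', perror)              # 3/8, as above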

  10. We observe Y and want to guess X from Y. E.g., If we observe $$\small y= \begin{pmatrix}0\\1\\2\\3\end{pmatrix} \text{then } x= \begin{pmatrix}0\\ 2 \text{ most likely} \\ 1, 2 \text{ equally likely} \\ 1 \end{pmatrix}$$

  11. There are different formulae. The above one was the MAP, maximum a posteriori probability.

    $$g_{\text{MAP}} (y) = \arg\max_x \, p_X(x|y) \text{ or } f_X(x|y)$$

    That is, the value of $x$ that maximizes $p_X(x|y)$.

  12. What if we don't know p(x|y)? If we know p(y|x), we can use Bayes. We might measure p(y|x) experimentally, e.g., by sending many messages over the channel.

  13. Bayes requires p(x). What if we don't know even that? E.g. we don't know the probability of the different possible transmitted messages.

  14. Then use the maximum likelihood estimator, ML. $$g_{\text{ML}} (y) = \arg\max_x \, p_Y(y|x) \text{ or } f_Y(y|x)$$
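
    For comparison, here is a sketch that applies both rules to the coin example above, starting from its joint pmf (ML maximizes p(y|x) and ignores the prior; it disagrees with MAP at y=1):

      from fractions import Fraction as F

      pXY = {(0,0): F(1,8), (1,1): F(1,8), (1,2): F(1,8), (1,3): F(1,8),
             (2,1): F(2,8), (2,2): F(1,8), (3,1): F(1,8)}                # table from above
      pX = {x: sum(p for (xx, y), p in pXY.items() if xx == x) for x in range(4)}
      pY = {y: sum(p for (x, yy), p in pXY.items() if yy == y) for y in range(4)}

      for y in range(4):
          post = {x: pXY.get((x, y), F(0)) / pY[y] for x in range(4)}    # p(x|y)
          like = {x: pXY.get((x, y), F(0)) / pX[x] for x in range(4)}    # p(y|x)
          print(y, 'MAP:', max(post, key=post.get), 'ML:', max(like, key=like.get))
      # At y=1, MAP picks x=2 (posterior 1/2) but ML picks x=3 (since p(y=1|x=3)=1).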

  15. There are other estimators for different applications. E.g., regression using least squares might attempt to predict a graduate's QPA from his/her entering SAT scores. At Saratoga in August we might attempt to predict a horse's chance of winning a race from its speed in previous races. Some years ago, an Engineering Assoc Dean would do that each summer.

  16. Historically, IMO, some of the techniques, like least squares and logistic regression, have been used more because they're computationally easy than because they're logically justified.

3.2   Central limit theorem etc

  1. Review: Almost no matter what distribution the random variable X has, $F_{M_n}$ quickly becomes Gaussian as n increases. n=5 already gives a good approximation. (A simulation sketch follows this list.)
  2. nice applets:
    1. http://onlinestatbook.com/stat_sim/normal_approx/index.html This tests how good is the normal approximation to the binomial distribution.
    2. http://onlinestatbook.com/stat_sim/sampling_dist/index.html This lets you define a distribution, and take repeated samples of a given size. It shows how the means of the samples are distributed. For samples with more than a few observations, the means look fairly normal.
    3. http://www.umd.umich.edu/casl/socsci/econ/StudyAids/JavaStat/CentralLimitTheorem.html This might also be interesting.
  3. Sample problems.
    1. Problem 7.1 on page 402.
    2. Problem 7.22.
    3. Problem 7.25.
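
A quick numerical check of the review point in item 1 above (a sketch in Python: even with n=5, tail probabilities of the mean of uniforms are close to the Gaussian approximation):

  import numpy as np
  from math import erfc, sqrt

  rng = np.random.default_rng(0)
  n, trials = 5, 200_000
  M = rng.uniform(0, 1, size=(trials, n)).mean(axis=1)     # means of n=5 uniforms

  mu, sigma = 0.5, sqrt((1 / 12) / n)                      # mean and std dev of M_n
  for c in (0.55, 0.60, 0.65):
      empirical = (M > c).mean()
      gaussian = 0.5 * erfc((c - mu) / (sigma * sqrt(2)))  # Q((c - mu) / sigma)
      print(c, round(empirical, 4), round(gaussian, 4))
  # The two columns agree closely, even though n is only 5.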

3.3   Chapter 7, p 359, Sums of Random Variables

The long-term goal of this section is to summarize information from a large group of random variables. E.g., the mean is one way to do that. We will start with that and go farther.

The next step is to infer the true mean of a large set of variables from a small sample.

3.4   Sums of random variables ctd

  1. Let Z=X+Y.
  2. $f_Z$ is convolution of $f_X$ and $f_Y$: $$f_Z(z) = (f_X * f_Y)(z)$$ $$f_Z(z) = \int f_X(x) f_Y(z-x) dx$$
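    For example, numerically convolving two U[0,1] densities gives the triangular density of Z (a sketch using numpy):

      import numpy as np

      dx = 0.001
      x = np.arange(0.0, 1.0, dx)
      fX = np.ones_like(x)               # U[0,1] density
      fY = np.ones_like(x)

      fZ = np.convolve(fX, fY) * dx      # discretized convolution integral
      for z in (0.5, 1.0, 1.5):
          print(z, round(fZ[round(z / dx)], 3))   # approximately 0.5, 1.0, 0.5 (triangular)
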
  3. Characteristic functions are useful. $$\Phi_X(\omega) = E[e^{j\omega X} ]$$
  4. $\Phi_Z = \Phi_X \Phi_Y$.
  5. This extends to the sum of n random variables: if $Z=\sum_i X_i$ then $\Phi_Z (\omega) = \prod_i \Phi_{X_i} (\omega)$
  6. E.g. Exponential with $\lambda=1$: $\Phi_1(\omega) = 1/(1-j\omega)$ (page 164).
  7. Sum of m exponentials has $\Phi(\omega)= 1/{(1-j\omega)}^m$. That's called an m-Erlang.
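    A simulation sketch: the sum of m unit exponentials has mean and variance m, and its density matches the m-Erlang pdf $x^{m-1}e^{-x}/(m-1)!$:

      import numpy as np
      from math import exp, factorial

      rng = np.random.default_rng(1)
      m, trials = 4, 200_000
      S = rng.exponential(1.0, size=(trials, m)).sum(axis=1)   # sums of m unit exponentials

      print(S.mean(), S.var())                                 # both close to m = 4
      x = 3.0                                                  # compare densities near x = 3
      empirical = ((S > x - 0.05) & (S < x + 0.05)).mean() / 0.1
      print(empirical, x**(m - 1) * exp(-x) / factorial(m - 1))
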
  8. Example 2: sum of n iid Bernoullis; the sum is binomial(n,p). The probability generating function is more useful for discrete random variables.
  9. Example 3: sum of n iid Gaussians. $$\Phi_{X_1} = e^{j\mu\omega - \frac{1}{2} \sigma^2 \omega^2}$$ $$\Phi_{Z} = e^{jn\mu\omega - \frac{1}{2}n \sigma^2 \omega^2}$$ I.e., mean and variance sum.
  10. As the number of terms increases, no matter what distribution the initial random variable has (provided that its moments are finite), the $\Phi$ of the sum starts looking like that of a Gaussian.
  11. The sample mean $M_n = \frac{1}{n}\sum_{i=1}^n X_i$ of n random variables is itself a random variable.
  12. As $n\rightarrow\infty$ $M_n \rightarrow \mu$.
  13. That's a law of large numbers (LLN).
  14. $E[ M_n ] = \mu$. It's an unbiased estimator.
  15. $VAR[ M_n ] = \sigma^2 / n$
  16. Weak law of large numbers $$\forall \epsilon >0 \lim_{n\rightarrow\infty} P[ |M_n-\mu| < \epsilon] = 1$$
  17. How fast does it happen? We can use Chebyshev, though that is very conservative.
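    For example, a sketch comparing the Chebyshev bound $\sigma^2/(n\epsilon^2)$ with the simulated probability, for uniform X (the choices n=100, $\epsilon=0.05$ are arbitrary):

      import numpy as np

      rng = np.random.default_rng(2)
      n, eps, trials = 100, 0.05, 100_000
      M = rng.uniform(0, 1, size=(trials, n)).mean(axis=1)   # sample means; mu=0.5, sigma^2=1/12

      actual = (np.abs(M - 0.5) >= eps).mean()
      chebyshev = (1 / 12) / (n * eps**2)
      print(actual, chebyshev)     # roughly 0.08 versus the much looser bound 0.33
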
  18. Strong law of large numbers $$P [ \lim _ {n\rightarrow\infty} M_n = \mu ] =1$$
  19. As $n\rightarrow\infty$, $F_{M_n}$ becomes Gaussian. That's the Central Limit Theorem (CLT).

3.5   Chapter 8, Statistics

  1. We have a population. (E.g., voters in the next election, who will vote Democrat or Republican.)

  2. We don't know the population mean. (E.g., fraction of voters who will vote Democrat).

  3. We take several samples (observations). From them we want to estimate the population mean and standard deviation. (Ask 1000 potential voters; 520 say they will vote Democrat. Sample mean is .52)

  4. We want error bounds on our estimates. (.52 plus or minus .04, 95 times out of 100)

  5. Another application: testing whether 2 populations have the same mean. (Is this batch of Guinness as good as the last one?)

  6. Observations cost money, so we want to do as few as possible.

  7. This gets beyond this course, but the biggest problems may be non-math ones. E.g., how do you pick a random likely voter? In the past, phone books were used. In a famous 1936 Presidential poll, that biased the sample against poor people, who voted for Roosevelt.

  8. In probability, we know the parameters (e.g., mean and standard deviation) of a distribution and use them to compute the probability of some event.

    E.g., if we toss a fair coin 4 times what's the probability of exactly 4 heads? Answer: 1/16.

  9. In statistics we do not know all the parameters, though we usually know what type the distribution is, e.g., normal. (We often know the standard deviation.)

    1. We make observations about some members of the distribution, i.e., draw some samples.
    2. From them we estimate the unknown parameters.
    3. We often also compute a confidence interval on that estimate.
    4. E.g., we toss an unknown coin 100 times and see 60 heads. A good estimate for the probability of that coin coming up heads is 0.6.
  10. Some estimators are better than others, though that gets beyond this course.

    1. Suppose I want to estimate the average height of an RPI student by measuring the heights of N random students.
    2. The mean of the highest and lowest heights of my N students would converge to the population mean as N increased.
    3. However the median of my sample would converge faster. Technically, the variance of the sample median is smaller than the variance of the sample hi-lo mean.
    4. The mean of my whole sample would converge the fastest. Technically, the variance of the sample mean is smaller than the variance of any other estimator of the population mean. That's why we use it.
    5. However perhaps the population's distribution is not normal. Then one of the other estimators might be better. It would be more robust.
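
    A simulation sketch of this comparison (the normal population of heights here is hypothetical):

      import numpy as np

      rng = np.random.default_rng(3)
      N, trials = 101, 20_000
      samples = rng.normal(1.75, 0.1, size=(trials, N))   # made-up population: mean 1.75 m, sd 0.1 m

      midrange = (samples.max(axis=1) + samples.min(axis=1)) / 2
      median = np.median(samples, axis=1)
      mean = samples.mean(axis=1)
      for name, est in (('midrange', midrange), ('median', median), ('mean', mean)):
          print(name, est.var())
      # For a normal population: var(mean) < var(median) < var(midrange).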
  11. (Enrichment) How to tell if the population is normal? We can do various plots of the observations and look. We can compute the probability that the observations would be this uneven if the population were normal.

  12. An estimator may be biased. E.g., we have a distribution that is U[0,b] for unknown b. We take a sample of size n. The max of the sample has mean $\frac{n}{n+1}b$, though it converges to b as n increases.
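
    A quick simulation sketch (b=1 and n=10 are arbitrary choices):

      import numpy as np

      rng = np.random.default_rng(4)
      b, n, trials = 1.0, 10, 100_000
      maxes = rng.uniform(0, b, size=(trials, n)).max(axis=1)
      print(maxes.mean(), n / (n + 1) * b)   # both about 0.909: biased low, but -> b as n grows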

  13. Example 8.2, page 413: One-tailed probability. This is the probability that the mean of our sample is at least so far above the population mean. $$\alpha = P[\overline{X_n}-\mu > c] = Q\left( \frac{c}{\sigma_x / \sqrt{n} } \right)$$ Q is defined on page 169: $$Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi} } e^{-\frac{t^2}{2} } dt$$

  14. Application: You sample n=100 students' verbal SAT scores, and see $ \overline{X} = 550$. You know that $\sigma=100$. If $\mu = 525$, what is the probability that $\overline{X_n} > 550$ ?

    Answer: $Q\left( \frac{550-525}{100/\sqrt{100}} \right) = Q(2.5) \approx 0.006$
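
    A quick check, using the standard identity $Q(x) = \frac{1}{2}\operatorname{erfc}(x/\sqrt{2})$:

      from math import erfc, sqrt

      def Q(x):                                    # standard normal tail probability
          return 0.5 * erfc(x / sqrt(2))

      print(Q((550 - 525) / (100 / sqrt(100))))    # Q(2.5) = 0.0062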

  15. This means that if we take 1000 random samples of students, each with 100 students, and measure each sample's mean, then, on average, 6 of those 1000 samples will have a mean over 550.

  16. This is often worded as the probability of the population's mean being under 525 is 0.006, which is different. The problem with saying that is that it presumes some probability distribution for the population mean.

  17. The formula also works for the other tail, computing the probability that our sample mean is at least so far below the population mean.

  18. The 2-tail probability is the probability that our sample mean is at least this far away from the population mean in either direction. It is twice the 1-tail probability.

  19. All this also works when you know the probability and want to know c, the cutoff.

3.6   Hypothesis testing

  1. Say we want to test whether the average height of an RPI student (called the population) is 2m.
  2. We assume that the distribution is Gaussian (normal) and that the standard deviation of heights is, say, 0.2m.
  3. However we don't know the mean.
  4. We do an experiment and measure the heights of n=100 random students. Their mean height is, say, 1.9m.
  5. The question on the table is, is the population mean 2m?
  6. This is different from the earlier question that we analyzed, which was this: What is the most likely population mean? (Answer: 1.9m.)
  7. Now we have a hypothesis (that the population mean is 2m) that we're testing.
  8. The standard way that this is handled is as follows.
  9. Define a null hypothesis, called H0, that the population mean is 2m.
  10. Define an alternate hypothesis, called HA, that the population mean is not 2m.
  11. Note that we observed our sample mean to be $0.5 \sigma$ below the population mean, if H0 is true.
  12. Each time we rerun the experiment (measure 100 students) we'll observe a different number.
  13. We compute the probability that, if H0 is true, our sample mean would be this far from 2m.
  14. Depending on what our underlying model of students is, we might use a 1-tail or a 2-tail probability.
  15. Perhaps we think that the population mean might be less than 2m but it's not going to be more. Then a 1-tail distribution makes sense.
  16. That is, our assumptions affect the results.
  17. The probability is $Q\left( \frac{0.1}{0.2/\sqrt{100}} \right) = Q(5) \approx 3 \times 10^{-7}$, which is very small.
  18. Therefore we reject H0 and accept HA.
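    The whole test in a few lines (a sketch using the numbers assumed above):

      from math import erfc, sqrt

      mu0, sigma, n, xbar = 2.0, 0.2, 100, 1.9     # H0 mean, known sd, sample size, observed mean
      z = (xbar - mu0) / (sigma / sqrt(n))         # -5 standard errors
      p_one_tail = 0.5 * erfc(abs(z) / sqrt(2))    # Q(5), about 3e-7
      print(z, p_one_tail)
      # p is far below any usual threshold (e.g., 0.05), so reject H0.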
  19. We make a type-1 error if we reject H0 and it was really true. See http://en.wikipedia.org/wiki/Type_I_and_type_II_errors
  20. We make a type-2 error if we accept H0 and it was really false.
  21. These two errors trade off: by reducing the probability of one we increase the probability of the other, for a given sample size.
  22. E.g. in a criminal trial we prefer that a guilty person go free to having an innocent person convicted.
  23. Rejecting H0 says nothing about what the population mean really is, just that it's not likely 2m.
  24. (Enrichment) Random sampling is hard. The US government got it wrong here:
    http://politics.slashdot.org/story/11/05/13/2249256/Algorithm-Glitch-Voids-Outcome-of-US-Green-Card-Lottery