Engineering Probability Class 29 Thu 2018-05-10

This is not an actual class, but a place to present info about the grading.

1   Final grading notes

I made some grade formula changes so that your final total would not go down.

  1. Keep full points for Piazza at 5, not 6. So, some students could go over full points here, but allowing that seemed better than clipping at 5.
  2. Make full points for iclickers 9, although there were 14 iclicker days.
  3. Something went wrong on the last iclicker day, so everyone got a point, although half the class was absent.
  4. Use these changes and exam 3 (main and conflict) to compute a new total grade, total509 (although it was computed on 5/10).
  5. BTW some students did increase their grades by writing exam 3.
  6. I'd uploaded earlier total grades on 4/23, 5/01, and 5/07 (total423, total501, total507).
  7. The new total is total510 = max(total423, total501, total507, total509).
  8. Use the grade cutoffs in the syllabus.
  9. This gives a course GPA=3.3. That's not so bad for a 2000-level course.
  10. I uploaded total510, grade510, and exam3normalized to LMS.

2   Closing remarks

  1. I enjoyed teaching this, and hope you learned some fun and useful stuff.
  2. I'm available in the future to discuss and give advice on any legal and ethical topics, such as careers or problems you may have.

Engineering Probability Exam 3 - Tues 2018-05-08

Name, RCSID:


Rules:

  1. You have 80 minutes.
  2. You may bring three 2-sided 8.5"x11" papers with notes.
  3. You may bring a calculator.
  4. You may not share material with each other during the exam.
  5. No collaboration or communication (except with the staff) is allowed.
  6. Check that your copy of this test has all nine pages.
  7. Each part of a question is worth 5 points.
  8. You may cross out three question parts, which will not be graded.
  9. When answering a question, don't just state your answer, prove it.

Questions:

  1. You toss two coins. Each comes up heads half of the time. However, for some funny reason, they both come up heads together, or both come up tails together. Intuitively, they are not independent. This question asks you to prove that from the definition of independence.

    
  2. This time, you toss three coins, A, B, and C. These are the probabilities:

    P[TTT] = P[THH] = P[HTH] = P[HHT] = 0

    P[TTH] = P[THT] = P[HTT] = P[HHH] = 1/4

    My notation is that TTH means that coin A is tails, coin B tails, and coin C heads. Etc.

    1. Are the individual coins fair (i.e., heads half the time)?

      
    2. Are coins A and B independent?

      
    3. Are all 3 coins independent?

      
  3. This question is about a continuous probability distribution on 2 variables.

    $$f_{XY}(x,y) = \begin{cases} c x y & \text{ if } (0\le x) \ \& \ (0\le y)\ \& \ (0\le x+y \le 1) \\ 0 & \text{ otherwise}\end{cases}$$

    The nonzero region is the triangle with vertices (0,0), (1,0) and (0,1).

    c is some constant, but I didn't tell you what it is.

    1. What is c?

      
    2. What is $F_{XY}(x,y)$?

      
    3. What is $f_X(x)$?

      
    4. Are X and Y independent?

      
    5. What is $P[X\le Y]$ ?

      
    6. Define a new random variable $Z=X+Y$. What is $F_Z(z)$?

      
    7. What is $E[X]$?

      
    8. What is $COV[X,Y]$?

      
    9. What is $\rho_{X,Y}$?

      
    10. What is $f_Y(y|x)$?

      
    11. What is $E[Y|x]$?

      
  4. Compute $$\int_0^\infty e^{-x^2} dx$$ .

    
  5. What is the legal range for a correlation coefficient?

    
  6. What is the variance of the sum of 100 independent variables, each of which is N(0,1)?

    
  7. You have 10 independent random variables. Each is uniform on [0,1]. What is the expected value of the max?

    
  8. You toss 10 independent fair coins, one after the other.

    1. What is the expected total number of heads, given that the first 5 coins came up heads?

      
    2. What is the probability of the total number of heads being 10, given that the first 5 coins came up heads?

      

End of exam 3, total 90 points (considering that 3 questions aren't graded).

Engineering Probability Exam 3 solution - Tues 2018-05-08

Name, RCSID: W. Randolph Franklin, frankwr

It was OK to give the formulas without working them out. (A numerical sanity check of several of the answers appears after the solutions.)

Rules:

  1. You have 80 minutes.
  2. You may bring three 2-sided 8.5"x11" papers with notes.
  3. You may bring a calculator.
  4. You may not share material with each other during the exam.
  5. No collaboration or communication (except with the staff) is allowed.
  6. Check that your copy of this test has all nine pages.
  7. Each part of a question is worth 5 points.
  8. You may cross out three question parts, which will not be graded.
  9. When answering a question, don't just state your answer, prove it.

Questions:

  1. You toss two coins. Each comes up heads half of the time. However, for some funny reason, they both come up heads together, or both come up tails together. Intuitively, they are not independent. This question asks you to prove that from the definition of independence.

    P[HT] = 0. However P[A=H] = P[B=T] = 1/2, so P[HT] != P[A=H]P[B=T]. That's the definition of not being independent.

  2. This time, you toss three coins, A, B, and C. These are the probabilities:

    P[TTT] = P[THH] = P[HTH] = P[HHT] = 0

    P[TTH] = P[THT] = P[HTT] = P[HHH] = 1/4

    My notation is that TTH means that coin A is tails, coin B tails, and coin C heads. Etc.

    1. Are the individual coins fair (i.e., heads half the time)?

      P[A=H] = 0+0+1/4+1/4 = 1/2 so fair. Ditto B and C.

    2. Are coins A and B independent?

      $P[A=H,B=H] = 1/4 = P[A=H]\,P[B=H]$. Similarly $P[A=H,B=T] = P[A=T,B=H] = P[A=T,B=T] = 1/4$, each equal to the product of the marginals. Yes, A and B are independent.

    3. Are all 3 coins independent?

      P[HHH] = 1/4 != P[A=H]P[B=H]P[C=H] = 1/8. Not independent.

  3. This question is about a continuous probability distribution on 2 variables.

    $$f_{XY}(x,y) = \begin{cases} c x y & \text{ if } (0\le x) \ \& \ (0\le y)\ \& \ (0\le x+y \le 1) \\ 0 & \text{ otherwise}\end{cases}$$

    The nonzero region is the triangle with vertices (0,0), (1,0) and (0,1).

    c is some constant, but I didn't tell you what it is.

    1. What is c?

      $\int_0^1\int_0^{1-x} xy\ dy\ dx = 1/24$ so $c=24$

    2. What is $F_{XY}(x,y)$?

      $$F_{XY}(x,y)=\begin{cases} 0 & \text{ if } x\le0 \cup y\le0 \\ 1 & \text{ if } x\ge 1 \cap y\ge1 \\ 6x^2y^2 & \text{ if } 0\le x \cap 0\le y \cap x+y\le1 \\ \left(\int_0^x\int_0^{1-x} + \int_0^{1-y}\int_{1-x}^y + \int_{1-y}^x\int_{1-x}^{1-x_0}\right) 24\,x_0y_0\, dy_0\,dx_0 & \text{ otherwise}\end{cases}$$

      The last case above splits the nonzero integration region into two rectangles and a triangle.

      It's also acceptable to draw a figure and say something intelligent w/o being explicit about all the details.

    3. What is $f_X(x)$?

      $f_X(x)= \int_0^{1-x}f_{XY}(x,y) dy = 12x(1-x)^2$

      Note that $\int_0^1 f_X(x)\,dx=1$, which is correct.

    4. Are X and Y independent?

      $f_X(x)=12x(1-x)^2,f_Y(y)=12y(1-y)^2,f_X(x)f_Y(y)\ne f_{XY}(x,y)$

      No.

    5. What is $P[X\le Y]$ ?

      $P[X\le Y] = \int_0^{1/2}\int_x^{1-x} f_{XY}(x,y)\, dy\ dx$. However, since $f_{XY}(x,y) = f_{XY}(y,x)$, by symmetry $P[X\le Y]=P[Y\le X]=1/2$ without any integration.

    6. Define a new random variable $Z=X+Y$. What is $F_Z(z)$?

      $f_Z(z) = \int_0^z f_{XY}(x,z-x) dx = 24\int_0^z x(z-x)dx = 4z^3$ for $0\le z\le 1$

      $F_Z(z) = \int_0^z f_Z(t)\,dt = z^4$ for $0\le z\le 1$ (0 below, 1 above)

    7. What is $E[X]$?

      $\int_0^1 xf_X(x)dx = 2/5$

    8. What is $COV[X,Y]$?

      E[XY] = $\int_0^1\int_0^{1-x}xy\,f_{XY}(x,y)\, dy\, dx=8\int_0^1x^2(1-x)^3 dx = 2/15$, $E[X]=E[Y]=2/5$, so COV[X,Y] = E[XY]-E[X]E[Y] = 2/15 - 4/25 = -2/75. You don't have to work through the math.

    9. What is $\rho_{X,Y}$?

      $\sigma_X^2=\sigma_Y^2= E[X^2]-E[X]^2$. Here $E[X^2]=\int_0^1 x^2 f_X(x)\,dx = 1/5$, so $\sigma_X^2=\sigma_Y^2=1/5-4/25=1/25$.

      $\rho_{X,Y}=COV[X,Y]/(\sigma_X\sigma_Y) = \frac{-2/75}{1/25} = -2/3$

    10. What is $f_Y(y|x)$?

      $f_Y(y|x)=f_{XY}(x,y)/f_X(x) = \frac{24xy}{12x(1-x)^2} = \frac{2y}{(1-x)^2}$ for $0\le y\le 1-x$

    11. What is $E[Y|x]$?

      $E[Y|x] = \int_0^{1-x} y\, f_Y(y|x)\, dy = \int_0^{1-x} \frac{2y^2}{(1-x)^2}\, dy = \frac{2(1-x)}{3}$

  4. Compute $$\int_0^\infty e^{-x^2} dx$$ .

    Consider the normal density with $\mu=0$ and $\sigma=1/\sqrt{2}$; it integrates to 1, so $\int_{-\infty}^\infty e^{-x^2} dx = \sigma\sqrt{2\pi} = \sqrt{\pi}$, and $\int_0^\infty e^{-x^2} dx=\sqrt{\pi}/2\approx 0.886$. It was also ok just to use a calculator.

  5. What is the legal range for a correlation coefficient?

    -1 to 1

  6. What is the variance of the sum of 100 independent variables, each of which is N(0,1)?

    100.

  7. You have 10 independent random variables. Each is uniform on [0,1]. What is the expected value of the max?

    Let $W=\max(X_i)$. Then $F_W(w)=w^{10}$, $f_W(w)=10w^9$, and $E[W]=10/11$.

  8. You toss 10 independent fair coins, one after the other.

    1. What is the expected total number of heads, given that the first 5 coins came up heads?

      The last 5 coins do not depend on the first 5. So the expectation is 7.5.

    2. What is the probability of the total number of heads being 10, given that the first 5 coins came up heads?

      1/32

End of exam 3, total 90 points (considering that 3 questions aren't graded).
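The following is a small Monte Carlo sanity check of several of the closed-form answers above (c = 24, E[X] = 2/5, P[X ≤ Y] = 1/2, and the expected max of 10 uniforms = 10/11). It is not part of the exam; the sampling approach, sample sizes, and seed are my own choices.

    # Monte Carlo sanity check of several exam answers (not part of the exam).
    import random

    random.seed(0)
    N = 1_000_000

    norm = ex = p_xley = 0.0
    for _ in range(N):
        x, y = random.random(), random.random()   # uniform on the unit square
        w = 24 * x * y if x + y <= 1 else 0.0     # f_XY(x,y); zero off the triangle
        norm += w                                 # estimates the integral of 24xy over the triangle
        ex += x * w                               # estimates E[X]
        p_xley += w * (x <= y)                    # estimates P[X <= Y]

    print("integral of 24xy over triangle ~", norm / N)   # ~1.0, confirming c = 24
    print("E[X] ~", ex / N)                               # ~0.4
    print("P[X <= Y] ~", p_xley / N)                      # ~0.5

    trials = 100_000
    max10 = sum(max(random.random() for _ in range(10)) for _ in range(trials)) / trials
    print("E[max of 10 uniforms] ~", max10)               # ~10/11 = 0.909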

Engineering Probability Class 28 Mon 2018-04-30

1   Grades

  1. I think I've responded to all grade emails. Please resend any that I overlooked.
  2. Any grade that hasn't been disputed is presumed to be correct.
  3. The conflict exam is Thurs May 10 at 3pm, in a room TBD. It is open only to students with conflicts who wrote me. If you're one of those students, but you don't plan to write it, then please tell me. E.g., a smaller room might then suffice.
  4. We'll try to get updated guaranteed grades uploaded, so you can decide whether to write the final exam.

2   Material from text

2.1   Hypothesis testing

  1. Say we want to test whether the average height of an RPI student (called the population) is 2m.
  2. We assume that the distribution is Gaussian (normal) and that the standard deviation of heights is, say, 0.2m.
  3. However we don't know the mean.
  4. We do an experiment and measure the heights of n=100 random students. Their mean height is, say, 1.9m.
  5. The question on the table is, is the population mean 2m?
  6. This is different from the earlier question that we analyzed, which was this: What is the most likely population mean? (Answer: 1.9m.)
  7. Now we have a hypothesis (that the population mean is 2m) that we're testing.
  8. The standard way that this is handled is as follows.
  9. Define a null hypothesis, called H0, that the population mean is 2m.
  10. Define an alternate hypothesis, called HA, that the population mean is not 2m.
  11. Note that we observed our sample mean to be $0.5 \sigma$ below the population mean, if H0 is true.
  12. Each time we rerun the experiment (measure 100 students) we'll observe a different number.
  13. We compute the probability that, if H0 is true, our sample mean would be this far from 2m.
  14. Depending on what our underlying model of students is, we might use a 1-tail or a 2-tail probability.
  15. Perhaps we think that the population mean might be less than 2m but it's not going to be more. Then a 1-tail distribution makes sense.
  16. That is, our assumptions affect the results.
  17. The standard deviation of the sample mean is $\sigma/\sqrt{n} = 0.2/10 = 0.02$, so 1.9m is 5 standard errors below 2m. The probability is $Q(5)\approx 3\times 10^{-7}$, which is very small. (A short computation follows this list.)
  18. Therefore we reject H0 and accept HA.
  19. We make a type-1 error if we reject H0 and it was really true. See http://en.wikipedia.org/wiki/Type_I_and_type_II_errors
  20. We make a type-2 error if we accept H0 and it was really false.
  21. These two errors trade off: by reducing the probability of one we increase the probability of the other, for a given sample size.
  22. E.g. in a criminal trial we prefer that a guilty person go free to having an innocent person convicted.
  23. Rejecting H0 says nothing about what the population mean really is, just that it's not likely 2m.
  24. Enrichment: Random sampling is hard. The US government got it wrong here: http://politics.slashdot.org/story/11/05/13/2249256/Algorithm-Glitch-Voids-Outcome-of-US-Green-Card-Lottery
  25. Example 8.1 page 412.
  26. Example 8.21 page 442.
  27. Example 8.23.
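Below is a minimal Python computation of the numbers in the height example above; the only formula assumed is the standard-normal tail Q(x) = erfc(x/√2)/2.

    # Height example: sample mean 1.9m, hypothesized mean 2m, sigma = 0.2m, n = 100.
    from math import erfc, sqrt

    mu0, sigma, n, xbar = 2.0, 0.2, 100, 1.9

    def Q(x):                                  # standard-normal tail probability
        return 0.5 * erfc(x / sqrt(2))

    z = (mu0 - xbar) / (sigma / sqrt(n))       # 0.1 / 0.02 = 5 standard errors
    print(z, Q(z))                             # 5.0, ~2.9e-07: reject H0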

3   Iclicker questions

  1. Suppose that RPI students' heights have mean 1.8m and standard deviation 0.2m. (These are fictitious numbers.)

    You measure a sample of 16 students, and compute the sample mean $m$.

    What is E[m]?

    1. 10
    2. .2
    3. .05
    4. 9.8
    5. 2.5
  2. What is STD[m]?

    1. 10
    2. .2
    3. .05
    4. 9.8
    5. 2.5

4   Counterintuitive things in statistics

Statistics has some surprising examples, which would appear to be impossible. Here are some.

  1. Average income can increase faster in a whole country than in any part of the country.

    1. Consider a country with two parts: east and west.
    2. Each part has 100 people.
    3. Each person in the west makes \$100 per year; each person in the east \$200.
    4. The total income in the west is \$10K, in the east \$20K, and in the whole country \$30K.
    5. The average income in the west is \$100, in the east \$200, and in the whole country \$150.
    6. Assume that next year nothing changes except that one westerner moves east and gets an average eastern job, so he now makes \$200 instead of \$100.
    7. The west now has 99 people @ \$100; its average income didn't change.
    8. The east now has 101 people @ \$200; its average income didn't change.
    9. The whole country's income is \$30100 for an average of \$150.50; that went up.
  2. College acceptance rate surprise.

    1. Imagine that we have two groups of people: Albanians and Bostonians.

    2. They're applying to two programs at the university: Engineering and Humanities.

    3. Here are the numbers. The fractions are accepted/applied.

      city \ major   Engin   Human   Total
      Albanians      11/15   2/5     13/20
      Bostonians     4/5     7/15    11/20
      Total          15/20   9/20    24/40

      E.g., 15 Albanians applied to Engin; 11 were accepted.

    4. Note that in Engineering, a smaller fraction of Albanian applicants were accepted than Bostonian applicants. (corrected)

    5. Ditto in Humanities.

    6. However, overall, a larger fraction of Albanian applicants were accepted than Bostonian applicants. (A short numeric check of this reversal follows this list.)

  3. I could go on.
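For the college acceptance example, here is a short Python check of the reversal (Simpson's paradox), using only the numbers in the table above.

    # Acceptance fractions from the table: (accepted, applied).
    albanians  = {"Engin": (11, 15), "Human": (2, 5)}
    bostonians = {"Engin": (4, 5),  "Human": (7, 15)}

    def rate(acc, app):
        return acc / app

    for major in ("Engin", "Human"):
        a, b = albanians[major], bostonians[major]
        print(major, rate(*a), "<", rate(*b))            # Albanians lower in each major

    a_tot = tuple(map(sum, zip(*albanians.values())))    # (13, 20)
    b_tot = tuple(map(sum, zip(*bostonians.values())))   # (11, 20)
    print("Total", rate(*a_tot), ">", rate(*b_tot))      # yet Albanians higher overall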

Engineering Probability Class 27 Thurs 2018-04-26

1   Iclicker questions

  1. Experiment: toss two fair coins, one after the other. Observe two random variables:

    1. X is the number of heads.
    2. Y is the position of the first head (1 or 2), with 0 meaning both coins were tails.

    What is P[X=1]?

    1. 0
    2. 1/4
    3. 1/2
    4. 3/4
    5. 1
  2. What is P[Y=1]?

    1. 0
    2. 1/4
    3. 1/2
    4. 3/4
    5. 1
  3. What is P[Y=1 & X=1]?

    1. 0
    2. 1/4
    3. 1/2
    4. 3/4
    5. 1
  4. What is P[Y=1|X=1]?

    1. 0
    2. 1/4
    3. 1/2
    4. 3/4
    5. 1
  5. What is P[X=1|Y=1]?

    1. 0
    2. 1/4
    3. 1/2
    4. 3/4
    5. 1
  6. What's the MAP estimator for X given Y=2?

    1. 0
    2. 1

2   Material from text

2.1   Central limit theorem etc

  1. Review: Almost no matter what distribution the random variable X is, $F_{M_n}$ quickly becomes Gaussian as n increases. n=5 already gives a good approximation.
  2. nice applets:
    1. http://onlinestatbook.com/stat_sim/normal_approx/index.html This tests how good is the normal approximation to the binomial distribution.
    2. http://onlinestatbook.com/stat_sim/sampling_dist/index.html This lets you define a distribution, and take repeated samples of a given size. It shows how the means of the samples are distributed. For samples with more than a few observations, the means look fairly normal.
    3. http://www.umd.umich.edu/casl/socsci/econ/StudyAids/JavaStat/CentralLimitTheorem.html This might also be interesting.
  3. Sample problems.
    1. Problem 7.1 on page 402.
    2. Problem 7.22.
    3. Problem 7.25.

2.2   Chapter 7, p 359, Sums of Random Variables

The long term goal of this section is to summarize information from a large group of random variables. E.g., the mean is one way. We will start with that, and go farther.

The next step is to infer the true mean of a large set of variables from a small sample.

2.3   Sums of random variables ctd

  1. Let Z=X+Y.
  2. $f_Z$ is convolution of $f_X$ and $f_Y$: $$f_Z(z) = (f_X * f_Y)(z)$$ $$f_Z(z) = \int f_X(x) f_Y(z-x) dx$$
  3. Characteristic functions are useful. $$\Phi_X(\omega) = E[e^{j\omega X} ]$$
  4. $\Phi_Z = \Phi_X \Phi_Y$.
  5. This extends to the sum of n random variables: if $Z=\sum_i X_i$ then $\Phi_Z (\omega) = \Pi_i \Phi_{X_i} (\omega)$
  6. E.g. Exponential with $\lambda=1$: $\Phi_1(\omega) = 1/(1-j\omega)$ (page 164).
  7. Sum of m exponentials has $\Phi(\omega)= 1/{(1-j\omega)}^m$. That's called an m-Erlang.
  8. Example 2: sum of n iid Bernoullis. Probability generating function is more useful for discrete random variables.
  9. Example 3: sum of n iid Gaussians. $$\Phi_{X_1} = e^{j\mu\omega - \frac{1}{2} \sigma^2 \omega^2}$$ $$\Phi_{Z} = e^{jn\mu\omega - \frac{1}{2}n \sigma^2 \omega^2}$$ I.e., mean and variance sum.
  10. As the number of summands increases, no matter what distribution the initial random variable has (provided that its moments are finite), $\Phi$ for the sum starts looking like a Gaussian's.
  11. The mean $M_n$ of n random variables is itself a random variable.
  12. As $n\rightarrow\infty$ $M_n \rightarrow \mu$.
  13. That's a law of large numbers (LLN).
  14. $E[ M_n ] = \mu$. It's an unbiased estimator.
  15. $VAR[ M_n ] = \sigma^2 / n$. (The simulation after this list illustrates this.)
  16. Weak law of large numbers $$\forall \epsilon >0 \lim_{n\rightarrow\infty} P[ |M_n-\mu| < \epsilon] = 1$$
  17. How fast does it happen? We can use Chebyshev, though that is very conservative.
  18. Strong law of large numbers $$P [ \lim _ {n\rightarrow\infty} M_n = \mu ] =1$$
  19. As $n\rightarrow\infty$, $F_{M_n}$ becomes Gaussian. That's the Central Limit Theorem (CLT).
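As a quick illustration of items 11-19 (my own sketch, not from the text): for samples of n i.i.d. uniform variables (μ = 1/2, σ² = 1/12), the sample mean M_n has mean close to μ and variance close to σ²/n, and its histogram looks roughly Gaussian.

    # E[M_n] = mu and VAR[M_n] = sigma^2 / n for the mean of n uniforms.
    import random, statistics

    random.seed(1)
    n, trials = 25, 20_000
    means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]

    print("E[M_n]   ~", statistics.mean(means))        # ~0.5
    print("VAR[M_n] ~", statistics.pvariance(means))   # ~(1/12)/25 = 0.0033
    # A histogram of `means` would already look close to a Gaussian (CLT).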

2.4   Chapter 8, Statistics

  1. We have a population. (E.g., voters in next election, who will vote Democrat or Republican).

  2. We don't know the population mean. (E.g., fraction of voters who will vote Democrat).

  3. We take several samples (observations). From them we want to estimate the population mean and standard deviation. (Ask 1000 potential voters; 520 say they will vote Democrat. Sample mean is .52)

  4. We want error bounds on our estimates. (.52 plus or minus .04, 95 times out of 100)

  5. Another application: testing whether 2 populations have the same mean. (Is this batch of Guinness as good as the last one?)

  6. Observations cost money, so we want to do as few as possible.

  7. This gets beyond this course, but the biggest problems may be non-math ones. E.g., how do you pick a random likely voter? In the past, phone books were used. In a famous 1936 Presidential poll, that method biased the sample against poor people, who voted for Roosevelt.

  8. In probability, we know the parameters (e.g., mean and standard deviation) of a distribution and use them to compute the probability of some event.

    E.g., if we toss a fair coin 4 times what's the probability of exactly 4 heads? Answer: 1/16.

  9. In statistics we do not know all the parameters, though we usually know what type the distribution is, e.g., normal. (We often know the standard deviation.)

    1. We make observations about some members of the distribution, i.e., draw some samples.
    2. From them we estimate the unknown parameters.
    3. We often also compute a confidence interval on that estimate.
    4. E.g., we toss an unknown coin 100 times and see 60 heads. A good estimate for the probability of that coin coming up heads is 0.6.
  10. Some estimators are better than others, though that gets beyond this course.

    1. Suppose I want to estimate the average height of an RPI student by measuring the heights of N random students.
    2. The mean of the highest and lowest heights of my N students would converge to the population mean as N increased.
    3. However the median of my sample would converge faster. Technically, the variance of the sample median is smaller than the variance of the sample hi-lo mean.
    4. The mean of my whole sample would converge the fastest. Technically, the variance of the sample mean is smaller than the variance of any other estimator of the population mean. That's why we use it.
    5. However perhaps the population's distribution is not normal. Then one of the other estimators might be better. It would be more robust.
  11. (Enrichment) How to tell if the population is normal? We can do various plots of the observations and look. We can compute the probability that the observations would be this uneven if the population were normal.

  12. An estimator may be biased. E.g., we have a distribution that is U[0,b] for unknown b, and we take a sample of size n. The max of the sample is a biased estimator of b: its mean is $\frac{n}{n+1}b$, though it converges to b as n increases.

  13. Example 8.2, page 413: One-tailed probability. This is the probability that the mean of our sample is at least this far above the population mean. $$\alpha = P[\overline{X_n}-\mu > c] = Q\left( \frac{c}{\sigma_x / \sqrt{n} } \right)$$ Q is defined on page 169: $$Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{t^2}{2}} dt$$

  14. Application: You sample n=100 students' verbal SAT scores, and see $ \overline{X} = 550$. You know that $\sigma=100$. If $\mu = 525$, what is the probability that $\overline{X_n} > 550$ ?

    Answer: $Q(2.5) \approx 0.006$. (A short computation follows this list.)

  15. This means that if we take 1000 random samples of students, each with 100 students, and measure each sample's mean, then, on average, 6 of those 1000 samples will have a mean over 550.

  16. This is often worded as saying that the probability of the population's mean being under 525 is 0.006, which is different. The problem with saying that is that it presumes some prior probability distribution for the population mean.

  17. The formula also works for the other tail, computing the probability that our sample mean is at least so far below the population mean.

  18. The 2-tail probability is the probability that our sample mean is at least this far away from the population mean in either direction. It is twice the 1-tail probability.

  19. All this also works when you know the probability and want to know c, the cutoff.
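The SAT example in item 14, computed in Python; the only formula assumed is Q(x) = erfc(x/√2)/2 for the standard-normal tail.

    # P[sample mean > 550] when mu = 525, sigma = 100, n = 100.
    from math import erfc, sqrt

    mu, sigma, n, c = 525, 100, 100, 550 - 525

    def Q(x):                                  # standard-normal tail probability
        return 0.5 * erfc(x / sqrt(2))

    alpha = Q(c / (sigma / sqrt(n)))           # Q(2.5)
    print(alpha)                               # ~0.0062: about 6 samples in 1000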

Engineering Probability Homework 11 due Mon 2018-04-30 2359 EST

How to submit

Submit to LMS; see details in syllabus.

Questions

All questions are from the text, starting on page 290.

Each part of a question is worth 5 points.

  1. 5.133 on page 302 (3 parts).
  2. 6.1, p 348. (4 parts)
  3. 6.3 (4 parts).
  4. 6.4 (3 parts).
  5. 6.32, p 352 (2 parts: mean, covariance).

Total: 80 points.

Engineering Probability Class 26 Mon 2018-04-23

1   Grades

1.1   Computation

  1. This will accumulate the total score.

  2. Normalize each homework to 100 points.

    Homeworks that have not yet been graded (that's 9 and up) count for 0.

  3. Sum top 10, multiply result by 0.02, and add into total.

  4. Normalize each exam to 30 points.

  5. Add top 2 into total.

  6. Take the number of sessions in which at least one question was answered.

  7. Divide by the total number of sessions minus 2, to help students who missed up to 2 classes.

  8. Normalize that to 10 points and add into total.

  9. Piazza:

    1. Divide the semester into 3 parts: up to first test, from then to last class, and after.
    2. Require two contributions for first part, three for second, and one for last.
    3. Add up the number of contributions (max: 6), normalize to 10 points, and add to total.
  10. Add the number of knowitall points to total.

  11. Convert total to a letter grade per the syllabus.

  12. Upload total and letter grades to LMS.
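Here is a rough Python sketch of the total-score computation above. The example inputs are made up, and details such as capping the iclicker ratio at 1 and ignoring the per-part Piazza requirement are my simplifications, not the actual gradebook formula.

    # Hypothetical sketch of the total-score formula; all inputs are made up.
    hw     = [95, 80, 100, 70, 88, 92, 60, 75, 85, 90, 0]   # homeworks, each already out of 100
    exams  = [27, 22, 18]                                    # each exam normalized to 30 points
    iclicker_sessions_answered = 20
    total_sessions             = 24
    piazza_contributions       = 5                           # capped at 6
    knowitall                  = 1

    total  = 0.02 * sum(sorted(hw)[-10:])                    # top 10 homeworks -> up to 20 points
    total += sum(sorted(exams)[-2:])                         # top 2 exams -> up to 60 points
    total += 10 * min(1.0, iclicker_sessions_answered / (total_sessions - 2))
    total += 10 * min(piazza_contributions, 6) / 6
    total += knowitall
    print(total)                                             # compare to the syllabus cutoffs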

1.2   Notes

  1. This is guaranteed; your grade cannot be lower (absent detected cheating).
  2. You can compute how the latest homeworks would raise it.

1.3   LMS

  1. I uploaded 5 columns to LMS.

  2. There are updated iclicker, piazza, and knowitall numbers.

    They should include all updates.

  3. Your total numerical grade is in Total-423.

  4. Your letter grade is in Grade-423.

  5. Ignore other columns with names like total. They are wrong.

2   Iclicker questions

  1. X and Y are two uniform r.v. on the interval [0,1]. X and Y are independent. Z=X+Y. What is E[Z]?
    1. 0
    2. 1/2
    3. 2/3
  2. Now let W=max(X,Y). What is E[W]?
    1. 0
    2. 1/2
    3. 2/3

3   Material from text

3.1   Section 6.5, page 332: Estimation of random variables

  1. Assume that we want to know X but can only see Y, which depends on X.

  2. This is a generalization of our long-running noisy communication channel example. We'll do things a little more precisely now.

  3. Another application would be to estimate tomorrow's price of GOOG (X) given the prices to date (Y).

  4. Sometimes, but not always, we have a prior probability for X.

  5. For the communication channel we do, for GOOG, we don't.

  6. If we do, it's a ''maximum a posteriori estimator''.

  7. If we don't, it's a ''maximum likelihood estimator''. We effectively assume that the prior probability of X is uniform, even though that may not completely make sense.

  8. You toss a fair coin 3 times. X is the number of heads, from 0 to 3. Y is the position of the 1st head, from 0 to 3. If there are no heads, we'll say that the first head's position is 0.

    (X,Y) p(X,Y)
    (0,0) 1/8
    (1,1) 1/8
    (1,2) 1/8
    (1,3) 1/8
    (2,1) 2/8
    (2,2) 1/8
    (3,1) 1/8

    E.g., 1 head can occur 3 ways (out of 8): HTT, THT, TTH. The 1st (and only) head occurs in position 1 in just one of those ways (HTT), so p(1,1)=1/8.

  9. Conditional probabilities:

    p(x|y)           y=0   y=1   y=2      y=3
    x=0              1     0     0        0
    x=1              0     1/4   1/2      1
    x=2              0     1/2   1/2      0
    x=3              0     1/4   0        0

    $g_{MAP}(y)$     0     2     1 or 2   1
    $P_{error}(y)$   0     1/2   1/2      0
    p(y)             1/8   1/2   1/4      1/8

    The total probability of error is $0\cdot\tfrac18+\tfrac12\cdot\tfrac12+\tfrac12\cdot\tfrac14+0\cdot\tfrac18 = 3/8$. (A short code check follows this list.)

  10. We observe Y and want to guess X from Y. E.g., If we observe $$\small y= \begin{pmatrix}0\\1\\2\\3\end{pmatrix} \text{then } x= \begin{pmatrix}0\\ 2 \text{ most likely} \\ 1, 2 \text{ equally likely} \\ 1 \end{pmatrix}$$

  11. There are different formulae. The above one was the MAP, maximum a posteriori probability.

    $$g_{\text{MAP}} (y) = \max_x p_x(x|y) \text{ or } f_x(x|y)$$

    That means, the value of $x$ that maximizes $p_x(x|y)$

  12. What if we don't know p(x|y)? If we know p(y|x), we can use Bayes. We might measure p(y|x) experimentally, e.g., by sending many messages over the channel.

  13. Bayes requires p(x). What if we don't know even that? E.g. we don't know the probability of the different possible transmitted messages.

  14. Then use maximum likelihood estimator, ML. $$g_{\text{ML}} (y) = \max_x p_y(y|x) \text{ or } f_y(y|x)$$

  15. There are other estimators for different applications. E.g., regression using least squares might attempt to predict a graduate's QPA from his/her entering SAT scores. At Saratoga in August we might attempt to predict a horse's chance of winning a race from its speed in previous races. Some years ago, an Engineering Assoc Dean would do that each summer.

  16. Historically, IMO, some of the techniques, like least squares and logistic regression, have been used more because they're computationally easy than because they're logically justified.
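To make the MAP table above concrete, here is a short Python check that recomputes g_MAP(y), P[error|y], and the total error probability directly from the joint pmf (ties, as at y=2, are broken arbitrarily).

    # MAP estimate of X from Y for the 3-coin example, from the joint pmf above.
    from fractions import Fraction as F

    p = {(0, 0): F(1, 8), (1, 1): F(1, 8), (1, 2): F(1, 8), (1, 3): F(1, 8),
         (2, 1): F(2, 8), (2, 2): F(1, 8), (3, 1): F(1, 8)}

    total_err = F(0)
    for y in range(4):
        col = {x: pxy for (x, yy), pxy in p.items() if yy == y}   # p(x, y) for this y
        py = sum(col.values())                                    # p(y)
        x_map = max(col, key=col.get)                             # g_MAP(y); ties broken arbitrarily
        p_err = 1 - col[x_map] / py                               # P[error | y]
        total_err += p_err * py
        print(f"y={y}: g_MAP={x_map}, P[error|y]={p_err}, p(y)={py}")

    print("total P[error] =", total_err)                          # 3/8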

3.2   Central limit theorem etc

  1. Review: Almost no matter what distribution the random variable X is, $F_{M_n}$ quickly becomes Gaussian as n increases. n=5 already gives a good approximation.
  2. nice applets:
    1. http://onlinestatbook.com/stat_sim/normal_approx/index.html This tests how good is the normal approximation to the binomial distribution.
    2. http://onlinestatbook.com/stat_sim/sampling_dist/index.html This lets you define a distribution, and take repeated samples of a given size. It shows how the means of the samples are distributed. For samples with more than a few observations, the means look fairly normal.
    3. http://www.umd.umich.edu/casl/socsci/econ/StudyAids/JavaStat/CentralLimitTheorem.html This might also be interesting.
  3. Sample problems.
    1. Problem 7.1 on page 402.
    2. Problem 7.22.
    3. Problem 7.25.

3.3   Chapter 7, p 359, Sums of Random Variables

The long term goal of this section is to summarize information from a large group of random variables. E.g., the mean is one way. We will start with that, and go farther.

The next step is to infer the true mean of a large set of variables from a small sample.

3.4   Sums of random variables ctd

  1. Let Z=X+Y.
  2. $f_Z$ is convolution of $f_X$ and $f_Y$: $$f_Z(z) = (f_X * f_Y)(z)$$ $$f_Z(z) = \int f_X(x) f_Y(z-x) dx$$
  3. Characteristic functions are useful. $$\Phi_X(\omega) = E[e^{j\omega X} ]$$
  4. $\Phi_Z = \Phi_X \Phi_Y$.
  5. This extends to the sum of n random variables: if $Z=\sum_i X_i$ then $\Phi_Z (\omega) = \Pi_i \Phi_{X_i} (\omega)$
  6. E.g. Exponential with $\lambda=1$: $\Phi_1(\omega) = 1/(1-j\omega)$ (page 164).
  7. Sum of m exponentials has $\Phi(\omega)= 1/{(1-j\omega)}^m$. That's called an m-Erlang.
  8. Example 2: sum of n iid Bernoullis. Probability generating function is more useful for discrete random variables.
  9. Example 3: sum of n iid Gaussians. $$\Phi_{X_1} = e^{j\mu\omega - \frac{1}{2} \sigma^2 \omega^2}$$ $$\Phi_{Z} = e^{jn\mu\omega - \frac{1}{2}n \sigma^2 \omega^2}$$ I.e., mean and variance sum.
  10. As the number of summands increases, no matter what distribution the initial random variable has (provided that its moments are finite), $\Phi$ for the sum starts looking like a Gaussian's.
  11. The mean $M_n$ of n random variables is itself a random variable.
  12. As $n\rightarrow\infty$ $M_n \rightarrow \mu$.
  13. That's a law of large numbers (LLN).
  14. $E[ M_n ] = \mu$. It's an unbiased estimator.
  15. $VAR[ M_n ] = \sigma^2 / n$
  16. Weak law of large numbers $$\forall \epsilon >0 \lim_{n\rightarrow\infty} P[ |M_n-\mu| < \epsilon] = 1$$
  17. How fast does it happen? We can use Chebyshev, though that is very conservative.
  18. Strong law of large numbers $$P [ \lim _ {n\rightarrow\infty} M_n = \mu ] =1$$
  19. As $n\rightarrow\infty$, $F_{M_n}$ becomes Gaussian. That's the Central Limit Theorem (CLT).

3.5   Chapter 8, Statistics

  1. We have a population. (E.g., voters in next election, who will vote Democrat or Republican).

  2. We don't know the population mean. (E.g., fraction of voters who will vote Democrat).

  3. We take several samples (observations). From them we want to estimate the population mean and standard deviation. (Ask 1000 potential voters; 520 say they will vote Democrat. Sample mean is .52)

  4. We want error bounds on our estimates. (.52 plus or minus .04, 95 times out of 100)

  5. Another application: testing whether 2 populations have the same mean. (Is this batch of Guinness as good as the last one?)

  6. Observations cost money, so we want to do as few as possible.

  7. This gets beyond this course, but the biggest problems may be non-math ones. E.g., how do you pick a random likely voter? In the past, phone books were used. In a famous 1936 Presidential poll, that method biased the sample against poor people, who voted for Roosevelt.

  8. In probability, we know the parameters (e.g., mean and standard deviation) of a distribution and use them to compute the probability of some event.

    E.g., if we toss a fair coin 4 times what's the probability of exactly 4 heads? Answer: 1/16.

  9. In statistics we do not know all the parameters, though we usually know what type the distribution is, e.g., normal. (We often know the standard deviation.)

    1. We make observations about some members of the distribution, i.e., draw some samples.
    2. From them we estimate the unknown parameters.
    3. We often also compute a confidence interval on that estimate.
    4. E.g., we toss an unknown coin 100 times and see 60 heads. A good estimate for the probability of that coin coming up heads is 0.6.
  10. Some estimators are better than others, though that gets beyond this course.

    1. Suppose I want to estimate the average height of an RPI student by measuring the heights of N random students.
    2. The mean of the highest and lowest heights of my N students would converge to the population mean as N increased.
    3. However the median of my sample would converge faster. Technically, the variance of the sample median is smaller than the variance of the sample hi-lo mean.
    4. The mean of my whole sample would converge the fastest. Technically, the variance of the sample mean is smaller than the variance of any other estimator of the population mean. That's why we use it.
    5. However perhaps the population's distribution is not normal. Then one of the other estimators might be better. It would be more robust.
  11. (Enrichment) How to tell if the population is normal? We can do various plots of the observations and look. We can compute the probability that the observations would be this uneven if the population were normal.

  12. An estimator may be biased. E.g., we have a distribution that is U[0,b] for unknown b, and we take a sample of size n. The max of the sample is a biased estimator of b: its mean is $\frac{n}{n+1}b$, though it converges to b as n increases.

  13. Example 8.2, page 413: One-tailed probability. This is the probability that the mean of our sample is at least this far above the population mean. $$\alpha = P[\overline{X_n}-\mu > c] = Q\left( \frac{c}{\sigma_x / \sqrt{n} } \right)$$ Q is defined on page 169: $$Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{t^2}{2}} dt$$

  14. Application: You sample n=100 students' verbal SAT scores, and see $ \overline{X} = 550$. You know that $\sigma=100$. If $\mu = 525$, what is the probability that $\overline{X_n} > 550$ ?

    Answer: Q(2.5) = 0.006

  15. This means that if we take 1000 random samples of students, each with 100 students, and measure each sample's mean, then, on average, 6 of those 1000 samples will have a mean over 550.

  16. This is often worded as saying that the probability of the population's mean being under 525 is 0.006, which is different. The problem with saying that is that it presumes some prior probability distribution for the population mean.

  17. The formula also works for the other tail, computing the probability that our sample mean is at least so far below the population mean.

  18. The 2-tail probability is the probability that our sample mean is at least this far away from the population mean in either direction. It is twice the 1-tail probability.

  19. All this also works when you know the probability and want to know c, the cutoff.

3.6   Hypothesis testing

  1. Say we want to test whether the average height of an RPI student (called the population) is 2m.
  2. We assume that the distribution is Gaussian (normal) and that the standard deviation of heights is, say, 0.2m.
  3. However we don't know the mean.
  4. We do an experiment and measure the heights of n=100 random students. Their mean height is, say, 1.9m.
  5. The question on the table is, is the population mean 2m?
  6. This is different from the earlier question that we analyzed, which was this: What is the most likely population mean? (Answer: 1.9m.)
  7. Now we have a hypothesis (that the population mean is 2m) that we're testing.
  8. The standard way that this is handled is as follows.
  9. Define a null hypothesis, called H0, that the population mean is 2m.
  10. Define an alternate hypothesis, called HA, that the population mean is not 2m.
  11. Note that we observed our sample mean to be $0.5 \sigma$ below the population mean, if H0 is true.
  12. Each time we rerun the experiment (measure 100 students) we'll observe a different number.
  13. We compute the probability that, if H0 is true, our sample mean would be this far from 2m.
  14. Depending on what our underlying model of students is, we might use a 1-tail or a 2-tail probability.
  15. Perhaps we think that the population mean might be less than 2m but it's not going to be more. Then a 1-tail distribution makes sense.
  16. That is, our assumptions affect the results.
  17. The probability is Q(5), which is very small.
  18. Therefore we reject H0 and accept HA.
  19. We make a type-1 error if we reject H0 and it was really true. See http://en.wikipedia.org/wiki/Type_I_and_type_II_errors
  20. We make a type-2 error if we accept H0 and it was really false.
  21. These two errors trade off: by reducing the probability of one we increase the probability of the other, for a given sample size.
  22. E.g. in a criminal trial we prefer that a guilty person go free to having an innocent person convicted.
  23. Rejecting H0 says nothing about what the population mean really is, just that it's not likely 2m.
  24. (Enrichment) Random sampling is hard. The US government got it wrong here:
    http://politics.slashdot.org/story/11/05/13/2249256/Algorithm-Glitch-Voids-Outcome-of-US-Green-Card-Lottery

Engineering Probability Class 25 Thu 2018-04-19

1   Grades

  1. I'll try to upload a guaranteed minimum grade by the end of tomorrow. That will assume that all the grades that I don't yet have are zero.
  2. There will be eleven homeworks.

2   Handwritten notes and homework solutions

I added buttons to the page headers that go directly there.

3   Iclicker questions

  1. What is $$\int_{-\infty}^\infty e^{\big(-\frac{x^2}{2}\big)} dx$$?
    1. 1/2
    2. 1
    3. $2\pi$
    4. $\sqrt{2\pi}$
    5. $1/\sqrt{2\pi}$
  2. What is the largest possible value for a correlation coefficient?
    1. 1/2
    2. 1
    3. $2\pi$
    4. $\sqrt{2\pi}$
    5. $1/\sqrt{2\pi}$
  3. The most reasonable probability distribution for the number of defects on an integrated circuit caused by dust particles, cosmic rays, etc, is
    1. Exponential
    2. Poisson
    3. Normal
    4. Uniform
    5. Binomial
  4. The most reasonable probability distribution for the time until the next request hits your web server is:
    1. Exponential
    2. Poisson
    3. Normal
    4. Uniform
    5. Binomial
  5. If you add two independent normal random variables, each with variance 10, what is the variance of the sum?
    1. 1
    2. $\sqrt2$
    3. 10
    4. $10\sqrt2$
    5. 20

4   Material from text

4.1   6.1.2 Joint Distribution Functions, ctd.

  1. joint cumulative distribution function, p 305.
  2. marginal cdf’s
  3. joint probability mass function
  4. conditional pmf’s
  5. jointly continuous random variables
  6. joint probability density function.
  7. marginal pdf’s
  8. conditional pdf’s
  9. Example 6.7 Multiplicative Sequence, p 308.

4.2   6.1.3 Independence

  1. Example 6.8 Independence.

4.3   6.2 Functions of several random variables

4.3.1   6.2.1 One Function of Several Random Variables

  1. Example 6.9 Maximum and Minimum of n Random Variables

    Apply this to uniform r.v.

  2. Example 6.11 Reliability of Redundant Systems

    Reminder for exponential r.v.:

    1. $f(x) = \lambda e^{-\lambda x}$
    2. $F(x) = 1-e^{-\lambda x}$
    3. $\mu = 1/\lambda$

    I may extend this example to find the pdf and mean. (A quick simulation follows this list.)
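A quick simulation in the spirit of Examples 6.9 and 6.11 (my own sketch, not from the text): for n i.i.d. Exponential(λ) lifetimes, the minimum is Exponential(nλ) with mean 1/(nλ), and the maximum (a redundant system that works as long as any component works) has mean (1/λ)(1 + 1/2 + ... + 1/n).

    # Monte Carlo for the min and max of n i.i.d. Exponential(lam) lifetimes.
    import random

    random.seed(2)
    n, lam, trials = 3, 1.0, 200_000
    sum_min = sum_max = 0.0
    for _ in range(trials):
        t = [random.expovariate(lam) for _ in range(n)]
        sum_min += min(t)
        sum_max += max(t)

    print("E[min] ~", sum_min / trials, " theory:", 1 / (n * lam))        # 1/3
    print("E[max] ~", sum_max / trials,
          " theory:", sum(1 / (k * lam) for k in range(1, n + 1)))        # 11/6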

4.3.3   6.2.3 pdf of General Transformations

We skip Section 6.2.3. However, a historical note about Student's T distribution:

Student was a pseudonym of a mathematician working for Guinness in Ireland. He developed several statistical techniques to sample beer to assure its quality. Guinness didn't let him publish under his real name because these were trade secrets.

4.4   6.3 Expected values of vector random variables

  1. Section 6.3, page 316, extends the covariance to a matrix. Even with N variables, note that we're comparing only pairs of variables. If there were a complicated 3-variable dependency, which could happen (and did in a much earlier example), all the pairwise covariances could still be 0.
  2. Note the sequence (a small numeric example follows this list).
    1. First, the correlation matrix has the expectations of the products.
    2. Then the covariance matrix corrects for the means not being 0.
    3. Finally the correlation coefficients (not shown here) correct for the variances not being 1.
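A small numeric example of that sequence, using numpy on made-up data for two dependent variables (the data and names are mine):

    # Correlation matrix E[X X^T], covariance matrix, and correlation coefficients.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(1.0, 1.0, 10_000)             # nonzero mean, so R and K differ
    y = 2 * x + rng.normal(0.0, 1.0, 10_000)     # y depends on x
    X = np.vstack([x, y])                        # rows = variables, columns = observations

    R = X @ X.T / X.shape[1]                     # expectations of products
    K = np.cov(X, bias=True)                     # corrects for the means not being 0
    rho = np.corrcoef(X)                         # corrects for the variances not being 1
    print(R, K, rho, sep="\n\n")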

Engineering Probability Homework 10 due Mon 2018-04-23 2359 EST

How to submit

Submit to LMS; see details in syllabus.

You may use any computer SW, like Matlab or Mathematica.

Questions

All questions are from the text, starting on page 290.

Each part of a question is worth 5 points.

  1. (10 pts) Problem 5.94 on page 298.
  2. (5 pts) Problem 5.106 on page 298.
  3. (25 pts) Problem 5.111 on page 299.
  4. (10 pts) Problem 5.120 on page 300.

Total: 50 points.

Engineering Probability Class 24 Mon 2018-04-16

1   Material from text

  1. Example 5.47, page 282: Estimation of signal in noise

    1. This is our perennial example of signal and noise. However, here the signal is not just $\pm1$ but is normal. Our job is to find the most likely input signal for a given output.

    2. Important concept in the noisy channel example (with X and N both being Gaussian): The most likely value of X given Y is not Y but is somewhat smaller, depending on the relative sizes of \(\sigma_X\) and \(\sigma_N\). This is true in spite of \(\mu_N=0\). It would be really useful for you to understand this intuitively. Here's one way:

      If you don't know Y, then the most likely value of X is 0. Knowing Y gives you more information, which you combine with your initial info (that X is \(N(0,\sigma_X)\)) to get a new estimate for the most likely X. The smaller the noise, the more valuable Y is. If the noise is very small, then the most likely X is close to Y. If the noise is very large (on average), then the most likely X is still close to 0. (A small formula sketch follows.)
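To make this concrete: for X ~ N(0, σ_X²) and Y = X + N with independent N ~ N(0, σ_N²), the most likely X given Y = y (which equals E[X|Y=y]) is y·σ_X²/(σ_X² + σ_N²). A tiny sketch; the numbers are made up.

    # Shrinkage estimate of a Gaussian signal X from Y = X + N.
    def x_map(y, sigma_x, sigma_n):
        return y * sigma_x**2 / (sigma_x**2 + sigma_n**2)

    y = 1.0
    print(x_map(y, sigma_x=1.0, sigma_n=0.1))    # ~0.99: small noise, estimate close to y
    print(x_map(y, sigma_x=1.0, sigma_n=10.0))   # ~0.01: large noise, estimate close to 0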

2   Tutorial on probability density - 2 variables

In class 15, I tried to motivate the effect of changing one variable on probability density. Here's an attempt at motivating a change of 2 variables. (A numeric check follows the list below.)

  1. We're throwing darts uniformly at a one foot square dartboard.
  2. We observe 2 random variables, X, Y, where the dart hits (in Cartesian coordinates).
  3. $$f_{X,Y}(x,y) = \begin{cases} 1& \text{if}\,\, 0\le x\le1 \cap 0\le y\le1\\ 0&\text{otherwise} \end{cases}$$
  4. $$P[.5\le x\le .6 \cap .8\le y\le.9] = \int_{.5}^{.6}\int_{.8}^{.9} f_{XY}(x,y) dx \, dy = 0.01 $$
  5. Transform to centimeters (taking 1 ft = 30 cm for simplicity): $$\begin{bmatrix}V\\W\end{bmatrix} = \begin{pmatrix}30&0\\0&30\end{pmatrix} \begin{bmatrix}X\\Y\end{bmatrix}$$
  6. $$f_{V,W}(v,w) = \begin{cases} 1/900& \text{if } 0\le v\le30 \cap 0\le w\le30\\ 0&\text{otherwise} \end{cases}$$
  7. $$P[15\le v\le 18 \cap 24\le w\le27] = \int_{15}^{18}\int_{24}^{27} f_{VW}(v,w)\, dv\, dw = \frac{ (18-15)(27-24) }{900} = 0.01$$
  8. See Section 5.8.3 on page 286.
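A quick Monte Carlo check (my own, using the 1 ft = 30 cm convention above) that the probability of the same physical event is 0.01 whether the dart position is expressed in feet or in centimeters:

    # The same dart event in feet and in centimeters; the probabilities must agree.
    import random

    random.seed(3)
    N = 1_000_000
    hits_ft = hits_cm = 0
    for _ in range(N):
        x, y = random.random(), random.random()   # uniform on the 1 ft x 1 ft board
        v, w = 30 * x, 30 * y                     # the same point in centimeters
        hits_ft += (0.5 <= x <= 0.6) and (0.8 <= y <= 0.9)
        hits_cm += (15 <= v <= 18) and (24 <= w <= 27)

    print(hits_ft / N, hits_cm / N)               # both ~0.01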

3   Chapter 6: Vector random variables

  1. Skip the starred sections.
  2. Examples:
    1. arrivals in a multiport switch,
    2. audio signal at different times.
  3. pmf, cdf, marginal pmf and cdf are obvious.
  4. conditional pmf has a nice chaining rule.
  5. For continuous random variables, the pdf, cdf, conditional pdf etc are all obvious.
  6. Independence is obvious.
  7. Work out example 6.5, page 306. The input ports are a distraction. This problem reduces to a multinomial probability where N is itself a random variable.