Engineering Probability Class 20 Thu 2019-03-28

1   Exam 2 stats

  1. Number of students registered: 107
  2. Number of exam2 submissions: 104
  3. Highest score: 80
  4. Lowest score: 10
  5. Average score: 59.9
  6. Median score: 64

2   Normal distribution table

For your convenience, I computed the following table with Matlab:

x          f(x)      F(x)      Q(x)
-3.0000    0.0044    0.0013    0.9987
-2.9000    0.0060    0.0019    0.9981
-2.8000    0.0079    0.0026    0.9974
-2.7000    0.0104    0.0035    0.9965
-2.6000    0.0136    0.0047    0.9953
-2.5000    0.0175    0.0062    0.9938
-2.4000    0.0224    0.0082    0.9918
-2.3000    0.0283    0.0107    0.9893
-2.2000    0.0355    0.0139    0.9861
-2.1000    0.0440    0.0179    0.9821
-2.0000    0.0540    0.0228    0.9772
-1.9000    0.0656    0.0287    0.9713
-1.8000    0.0790    0.0359    0.9641
-1.7000    0.0940    0.0446    0.9554
-1.6000    0.1109    0.0548    0.9452
-1.5000    0.1295    0.0668    0.9332
-1.4000    0.1497    0.0808    0.9192
-1.3000    0.1714    0.0968    0.9032
-1.2000    0.1942    0.1151    0.8849
-1.1000    0.2179    0.1357    0.8643
-1.0000    0.2420    0.1587    0.8413
-0.9000    0.2661    0.1841    0.8159
-0.8000    0.2897    0.2119    0.7881
-0.7000    0.3123    0.2420    0.7580
-0.6000    0.3332    0.2743    0.7257
-0.5000    0.3521    0.3085    0.6915
-0.4000    0.3683    0.3446    0.6554
-0.3000    0.3814    0.3821    0.6179
-0.2000    0.3910    0.4207    0.5793
-0.1000    0.3970    0.4602    0.5398
      0    0.3989    0.5000    0.5000
 0.1000    0.3970    0.5398    0.4602
 0.2000    0.3910    0.5793    0.4207
 0.3000    0.3814    0.6179    0.3821
 0.4000    0.3683    0.6554    0.3446
 0.5000    0.3521    0.6915    0.3085
 0.6000    0.3332    0.7257    0.2743
 0.7000    0.3123    0.7580    0.2420
 0.8000    0.2897    0.7881    0.2119
 0.9000    0.2661    0.8159    0.1841
 1.0000    0.2420    0.8413    0.1587
 1.1000    0.2179    0.8643    0.1357
 1.2000    0.1942    0.8849    0.1151
 1.3000    0.1714    0.9032    0.0968
 1.4000    0.1497    0.9192    0.0808
 1.5000    0.1295    0.9332    0.0668
 1.6000    0.1109    0.9452    0.0548
 1.7000    0.0940    0.9554    0.0446
 1.8000    0.0790    0.9641    0.0359
 1.9000    0.0656    0.9713    0.0287
 2.0000    0.0540    0.9772    0.0228
 2.1000    0.0440    0.9821    0.0179
 2.2000    0.0355    0.9861    0.0139
 2.3000    0.0283    0.9893    0.0107
 2.4000    0.0224    0.9918    0.0082
 2.5000    0.0175    0.9938    0.0062
 2.6000    0.0136    0.9953    0.0047
 2.7000    0.0104    0.9965    0.0035
 2.8000    0.0079    0.9974    0.0026
 2.9000    0.0060    0.9981    0.0019
 3.0000    0.0044    0.9987    0.0013

Here f(x) is the standard normal pdf, F(x) is the cdf, and Q(x) = 1 - F(x) is the complementary cdf. The variable x is often called z.

More info: https://en.wikipedia.org/wiki/Standard_normal_table
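If you want to regenerate the table yourself, here is a minimal sketch in Python with SciPy (the original was computed in Matlab; scipy.stats.norm provides the pdf, cdf, and the survival function, which is Q):

    import numpy as np
    from scipy.stats import norm

    # Columns: x, pdf f(x), cdf F(x), and Q(x) = 1 - F(x)
    for x in np.arange(-3.0, 3.05, 0.1):
        print(f"{x:8.4f}  {norm.pdf(x):8.4f}  {norm.cdf(x):8.4f}  {norm.sf(x):8.4f}")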

3   The large effect of a small bias

This is enrichment material. It is not in the text, and will not be on the exam. However, it might be in a future homework.

Consider tossing $n=10^6$ fair coins.

  1. P[more heads than tails] ≈ 0.5. (By symmetry it is exactly $(1-P[\text{tie}])/2$, and with $n=10^6$ the tie probability is only about 0.0008.)

  2. Now assume that each coin has probability $p=0.5005$ of coming up heads.

    What's P[more heads than tails]?

    1. Let X be the r.v. for the number of heads.
    2. Approximate X with a Gaussian: $\mu = np = 500500$, $\sigma = \sqrt{np(1-p)} \approx 500$.
    3. P[X>500000] = Q((500000-500500)/500) = Q(-1) ≈ 0.84.
    4. I.e., increasing the probability of winning one toss by 1 part in 1000 increases the probability of winning 1,000,000 tosses from 50% to 84%.
  3. Now assume that 999,000 of the coins are fair, but 1,000 will always be heads.

    What's P[more heads than tails]?

    1. Let X = number of heads in the 999,000 fair tosses.
    2. Heads beat tails iff X + 1000 > 999000 - X, so we want P[X > 499,000].
    3. Approximate X with a Gaussian: $\mu = 499500$, $\sigma = \sqrt{999000 \cdot 0.25} \approx 500$.
    4. P[X > 499,000] = Q((499000-499500)/500) = Q(-1) ≈ 0.84, as before.
    5. I.e., fixing 0.1% of the coins increased the probability of winning 1,000,000 tosses from 50% to 84%. (Both cases are checked numerically in the sketch below.)

The lesson for fixing elections: you decide.
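Here is a minimal sketch in Python/SciPy that checks both computations; the exact binomial tail confirms the Gaussian approximation:

    from scipy.stats import binom, norm

    n = 10**6

    # Case 2: every coin lands heads with p = 0.5005.
    # "More heads than tails" means X > 500,000.
    p = 0.5005
    mu, sigma = n*p, (n*p*(1 - p))**0.5      # 500500, about 500
    print(norm.sf(500_000, mu, sigma))       # Gaussian approx: ~0.841
    print(binom.sf(500_000, n, p))           # exact tail P[X > 500000]: ~0.841

    # Case 3: 999,000 fair coins plus 1,000 coins that are always heads.
    # Heads beat tails iff X + 1000 > 999000 - X, i.e., X > 499,000.
    print(binom.sf(499_000, 999_000, 0.5))   # exact: also ~0.84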

4   Min, max of 2 r.v.

  1. Example 5.43, page 274.
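For reference, the identities behind that example, assuming X and Y are independent with cdfs $F_X$ and $F_Y$:

    $F_{\max}(z) = P[X \le z,\, Y \le z] = F_X(z)\, F_Y(z)$

    $F_{\min}(z) = 1 - P[X > z,\, Y > z] = 1 - (1 - F_X(z))(1 - F_Y(z))$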

5   Chapter 6: Vector random variables, page 303-

  1. Skip the starred sections.
  2. Examples:
    1. arrivals in a multiport switch,
    2. audio signal at different times.
  3. The pmf, cdf, and marginal pmf and cdf extend the two-variable definitions in the obvious way.
  4. The conditional pmf has a nice chaining rule (written out after this list).
  5. For continuous random variables, the pdf, cdf, conditional pdf, etc., are the analogous extensions.
  6. Independence is obvious.
  7. Work out example 6.5, page 306. The input ports are a distraction: the problem reduces to a multinomial probability where the number of arrivals N is itself a random variable (see the simulation sketch below).
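The chaining rule from item 4, written out for the conditional pmf:

    $p(x_1,\dots,x_n) = p(x_n \mid x_1,\dots,x_{n-1})\, p(x_{n-1} \mid x_1,\dots,x_{n-2}) \cdots p(x_2 \mid x_1)\, p(x_1)$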
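And a minimal simulation sketch of the structure in example 6.5, in Python/NumPy. The distribution of N and the number of ports K below are placeholders, not the text's values; the point is the two-stage structure (a random N, then a multinomial split):

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder setup: N arrivals with N ~ Binomial(m, p); each arrival
    # independently picks one of K output ports with equal probability.
    m, p, K = 8, 0.5, 3
    trials = 100_000

    counts = np.empty((trials, K), dtype=int)
    N = rng.binomial(m, p, size=trials)          # the random number of arrivals
    for i in range(trials):
        # given N = n, the port counts are Multinomial(n, (1/K, ..., 1/K))
        counts[i] = rng.multinomial(N[i], [1/K] * K)

    # e.g., estimate P[every port receives at least one arrival]
    print((counts.min(axis=1) >= 1).mean())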