

ECSE-2500, Engineering Probability, Spring 2010, Rensselaer Polytechnic Institute


Lecture 5

  1. 2.4.1, p52. Bayes' rule
    1. The {$B_j$} partition {$S$}. That means that
      1. If {$i\ne j$} then {$ B_i\cap B_j=\emptyset $} and
      2. {$ \bigcup_i B_i = S $}
    2. {$$ P[B_j|A] = \frac{P[B_j\cap A]}{P[A]} = \frac{P[A|B_j] P[B_j]}{\sum_k P[A|B_k] P[B_k]} $$}
    3. application:
      1. We have a priori probs {$ P[B_j] $}
      2. Event A occurs. Knowing that A has happened gives us info that changes the probs.
      3. Compute a posteriori probs {$ P[B_j|A] $}
  2. Example 2.29 comm channel: If receiver sees 1, which input was more probable? (You hope the answer is 1.)
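    Sketch, assuming (as I recall the book's setup) equally likely inputs and crossover probability {$ \varepsilon<\frac12 $}. Let {$B_i$} = "i was sent" and A = "1 was received": {$$ P[B_1|A] = \frac{P[A|B_1]P[B_1]}{P[A|B_0]P[B_0]+P[A|B_1]P[B_1]} = \frac{(1-\varepsilon)\frac12}{\varepsilon\frac12+(1-\varepsilon)\frac12} = 1-\varepsilon > \frac12 $$} So yes, 1 was the more probable input.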
  3. Example 2.30 chip quality control: For example 2.28, how long do we have to burn in chips so that the survivors have a 99% probability of being good? p=0.1, a=1/20000.
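    A rough numerical sketch in C++. The bad-chip failure rate b is my placeholder (Example 2.28 has the real setup); the point is the Bayes computation:
[@
#include <cmath>
#include <cstdio>

// P[good | survives to time t], by Bayes' rule, assuming exponential
// lifetimes: good chips fail at rate a, bad chips at an assumed rate b,
// and a fraction p of all chips are bad.
double probGoodGivenSurvival(double t, double p, double a, double b) {
    double good = (1.0 - p) * std::exp(-a * t);  // P[good and survives t]
    double bad  = p * std::exp(-b * t);          // P[bad and survives t]
    return good / (good + bad);
}

int main() {
    double p = 0.1, a = 1.0 / 20000.0;
    double b = 1000.0 * a;  // hypothetical bad-chip failure rate
    // Scan for the shortest burn-in time giving a 99% posterior of "good".
    for (double t = 0.0; t < 10000.0; t += 1.0)
        if (probGoodGivenSurvival(t, p, a, b) >= 0.99) {
            std::printf("burn in for about t = %g time units\n", t);
            return 0;
        }
    return 1;
}
@]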
  4. Example: False positives in a medical test
    1. T = test for disease was positive; T' = test was negative
    2. D = you have the disease; D' = you don't
    3. P[T|D] = .99, P[T'|D'] = .95, P[D] = 0.001
    4. P[D'|T] (false positive) = 0.98 !!!
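    Worked out with Bayes' rule, using P[T|D'] = 1 - P[T'|D'] = 0.05 and P[D'] = 0.999: {$$ P[D'|T] = \frac{P[T|D']P[D']}{P[T|D]P[D]+P[T|D']P[D']} = \frac{0.05\times0.999}{0.99\times0.001+0.05\times0.999} = \frac{0.04995}{0.05094} \approx 0.98 $$} The test is 99% accurate, yet a positive result is almost always a false alarm, because the disease is so rare.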
  5. Example: Pick a cookie, from the same page.
  6. 2.5 Independent events
    1. {$ P[A\cap B] = P[A] P[B] $}
    2. Equivalently, P[A|B] = P[A] and P[B|A] = P[B] (when P[A], P[B] > 0).
  7. Example 2.31 4 balls {1b,2b,3w,4w}
    1. event A: black
    2. B: even
    3. C: >2
    Are A,B independent? A,C?
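    Worked answers: {$ P[A]=P[B]=P[C]=\frac12 $}. {$ A\cap B=\{2\} $}, so {$ P[A\cap B]=\frac14=P[A]P[B] $}: A and B are independent. {$ A\cap C=\emptyset $}, so {$ P[A\cap C]=0\ne P[A]P[C] $}: A and C are dependent.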
  8. A,B independent means that knowing A doesn't help you with B.
  9. Mutually exclusive events, each with probability > 0, must be dependent: if one occurs, the other can't.
  10. Example 2.32 Points in square
    1. A: x>1/2. B: y>1/2. C: x>y
  11. When are 3 events independent?
    1. Each pair is independent.
    2. Also need {$ P[A\cap B\cap C] = P[A] P[B] P[C] $}
  12. Example 2.33: the last condition above really is needed; pairwise independence alone doesn't imply it.
  13. N events are independent iff the occurrence of any combination of the events gives no information about any other event.
  14. Common application: independence of experiments in a sequence.
  15. Example 2.34: coin tosses are assumed to be independent of each other.
    P[HHT] = P[1st coin is H] P[2nd is H] P[3rd is T].
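    E.g., for a fair coin each factor is {$ \frac12 $}, so {$ P[HHT]=\frac18 $}; for a biased coin, {$ P[HHT]=p^2(1-p) $}.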
  16. Example 2.35 System reliability
    1. Controller and 3 peripherals.
    2. System is up iff controller and at least 2 peripherals are up.
    3. Add a 2nd controller.
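    A minimal C++ sketch of the single-controller case; the up-probabilities are made-up placeholders, and all units are assumed to fail independently:
[@
#include <cstdio>

int main() {
    // Assumed (placeholder) probabilities that each unit is up.
    double pc = 0.99;   // controller
    double pp = 0.90;   // each of the 3 peripherals

    // P[at least 2 of 3 peripherals up]: enumerate all 8 up/down outcomes.
    double atLeast2 = 0.0;
    for (int mask = 0; mask < 8; ++mask) {
        int up = 0;
        double prob = 1.0;
        for (int i = 0; i < 3; ++i)
            if (mask & (1 << i)) { ++up; prob *= pp; }
            else                 { prob *= 1.0 - pp; }
        if (up >= 2) atLeast2 += prob;
    }

    // System is up iff the controller and at least 2 peripherals are up.
    std::printf("P[system up] = %g\n", pc * atLeast2);
    return 0;
}
@]
    With the 2nd controller (system up if at least one controller is up), replace pc by 1-(1-pc)*(1-pc).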
  17. 2.6 p59 Sequential experiments: maybe independent
  18. 2.6.1 Sequences of independent experiments
    1. Example 2.36
  19. 2.6.2 Binomial probability
    1. Bernoulli trial: flip a possibly unfair coin once. p is the probability of heads.
    2. (Bernoulli did stats, econ, physics, ... in the 18th century.)
  20. Example 2.37
    1. P[TTH] = {$ (1-p)^2 p $}
    2. P[1 head] = {$ 3 (1-p)^2 p $}
  21. Prob of exactly k successes = {$$ p_n(k) = {n \choose k} p^k (1-p)^{n-k} $$}
  22. {$ \sum_{k=0}^n p_n(k) = 1 $}, by the binomial theorem: {$ \sum_{k=0}^n {n \choose k} p^k (1-p)^{n-k} = (p+(1-p))^n = 1 $}.
  23. Example 2.38
  24. Can avoid computing n! by computing {$ p_n(k) $} recursively (see the sketch below), or by using an approximation. Also, in C++, using double instead of float helps. (Almost always you should use double instead of float. It's the same speed.)
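    The recursive trick: since {$ \frac{p_n(k+1)}{p_n(k)} = \frac{n-k}{k+1}\cdot\frac{p}{1-p} $}, no factorials are needed. A C++ sketch:
[@
#include <cstdio>

int main() {
    // Binomial probabilities p_n(k) without factorials:
    //   p_n(0)   = (1-p)^n
    //   p_n(k+1) = p_n(k) * (n-k)/(k+1) * p/(1-p)
    int n = 20;
    double p = 0.1;

    double pk = 1.0;
    for (int i = 0; i < n; ++i) pk *= 1.0 - p;   // p_n(0)

    for (int k = 0; k <= n; ++k) {
        std::printf("p_%d(%d) = %g\n", n, k, pk);
        if (k < n)
            pk *= (double)(n - k) / (k + 1) * p / (1.0 - p);
    }
    return 0;
}
@]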
  25. Example 2.39
  26. Example 2.40 Error correction coding
  27. Multinomial probability law
    1. There are M different possible outcomes from an experiment, e.g., faces of a die showing.
    2. Prob of particular outcome: {$p_i$}
    3. Now run the experiment n times.
    4. Prob that the i-th outcome occurred {$k_i$} times (where {$ \sum_{i=1}^M k_i = n $}): {$$ P[(k_1,k_2,...,k_M)] = \frac{n!}{k_1! k_2! ... k_M!} p_1^{k_1} p_2^{k_2}...p_M^{k_M} $$}
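    Quick sanity check (my example, not the book's): the probability that 6 rolls of a fair die show each face exactly once is {$$ \frac{6!}{1!\,1!\,1!\,1!\,1!\,1!}\left(\frac16\right)^6 = \frac{720}{46656} \approx 0.015 $$}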
  28. Example 2.41 dartboard.
  29. Example 2.42 random phone numbers.
  30. 2.6.4 p63 Geometric probability law
    1. Repeat Bernoulli experiment until 1st success.
    2. Define outcome to be # trials until that happens.
    3. Define q=(1-p).
    4. {$ p(m) = (1-p)^{m-1}p = q^{m-1}p $} (note that p is used 2 ways here: the success probability p and the pmf p(m)).
    5. {$ \sum_{m=1}^\infty p(m) =1$}
    6. Prob that more than K trials are required = {$q^K$}.
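    Two ways to see this: more than K trials are needed exactly when the first K trials all fail, which has probability {$ q^K $}; or sum the series {$$ \sum_{m=K+1}^\infty q^{m-1}p = pq^K(1+q+q^2+\cdots) = \frac{pq^K}{1-q} = q^K $$}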
  31. Example: probability that more than 10 tosses of a die are required to get a 6 = {$ \left(\frac{5}{6}\right)^{10} \approx 0.16 $}
  32. Example 2.43: error control by retransmission
  33. 2.6.5 p64 Sequences (chains) of dependent experiments
  34. Example 2.44
  35. 2.7 Computer generation of random numbers
    1. Skip this section, except for following points.
    2. Executive summary: it's surprisingly hard to generate good random numbers. Commercial SW has been known to get this wrong. By now, they've gotten it right (I hope), so just call a subroutine.
    3. Arizona lottery got it wrong in 1998.
    4. Even random electronic noise is hard to use properly. RAND generated its best-selling 1955 book A Million Random Digits with 100,000 Normal Deviates this way and had trouble: asymmetries crept into their circuits, perhaps because of component drift. For a laugh, read the reviews.
    5. Pseudo-random number generator: The subroutine returns numbers according to some algorithm (e.g., it doesn't use cosmic rays), but for your purposes, they're random.
    6. Computer random number routines usually return the same sequence of numbers each time you run your program, so you can reproduce your results.
    7. You can override this by seeding the generator with a genuine random number from the Linux device /dev/random, as sketched below.
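    A minimal C++ sketch of that:
[@
#include <cstdio>
#include <cstdlib>

int main() {
    // Seed the pseudo-random generator with a genuinely random unsigned
    // int read from the Linux /dev/random device.
    unsigned int seed = 12345;           // fallback if the read fails
    std::FILE *f = std::fopen("/dev/random", "rb");
    if (f) {
        if (std::fread(&seed, sizeof seed, 1, f) != 1)
            std::fprintf(stderr, "read of /dev/random failed\n");
        std::fclose(f);
    }
    std::srand(seed);

    // rand() now produces a different sequence on each run.
    for (int i = 0; i < 5; ++i)
        std::printf("%d\n", std::rand());
    return 0;
}
@]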
  36. 2.8 and 2.9 p70 Fine points.
    Skip. If I talk fast enough, we can always do this at the end of the semester, but it's not likely.