ECSE-2500, Engineering Probability, Spring 2010, Rensselaer Polytechnic Institute

*These pages look better in Firefox than in Explorer.*

# Lecture 5

- 2.4.1, p52. Bayes' rule
- {$ B_j $} partition S. That means that
- if {$ i\ne j $} then {$ B_i\cap B_j=\emptyset $}, and
- {$ \bigcup_i B_i = S $}

- {$$ P[B_j|A] = \frac{P[B_j\cap A]}{P[A]} = \frac{P[A|B_j] P[B_j]}{\sum_k P[A|B_k] P[B_k]} $$}
- application:
- We have a priori probs {$ P[B_j] $}
- Event A occurs. Knowing that A has happened gives us info that changes the probs.
- Compute a posteriori probs {$ P[B_j|A] $}
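The prior-to-posterior update can be sketched numerically. The priors and likelihoods below are made-up values for illustration, not from the text:

```python
# Bayes' rule: posteriors P[B_j|A] from priors P[B_j] and likelihoods P[A|B_j].
# The numbers here are illustrative only.
priors = [0.5, 0.3, 0.2]        # P[B_j] for a partition B_1, B_2, B_3
likelihoods = [0.9, 0.5, 0.1]   # P[A|B_j]

# Total probability: P[A] = sum_k P[A|B_k] P[B_k]
p_a = sum(l * p for l, p in zip(likelihoods, priors))

# A posteriori: P[B_j|A] = P[A|B_j] P[B_j] / P[A]
posteriors = [l * p / p_a for l, p in zip(likelihoods, priors)]
print(posteriors)   # they sum to 1
```

Note how the event A shifts mass toward the {$B_j$} with the largest likelihoods.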

- Example 2.29 comm channel: If receiver sees 1, which input was more probable? (You hope the answer is 1.)
- Example 2.30 chip quality control: For example 2.28, how long do we have to burn in chips so that the survivors have a 99% probability of being good? p=0.1, a=1/20000.
- Example: false positives in a medical test
- T = test for disease was positive; T' = .. negative
- D = you have disease; D' = .. don't ..
- P[T|D] = .99, P[T'|D'] = .95, P[D] = 0.001
- P[D'|T] (false positive) = 0.98 !!!
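That startling number follows directly from Bayes' rule with the values above:

```python
# False positives: P[D'|T] via Bayes' rule, using the numbers from the notes.
p_t_given_d = 0.99     # P[T|D]
p_tn_given_dn = 0.95   # P[T'|D']
p_d = 0.001            # P[D]

# Total probability: P[T] = P[T|D] P[D] + P[T|D'] P[D']
p_t = p_t_given_d * p_d + (1 - p_tn_given_dn) * (1 - p_d)
p_d_given_t = p_t_given_d * p_d / p_t
p_dn_given_t = 1 - p_d_given_t
print(round(p_dn_given_t, 2))   # 0.98
```

The disease is so rare that the 5% of healthy people who test positive swamp the true positives.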

- Example: Pick a cookie, from same page.
- 2.5 Independent events
- {$ P[A\cap B] = P[A] P[B] $}
- P[A|B] = P[A], P[B|A] = P[B]

- Example 2.31 4 balls {1b,2b,3w,4w}
- event A: black
- B: even
- C: >2
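Example 2.31 can be checked by direct enumeration; a small sketch using exact fractions:

```python
from fractions import Fraction

# Example 2.31: draw one of 4 equally likely balls {1b, 2b, 3w, 4w}.
S = {1, 2, 3, 4}
A = {1, 2}     # black
B = {2, 4}     # even
C = {3, 4}     # > 2

def P(E):
    return Fraction(len(E), len(S))

def independent(E, F):
    return P(E & F) == P(E) * P(F)

print(independent(A, B))   # True:  P[A∩B] = 1/4 = P[A] P[B]
print(independent(A, C))   # False: A and C are disjoint, so P[A∩C] = 0
print(independent(B, C))   # True
```

A and C illustrate the point below: mutually exclusive events with positive probability are always dependent.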

- A,B independent means that knowing A doesn't help you with B.
- Mutually exclusive events w.p.>0 must be dependent.
- Example 2.32 Points in square
- A: x>1/2. B: y>1/2. C: x>y
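A quick Monte Carlo sketch of Example 2.32 (the exact answers come from areas in the unit square; the simulation just illustrates them):

```python
import random

# Example 2.32 sketch: pick a point uniformly in the unit square and
# estimate event probabilities by simulation.
random.seed(0)
N = 100_000
n_ab = n_ac = 0
for _ in range(N):
    x, y = random.random(), random.random()
    A, B, C = x > 0.5, y > 0.5, x > y   # the three events
    n_ab += A and B
    n_ac += A and C

print(n_ab / N)   # ≈ 1/4 = P[A] P[B]: A and B look independent
print(n_ac / N)   # ≈ 3/8, not 1/4 = P[A] P[C]: A and C are dependent
```

Exactly: {$ P[A\cap C] = \int_{1/2}^1 x\,dx = 3/8 \ne 1/4 $}, so knowing x > 1/2 makes x > y more likely.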

- When are 3 events independent?
- Each pair is independent.
- Also need {$ P[A\cap B\cap C] = P[A] P[B] P[C] $}

- Example 2.33. Last condition above is required.
- N events are independent iff the occurrence of any combination of them does not affect the probability of any other of the events.
- Common application: independence of experiments in a sequence.
- Example 2.34: coin tosses are assumed to be independent of each other. P[HHT] = P[1st coin is H] P[2nd is H] P[3rd is T].
- Example 2.35 System reliability
- Controller and 3 peripherals.
- System is up iff controller and at least 2 peripherals are up.
- Add a 2nd controller.
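A sketch of the Example 2.35 computation; the component reliabilities below are assumed values for illustration, not from the text:

```python
# Example 2.35 sketch: system = controller + 3 peripherals, all independent.
# These component reliabilities are made-up numbers.
p_c = 0.99   # P[controller up]    (assumed)
p_p = 0.9    # P[a peripheral up]  (assumed)

# P[at least 2 of 3 peripherals up] = C(3,2) p^2 (1-p) + p^3
p_2of3 = 3 * p_p**2 * (1 - p_p) + p_p**3
p_system = p_c * p_2of3
print(round(p_system, 4))    # 0.9623

# With a 2nd, redundant controller, only one of the two needs to be up:
p_system2 = (1 - (1 - p_c)**2) * p_2of3
print(round(p_system2, 4))   # 0.9719
```

Independence of the components is what lets the probabilities simply multiply.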

- 2.6 p59 Sequential experiments: *maybe* independent
- 2.6.1 Sequences of independent experiments
- Example 2.36

- 2.6.2 Binomial probability
- *Bernoulli trial*: flip a possibly unfair coin once. *p* is the probability of a head.
- (Bernoulli did stats, econ, physics, ... in the 18th century.)

- Example 2.37
- P[TTH] = {$ (1-p)^2 p $}
- P[1 head] = {$ 3 (1-p)^2 p $}

- Prob of exactly k successes = {$$ p_n(k) = {n \choose k} p^k (1-p)^{n-k} $$}
- {$ \sum_{k=0}^n p_n(k) = 1 $}
- Example 2.38
- Can avoid computing n! by computing {$ p_n(k) $} recursively, or by using an approximation. Also, in C++, using double instead of float helps. (Almost always you should use double instead of float; it's the same speed.)
- Example 2.39
- Example 2.40 Error correction coding
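The recursion mentioned above can be sketched like this; updating by a ratio avoids factorials entirely:

```python
# Binomial pmf computed recursively, with no n! (which overflows for large n):
# p_n(k+1) / p_n(k) = ((n - k) / (k + 1)) * (p / (1 - p))
def binomial_pmf(n, p):
    pk = (1 - p) ** n          # p_n(0)
    pmf = [pk]
    for k in range(n):
        pk *= (n - k) / (k + 1) * p / (1 - p)
        pmf.append(pk)
    return pmf

pmf = binomial_pmf(3, 0.4)
print(pmf)        # P[k heads in 3 tosses of a p=0.4 coin], k = 0..3
print(sum(pmf))   # ≈ 1
```

For example, pmf[1] reproduces Example 2.37's {$ 3(1-p)^2 p $}.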
- Multinomial probability law
- There are M different possible outcomes from an experiment, e.g., faces of a die showing.
- Prob of particular outcome: {$p_i$}
- Now run the experiment n times.
- Prob that i-th outcome occurred {$k_i$} times, {$ \sum_{i=1}^M k_i = n $} {$$ P[(k_1,k_2,...,k_M)] = \frac{n!}{k_1! k_2! ... k_M!} p_1^{k_1} p_2^{k_2}...p_M^{k_M} $$}
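The multinomial formula above, as a short sketch:

```python
from math import factorial

# Multinomial probability law: P[(k_1,...,k_M)] with sum k_i = n.
def multinomial_prob(counts, probs):
    n = sum(counts)
    coef = factorial(n)
    for k in counts:
        coef //= factorial(k)      # n! / (k_1! k_2! ... k_M!), always an integer
    prob = 1.0
    for k, p in zip(counts, probs):
        prob *= p ** k
    return coef * prob

# A fair die rolled 6 times; probability each face appears exactly once:
print(multinomial_prob([1] * 6, [1 / 6] * 6))   # 6!/6^6 ≈ 0.0154
```

With M = 2 this reduces to the binomial law.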

- Example 2.41 dartboard.
- Example 2.42 random phone numbers.
- 2.6.4 p63 Geometric probability law
- Repeat Bernoulli experiment until 1st success.
- Define outcome to be # trials until that happens.
- Define q=(1-p).
- {$ p(m) = (1-p)^{m-1}p = q^{m-1}p $} (*p* has 2 different uses here).
- {$ \sum_{m=1}^\infty p(m) = 1 $}
- Prob that more than K trials are required = {$q^K$}.
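A sketch checking the tail formula {$q^K$} against the summed pmf:

```python
# Geometric law: first success on trial m has probability p(m) = q^(m-1) p.
def geom_pmf(m, p):
    return (1 - p) ** (m - 1) * p

p = 1 / 6    # e.g., tossing a fair die until a 6 appears
q = 1 - p
K = 10

# P[more than K trials needed], two ways: 1 minus the pmf summed to K,
# and the closed form q^K. They agree.
tail = 1 - sum(geom_pmf(m, p) for m in range(1, K + 1))
print(round(q ** K, 2))            # 0.16, matching the die example
print(abs(tail - q ** K) < 1e-12)  # True
```

The closed form follows because requiring more than K trials just means the first K were all failures.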

- Example: the probability that more than 10 tosses of a die are required to get a 6 is {$ \left(\frac{5}{6}\right)^{10} \approx 0.16 $}
- Example 2.43: error control by retransmission
- 2.6.5 p64 Sequences, chains, of *dependent* experiments
- Example 2.44
- 2.7 Computer generation of random numbers
- Skip this section, except for following points.
- Executive summary: it's surprisingly hard to generate good random numbers. Commercial SW has been known to get this wrong. By now, they've gotten it right (I hope), so just call a subroutine.
- Arizona lottery got it wrong in 1998.
- Even random electronic noise is hard to use properly. The authors of the best-selling 1955 book *A Million Random Digits with 100,000 Normal Deviates* had trouble generating random numbers this way: asymmetries crept into their circuits, perhaps because of component drift. For a laugh, read the reviews.
- Pseudo-random number generator: The subroutine returns numbers according to some algorithm (e.g., it doesn't use cosmic rays), but for your purposes, they're random.
- Computer random number routines usually return the same sequence of numbers each time you run your program, so you can reproduce your results.
- You can override this by *seeding* the generator with a genuinely random number, e.g., from Linux /dev/random.

- 2.8 and 2.9 p70 Fine points. Skip. If I talk fast enough, we can always do this at the end of the semester, but it's not likely.