# Engineering Probability Class 25 Thu 2018-04-19

1. I'll try to upload a guaranteed minimum grade by the end of tomorrow. It will assume that any grades I don't yet have are zero.
2. There will be eleven homeworks.

## 3   Iclicker questions

1. What is $$\int_{-\infty}^\infty e^{-x^2/2} \, dx$$?
1. 1/2
2. 1
3. $2\pi$
4. $\sqrt{2\pi}$
5. $1/\sqrt{2\pi}$
2. What is the largest possible value for a correlation coefficient?
1. 1/2
2. 1
3. $2\pi$
4. $\sqrt{2\pi}$
5. $1/\sqrt{2\pi}$
3. The most reasonable probability distribution for the number of defects on an integrated circuit caused by dust particles, cosmic rays, etc, is
1. Exponential
2. Poisson
3. Normal
4. Uniform
5. Binomial
4. The most reasonable probability distribution for the time until the next request hits your web server is:
1. Exponential
2. Poisson
3. Normal
4. Uniform
5. Binomial
5. If you add two independent normal random variables, each with variance 10, what is the variance of the sum?
1. 1
2. $\sqrt2$
3. 10
4. $10\sqrt2$
5. 20
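For the last question, the rule is that variances of independent random variables add. A quick numerical sketch (my own check, not from the text), with each normal having variance 10:

```python
import random
import statistics

# Variances of INDEPENDENT r.v.'s add: Var(X + Y) = Var(X) + Var(Y).
# Here each has variance 10, so the sum should have variance about 20.
random.seed(0)
n = 200_000
sigma = 10 ** 0.5  # standard deviation for variance 10
samples = [random.gauss(0, sigma) + random.gauss(0, sigma) for _ in range(n)]
print(statistics.variance(samples))  # close to 20
```

Note that the standard deviations do not add; the answer $10\sqrt2$ would come from that mistake.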

## 4   Material from text

### 4.1   6.1.2 Joint Distribution Functions, ctd.

1. joint cumulative distribution function, p 305.
2. marginal cdf’s
3. joint probability mass function
4. conditional pmf’s
5. jointly continuous random variables
6. joint probability density function.
7. marginal pdf’s
8. conditional pdf’s
9. Example 6.7 Multiplicative Sequence, p 308.

### 4.2   6.1.3 Independence

1. Example 6.8 Independence.

### 4.3   6.2 Functions of several random variables

#### 4.3.1   6.2.1 One Function of Several Random Variables

1. Example 6.9 Maximum and Minimum of n Random Variables

Apply this to uniform random variables.
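For $n$ i.i.d. Uniform(0,1) variables, $P(\max \le x) = x^n$, so $E[\max] = n/(n+1)$ and, by symmetry, $E[\min] = 1/(n+1)$. A simulation sketch (my own illustration) for $n=5$:

```python
import random

# Max and min of n i.i.d. Uniform(0,1) r.v.'s.
# CDF of the max is x**n, so E[max] = n/(n+1); E[min] = 1/(n+1).
random.seed(1)
n, trials = 5, 100_000
maxima, minima = [], []
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    maxima.append(max(xs))
    minima.append(min(xs))
print(sum(maxima) / trials)  # close to 5/6
print(sum(minima) / trials)  # close to 1/6
```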

2. Example 6.11 Reliability of Redundant Systems

Reminder for exponential r.v.:

1. $f(x) = \lambda e^{-\lambda x}$
2. $F(x) = 1-e^{-\lambda x}$
3. $\mu = 1/\lambda$

I may extend this example to find pdf and mean.
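As a sketch of the redundant-system idea (my own example, assuming two redundant components with i.i.d. Exponential($\lambda$) lifetimes): the system fails only when the *last* component fails, so its lifetime is $\max(T_1, T_2)$, with mean $\frac{1}{2\lambda} + \frac{1}{\lambda} = \frac{3}{2\lambda}$ (first failure arrives at rate $2\lambda$; by memorylessness the survivor then lasts another $1/\lambda$ on average).

```python
import random

# Two redundant components, i.i.d. Exponential(lam) lifetimes.
# System lifetime = max(T1, T2); E[max] = 1/(2*lam) + 1/lam = 3/(2*lam).
random.seed(2)
lam = 0.5
trials = 200_000
total = sum(max(random.expovariate(lam), random.expovariate(lam))
            for _ in range(trials))
print(total / trials)  # close to 3/(2*0.5) = 3.0
```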

#### 4.3.3   6.2.3 pdf of General Transformations

We skip Section 6.2.3. However, a historical note about Student's t distribution:

Student was the pseudonym of William Sealy Gosset, a statistician working for Guinness in Ireland. He developed several statistical techniques for sampling beer to assure its quality. Guinness didn't let him publish under his real name because these techniques were trade secrets.

### 4.4   6.3 Expected values of vector random variables

1. Section 6.3, page 316, extends the covariance to a matrix. Note that even with N variables, we're comparing only pairs of variables. There can be a complicated three-variable dependency (which does happen, and did in a much earlier example) in which all the pairwise covariances are 0.
2. Note the sequence.
1. First, the correlation matrix has the expectations of the products.
2. Then the covariance matrix corrects for the means not being 0.
3. Finally, the correlation coefficients (not shown here) correct for the variances not being 1.
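The sequence above can be sketched for two variables (my own example data, with $Y = 2X + \text{noise} + 5$, so $\text{Cov}(X,Y)=2$ and $\rho = 2/\sqrt5 \approx 0.89$):

```python
import random

# The Section 6.3 sequence for a pair (X, Y):
# 1. correlation        E[XY]                  (expectation of the product)
# 2. covariance         E[XY] - E[X]E[Y]       (corrects for nonzero means)
# 3. corr. coefficient  cov / (sd_X * sd_Y)    (corrects for variances != 1)
random.seed(3)
n = 100_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [2 * x + random.gauss(0, 1) + 5 for x in xs]  # Y = 2X + noise + 5

ex = sum(xs) / n
ey = sum(ys) / n
exy = sum(x * y for x, y in zip(xs, ys)) / n       # step 1: E[XY]
cov = exy - ex * ey                                # step 2: covariance
var_x = sum((x - ex) ** 2 for x in xs) / n
var_y = sum((y - ey) ** 2 for y in ys) / n
rho = cov / (var_x ** 0.5 * var_y ** 0.5)          # step 3: corr. coefficient
print(cov, rho)  # cov near 2, rho near 0.89
```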