Engineering Probability Class 19 Mon 2019-03-25

1   No new homework today

Enjoy GM week. The next homework will be posted Thursday and due the following Thursday.

2   Section 5.7 Conditional probability, ctd

  1. Example 5.35, Maximum A Posteriori Receiver, page 268. (A small simulation sketch of the MAP idea follows this list.)
  2. Example 5.37, page 270.
  3. Remember equations 5.49a,b on pages 269-70: total probability for the conditional expectation of Y given X.
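Here is a minimal simulation sketch of the MAP receiver idea in Example 5.35, plus a total-probability check on E[Y]. The priors and noise level are my own illustrative choices, not the textbook's numbers.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Assumed numbers (my choices, not the book's): unequal priors, noise sigma.
p_plus, p_minus = 0.7, 0.3        # P[X=+1], P[X=-1]
sigma = 0.8                       # noise standard deviation

n = 100_000
x = rng.choice([+1.0, -1.0], size=n, p=[p_plus, p_minus])
y = x + rng.normal(0.0, sigma, size=n)

# MAP rule: choose the x that maximizes f_{Y|X}(y|x) * P[X=x].
post_plus = norm.pdf(y, loc=+1.0, scale=sigma) * p_plus
post_minus = norm.pdf(y, loc=-1.0, scale=sigma) * p_minus
x_hat = np.where(post_plus >= post_minus, +1.0, -1.0)
print("MAP error rate:", np.mean(x_hat != x))

# Total probability for expectation (discrete-X analog of eq. 5.49):
# E[Y] = sum_x E[Y|X=x] P[X=x] = (+1)(0.7) + (-1)(0.3) = 0.4
print("empirical E[Y]:", y.mean())    # should be near 0.4
```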

3   Section 5.8, page 271: Functions of two random variables, ctd

  1. Example 5.39 Sum of Two Random Variables, page 271.

  2. Example 5.40 Sum of Nonindependent Gaussian Random Variables, page 272.

    I'll do an easier case: two independent N(0,1) r.v. The sum is Gaussian with mean 0 and variance 2, i.e., standard deviation $\sqrt{2}$. (This is checked numerically in the sketch after this list.)

  3. Example 5.44, page 275. Transform two independent Gaussian r.v. from $(X,Y)$ to $(R, \theta)$.
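Here is a quick Monte Carlo check (my own sketch, not from the book) of both claims above: the sum of two independent N(0,1) r.v. is Gaussian with variance 2, and the polar transform of two independent N(0,1) r.v. gives a Rayleigh radius and a uniform angle.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Sum of two independent N(0,1): mean 0, variance 2, std sqrt(2) ~ 1.414.
s = x + y
print("mean, std of X+Y:", s.mean(), s.std())

# Polar transform (X,Y) -> (R, Theta):
# R is Rayleigh (E[R] = sqrt(pi/2) ~ 1.2533), Theta is uniform on (-pi, pi].
r = np.hypot(x, y)
theta = np.arctan2(y, x)
print("E[R]:", r.mean())
print("Theta range:", theta.min(), theta.max())
```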

4   Section 5.9, page 278: Pairs of jointly Gaussian r.v.

  1. I will simplify formula 5.61a by assuming that $\mu_X=\mu_Y=0$ and $\sigma_X=\sigma_Y=1$:

    $$f_{XY}(x,y)= \frac{1}{2\pi \sqrt{1-\rho^2}} e^{ \frac{-\left( x^2-2\rho x y + y^2\right)}{2(1-\rho^2)} }$$

  2. The r.v. are dependent unless $\rho=0$; $\rho$ says how strongly.

  3. The formula degenerates if $|\rho|=1$, since the denominator $2(1-\rho^2)$ is zero (and on the line $y=\pm x$ so is the exponent's numerator). However, the underlying distribution is still valid: all the probability mass concentrates on that line. You could try to make sense of the formula there with l'Hopital's rule.

  4. The lines of equal probability density are ellipses.

  5. The marginal pdf is a one-variable Gaussian. (This is verified numerically in the first sketch after this list.)

  6. Example 5.47, page 282: Estimation of signal in noise

    1. This is our perennial example of signal and noise. However, here the signal is not just $\pm1$ but is normal. Our job is to find the "most likely" input signal for a given output.
  7. Important concept in the noisy channel example (with X and N both being Gaussian): The most likely value of X given Y is not Y but is somewhat smaller, depending on the relative sizes of \(\sigma_X\) and \(\sigma_N\). This is true in spite of \(\mu_N=0\). It would be really useful for you to understand this intuitively. Here's one way:

    If you don't know Y, then the most likely value of X is 0. Knowing Y gives you more information, which you combine with your initial info (that X is \(N(0,\sigma_X)\)) to get a new estimate for the most likely X. The smaller the noise, the more valuable Y is. If the noise is very small, then the most likely X is close to Y. If the noise is (on average) very large, then the most likely X is still close to 0. (The second sketch after this list checks this numerically.)
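Two numerical sketches follow. The first checks the simplified pdf above: evaluating $f_{XY}$ on a grid and integrating out $y$ should reproduce the one-variable N(0,1) marginal. The value of $\rho$ is my choice for illustration.

```python
import numpy as np

rho = 0.6                                    # assumed correlation (my choice)
x = np.linspace(-5, 5, 401)
y = np.linspace(-5, 5, 401)
X, Y = np.meshgrid(x, y, indexing="ij")

# Simplified jointly Gaussian pdf (mu = 0, sigma = 1):
f = np.exp(-(X**2 - 2*rho*X*Y + Y**2) / (2*(1 - rho**2))) \
    / (2*np.pi*np.sqrt(1 - rho**2))

# Integrate out y: the marginal should match the standard normal pdf
# (up to grid and truncation error).
marginal = np.trapz(f, y, axis=1)
std_normal = np.exp(-x**2/2) / np.sqrt(2*np.pi)
print("max |marginal - N(0,1) pdf|:", np.abs(marginal - std_normal).max())
```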
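The second sketch simulates the noisy channel and checks the shrinkage claim: conditioning on $Y \approx y_0$, the average (and most likely) $X$ is $y_0\,\sigma_X^2/(\sigma_X^2+\sigma_N^2)$, which lies between 0 and $y_0$. The standard deviations are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_x, sigma_n = 1.0, 0.5            # assumed std devs (my choices)
n = 2_000_000

x = rng.normal(0.0, sigma_x, n)        # signal X
y = x + rng.normal(0.0, sigma_n, n)    # output Y = X + N

# Theory: X | Y=y0 is Gaussian with mean (= mode)
#   y0 * sigma_x^2 / (sigma_x^2 + sigma_n^2), i.e. smaller than y0.
shrink = sigma_x**2 / (sigma_x**2 + sigma_n**2)
y0 = 1.0
sel = np.abs(y - y0) < 0.01            # condition on Y near y0
print("empirical E[X | Y ~ 1]:", x[sel].mean())
print("theory:", shrink * y0)          # 0.8 here: less than y0 = 1
```

Note how the factor behaves at the extremes: as $\sigma_N \to 0$ it goes to 1 (trust Y), and as $\sigma_N \to \infty$ it goes to 0 (fall back on the prior).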

5   Tutorial on probability density - 2 variables

In class 15, I tried to motivate the effect of changing one variable on probability density. Here's an attempt at motivating the effect of changing two variables.

  1. We're throwing darts uniformly at a one foot square dartboard.
  2. We observe two random variables, X and Y: the Cartesian coordinates of where the dart hits.
  3. $$f_{X,Y}(x,y) = \begin{cases} 1& \text{if}\,\, 0\le x\le1 \cap 0\le y\le1\\ 0&\text{otherwise} \end{cases}$$
  4. $$P[.5\le x\le .6 \cap .8\le y\le.9] = \int_{.5}^{.6}\int_{.8}^{.9} f_{XY}(x,y)\, dy \, dx = 0.01 $$
  5. Transform to centimeters: $$\begin{bmatrix}V\\W\end{bmatrix} = \begin{pmatrix}30&0\\0&30\end{pmatrix} \begin{bmatrix}X\\Y\end{bmatrix}$$
  6. $$f_{V,W}(v,w) = \begin{cases} 1/900& \text{if } 0\le v\le30 \cap 0\le w\le30\\ 0&\text{otherwise} \end{cases}$$
  7. $$P[15\le v\le 18 \cap 24\le w\le27] = \\ \int_{15}^{18}\int_{24}^{27} f_{VW}(v,w)\, dw\, dv = \frac{ (18-15)(27-24) }{900} = 0.01$$ (This invariance is checked by simulation after this list.)
  8. See Section 5.8.3 on page 286.
  9. Next time: We've seen 1 r.v., we've seen 2 r.v. Now we'll see several r.v.
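Here is a simulation sketch of the unit-change example (my own check). The same dart event has probability 0.01 whether measured in feet or centimeters; only the density changes, by $1/|\det A| = 1/900$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Darts uniform on the unit square (coordinates in feet).
x = rng.uniform(0.0, 1.0, n)
y = rng.uniform(0.0, 1.0, n)

# Probability in feet: P[.5<=X<=.6 and .8<=Y<=.9], should be ~0.01.
p_feet = np.mean((x >= 0.5) & (x <= 0.6) & (y >= 0.8) & (y <= 0.9))

# Transform to centimeters: (V, W) = 30 (X, Y); density becomes 1/900.
v, w = 30*x, 30*y
p_cm = np.mean((v >= 15) & (v <= 18) & (w >= 24) & (w <= 27))

print(p_feet, p_cm)    # identical: the event is the same set of darts
```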