Engineering Probability Class 27 Thurs 2018-04-26
1 Iclicker questions
- Experiment: toss two fair coins, one after the other. Observe two random variables. (The sketch after these questions enumerates the outcomes and checks the answers.)
  - X is the number of heads.
  - Y is the toss on which the first head occurred, with 0 meaning both coins were tails.
- What is P[X=1]?
  - 0
  - 1/4
  - 1/2
  - 3/4
  - 1
- What is P[Y=1]?
  - 0
  - 1/4
  - 1/2
  - 3/4
  - 1
- What is P[Y=1 & X=1]?
  - 0
  - 1/4
  - 1/2
  - 3/4
  - 1
- What is P[Y=1|X=1]?
  - 0
  - 1/4
  - 1/2
  - 3/4
  - 1
- What is P[X=1|Y=1]?
  - 0
  - 1/4
  - 1/2
  - 3/4
  - 1
- What's the MAP estimator for X given Y=2?
  - 0
  - 1
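For a quick check of the answers, here is a minimal Python sketch (my addition, not from the notes) that enumerates the four equally likely outcomes and computes each probability and the MAP estimate directly; the names `X`, `Y`, and `P` are just local helpers.

```python
# Enumerate the 4 equally likely outcomes of tossing two fair coins and
# read off X (number of heads) and Y (toss of the first head, 0 if none).
from fractions import Fraction
from itertools import product

outcomes = list(product("HT", repeat=2))        # ('H','H'), ('H','T'), ('T','H'), ('T','T')
p = Fraction(1, len(outcomes))                  # each outcome has probability 1/4

def X(o):                                       # number of heads
    return o.count("H")

def Y(o):                                       # toss of the first head, 0 if both tails
    return o.index("H") + 1 if "H" in o else 0

def P(event):                                   # probability of an event (a predicate on outcomes)
    return sum(p for o in outcomes if event(o))

print("P[X=1]       =", P(lambda o: X(o) == 1))                                        # 1/2
print("P[Y=1]       =", P(lambda o: Y(o) == 1))                                        # 1/2
print("P[Y=1 & X=1] =", P(lambda o: Y(o) == 1 and X(o) == 1))                          # 1/4
print("P[Y=1 | X=1] =", P(lambda o: Y(o) == 1 and X(o) == 1) / P(lambda o: X(o) == 1)) # 1/2
print("P[X=1 | Y=1] =", P(lambda o: Y(o) == 1 and X(o) == 1) / P(lambda o: Y(o) == 1)) # 1/2

# MAP estimate of X given Y=2: the x that maximizes P[X=x | Y=2].
posterior = {x: P(lambda o: X(o) == x and Y(o) == 2) for x in (0, 1, 2)}
print("MAP of X given Y=2:", max(posterior, key=posterior.get))                        # 1
```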
2 Material from text
2.1 Central limit theorem etc
- Review: almost no matter what distribution the random variable X has, $F_{M_n}$ quickly becomes Gaussian as n increases. n=5 already gives a good approximation (see the simulation sketch at the end of this subsection).
- nice applets:
- http://onlinestatbook.com/stat_sim/normal_approx/index.html This tests how good is the normal approximation to the binomial distribution.
- http://onlinestatbook.com/stat_sim/sampling_dist/index.html This lets you define a distribution, and take repeated samples of a given size. It shows how the means of the samples are distributed. For samples with more than a few observations, the means look fairly normal.
- http://www.umd.umich.edu/casl/socsci/econ/StudyAids/JavaStat/CentralLimitTheorem.html This might also be interesting.
- Sample problems.
- Problem 7.1 on page 402.
- Problem 7.22.
- Problem 7.25.
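As a rough offline counterpart to the applets above (my addition, assuming numpy and scipy are installed): draw many samples of size n=5 from a strongly non-Gaussian distribution, here an exponential chosen arbitrarily, and compare quantiles of the sample mean to the Gaussian with the same mean and standard deviation.

```python
# Sampling distribution of the mean: even for n=5 draws from an exponential
# distribution, quantiles of M_n are already close to the matching Gaussian's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 5, 100_000
means = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)

# Exponential(1) has mu = 1 and sigma = 1, so M_n should be roughly
# Normal(1, 1/sqrt(n)).
for q in (0.05, 0.25, 0.50, 0.75, 0.95):
    empirical = np.quantile(means, q)
    gaussian = stats.norm.ppf(q, loc=1.0, scale=1.0 / np.sqrt(n))
    print(f"q={q:.2f}   empirical={empirical:.3f}   Gaussian approx={gaussian:.3f}")
```

The quantiles roughly agree at n=5, though some right skew inherited from the exponential is still visible in the tails.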
2.2 Chapter 7, p 359, Sums of Random Variables
The long-term goal of this section is to summarize information from a large group of random variables; e.g., the mean is one way. We will start with that and go farther.
The next step is to infer the true mean of a large set of variables from a small sample.
2.3 Sums of random variables ctd
- Let Z=X+Y.
- $f_Z$ is the convolution of $f_X$ and $f_Y$: $$f_Z(z) = (f_X * f_Y)(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(z-x)\, dx$$
- Characteristic functions are useful. $$\Phi_X(\omega) = E[e^{j\omega X} ]$$
- $\Phi_Z = \Phi_X \Phi_Y$.
- This extends to the sum of n random variables: if $Z=\sum_i X_i$ then $\Phi_Z (\omega) = \prod_i \Phi_{X_i} (\omega)$. (The first sketch after this list checks a special case of this numerically.)
- E.g. Exponential with $\lambda=1$: $\Phi_1(\omega) = 1/(1-j\omega)$ (page 164).
- Sum of m exponentials has $\Phi(\omega)= 1/{(1-j\omega)}^m$. That's called an m-Erlang.
- Example 2: sum of n iid Bernoullis. For discrete random variables, the probability generating function is more useful than the characteristic function.
- Example 3: sum of n iid Gaussians. $$\Phi_{X_1} = e^{j\mu\omega - \frac{1}{2} \sigma^2 \omega^2}$$ $$\Phi_{Z} = e^{jn\mu\omega - \frac{1}{2}n \sigma^2 \omega^2}$$ I.e., mean and variance sum.
- As the number of terms increases, no matter what distribution the individual random variables have (provided their moments are finite), the $\Phi$ of the sum starts looking like a Gaussian's.
- The mean $M_n$ of n random variables is itself a random variable.
- As $n\rightarrow\infty$ $M_n \rightarrow \mu$.
- That's a law of large numbers (LLN).
- $E[ M_n ] = \mu$. It's an unbiased estimator.
- $VAR[ M_n ] = \sigma^2 / n$. (The second sketch after this list checks this numerically.)
- Weak law of large numbers $$\forall \epsilon >0 \lim_{n\rightarrow\infty} P[ |M_n-\mu| < \epsilon] = 1$$
- How fast does it happen? We can use Chebyshev, though that is very conservative.
- Strong law of large numbers $$P [ \lim _ {n\rightarrow\infty} M_n = \mu ] =1$$
- As $n\rightarrow\infty$, $F_{M_n}$ becomes Gaussian. That's the Central Limit Theorem (CLT).
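Two small numerical checks of the material above, both my own sketches assuming numpy and scipy. First, the exponential/Erlang statement: the sum of m iid Exponential($\lambda=1$) variables should be m-Erlang, i.e. gamma with integer shape m and scale 1.

```python
# Check by simulation that Z = X1 + ... + Xm, with Xi ~ Exponential(1) iid,
# has the m-Erlang (gamma, shape m, scale 1) distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
m, trials = 3, 200_000
z = rng.exponential(scale=1.0, size=(trials, m)).sum(axis=1)

erlang = stats.gamma(a=m, scale=1.0)            # the m-Erlang distribution
for x in (1.0, 2.0, 4.0, 8.0):
    print(f"x={x}: empirical P[Z<=x]={np.mean(z <= x):.4f}   Erlang CDF={erlang.cdf(x):.4f}")
```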
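Second, the statements about $M_n$: for iid U(0,1) variables, $\mu = 1/2$ and $\sigma^2 = 1/12$, so the sample mean should have mean 1/2 and variance $1/(12n)$, and it should concentrate around $\mu$ as n grows.

```python
# Check that E[M_n] = mu and VAR[M_n] = sigma^2 / n for the mean of n iid U(0,1).
import numpy as np

rng = np.random.default_rng(2)
mu, var = 0.5, 1.0 / 12.0
for n in (10, 100, 1000):
    M = rng.uniform(0.0, 1.0, size=(10_000, n)).mean(axis=1)   # 10,000 realizations of M_n
    print(f"n={n:5d}   mean(M_n)={M.mean():.4f} (mu={mu})   "
          f"var(M_n)={M.var():.6f} (sigma^2/n={var / n:.6f})")
```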
2.4 Chapter 8, Statistics
- We have a population. (E.g., voters in the next election, who will vote Democrat or Republican.)
- We don't know the population mean. (E.g., the fraction of voters who will vote Democrat.)
- We take several samples (observations). From them we want to estimate the population mean and standard deviation. (Ask 1000 potential voters; 520 say they will vote Democrat. The sample mean is .52.)
- We want error bounds on our estimates. (.52 plus or minus .04, 95 times out of 100.)
- Another application: testing whether 2 populations have the same mean. (Is this batch of Guinness as good as the last one?)
- Observations cost money, so we want to do as few as possible.
- This gets beyond this course, but the biggest problems may be non-math ones. E.g., how do you pick a random likely voter? In the past, phone books were used. In a famous 1936 Presidential poll, that biased the sample against poor people, who voted for Roosevelt.
- In probability, we know the parameters (e.g., mean and standard deviation) of a distribution and use them to compute the probability of some event.
  E.g., if we toss a fair coin 4 times, what's the probability of exactly 4 heads? Answer: 1/16.
- In statistics we do not know all the parameters, though we usually know what type the distribution is, e.g., normal. (We often know the standard deviation.)
  - We make observations about some members of the distribution, i.e., draw some samples.
  - From them we estimate the unknown parameters.
  - We often also compute a confidence interval on that estimate.
  - E.g., we toss an unknown coin 100 times and see 60 heads. A good estimate for the probability of that coin coming up heads is 0.6.
- Some estimators are better than others, though that gets beyond this course.
  - Suppose I want to estimate the average height of an RPI student by measuring the heights of N random students.
  - The mean of the highest and lowest heights of my N students would converge to the population mean as N increased.
  - However, the median of my sample would converge faster. Technically, the variance of the sample median is smaller than the variance of the sample hi-lo mean.
  - The mean of my whole sample would converge the fastest. Technically, for a normal population the variance of the sample mean is smaller than the variance of any other unbiased estimator of the population mean. That's why we use it.
  - However, perhaps the population's distribution is not normal. Then one of the other estimators might be better. It would be more robust.
- (Enrichment) How to tell if the population is normal? We can do various plots of the observations and look. We can compute the probability that the observations would be this uneven if the population were normal.
- An estimator may be biased. E.g., we have a distribution that is U[0,b] for unknown b and take a sample of size n. The sample maximum has mean $\frac{n}{n+1}b$, so it is biased low, though it converges to b as n increases. (A quick simulation appears at the end of this section.)
- Example 8.2, page 413: One-tailed probability. This is the probability that the mean of our sample is at least so far above the population mean. $$\alpha = P[\overline{X_n}-\mu > c] = Q\left( \frac{c}{\sigma_X / \sqrt{n} } \right)$$ Q is defined on page 169: $$Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{t^2}{2}} \, dt$$
- Application: You sample n=100 students' verbal SAT scores and see $\overline{X_n} = 550$. You know that $\sigma=100$. If $\mu = 525$, what is the probability that $\overline{X_n} > 550$?
  Answer: $Q\left(\frac{550-525}{100/\sqrt{100}}\right) = Q(2.5) \approx 0.006$.
- This means that if we take 1000 random samples of students, each with 100 students, and measure each sample's mean, then, on average, 6 of those 1000 samples will have a mean over 550.
- This is often worded as "the probability that the population's mean is under 525 is 0.006", which is a different statement. The problem with saying that is that it presumes some probability distribution for the population mean.
- The formula also works for the other tail, computing the probability that our sample mean is at least so far below the population mean.
- The 2-tail probability is the probability that our sample mean is at least this far away from the population mean in either direction. It is twice the 1-tail probability.
- All this also works in reverse: when you know the probability $\alpha$ and want to find the cutoff c. (The second sketch below computes both directions.)
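Two closing sketches, again my additions assuming numpy and scipy. First, the biased estimator above: for samples of size n from U[0,b] (b=10 here, an arbitrary choice), the average of the sample maximum should be close to $\frac{n}{n+1}b$ and approach b as n grows.

```python
# Bias of the sample maximum as an estimator of b for U[0, b].
import numpy as np

rng = np.random.default_rng(3)
b, trials = 10.0, 100_000
for n in (2, 5, 20, 100):
    maxima = rng.uniform(0.0, b, size=(trials, n)).max(axis=1)
    print(f"n={n:3d}   mean of sample max = {maxima.mean():.3f}   "
          f"theory n/(n+1)*b = {n / (n + 1) * b:.3f}")
```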
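Second, the one-tailed calculation and its inverse, using the numbers from the SAT example (n=100, $\sigma=100$, $\mu=525$, $\overline{X_n}=550$) and scipy's standard normal survival function for Q.

```python
# Forward: alpha = Q(c / (sigma/sqrt(n))).  Inverse: given alpha, find c.
import numpy as np
from scipy import stats

n, sigma, mu, xbar = 100, 100.0, 525.0, 550.0
se = sigma / np.sqrt(n)                     # standard deviation of the sample mean

alpha = stats.norm.sf((xbar - mu) / se)     # Q(2.5); sf = 1 - CDF
print(f"P[sample mean > {xbar:.0f} | mu = {mu:.0f}] = Q(2.5) = {alpha:.4f}")   # about 0.006

for a in (0.05, 0.01, 0.006):               # inverse: cutoff c for a given tail probability
    c = stats.norm.isf(a) * se              # isf is the inverse of sf, i.e. of Q
    print(f"alpha = {a}: cutoff c = {c:.1f} points above mu")
```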