#Prob ECSE-2500-01 Engineering Probability, Spring 2018, Rensselaer Polytechnic Institute. Course web site: https://wrf.ecse.rpi.edu/Teaching/probability-s2018/. Contents © 2019 <a href="mailto:frankwr@rpi.edu">W Randolph Franklin (WRF), RPI</a>. Thu, 17 Jan 2019 19:14:28 GMT. Generated by Nikola (getnikola.com).
- Engineering Probability Class 29 Mon 2018-05-10 https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class29/ W Randolph Franklin (WRF), RPI<div><div class="contents topic" id="table-of-contents">
<p class="topic-title first">Table of contents</p>
<ul class="auto-toc simple">
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class29/#final-grading-notes" id="id1">1 Final grading notes</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class29/#closing-remarks" id="id2">2 Closing remarks</a></li>
</ul>
</div>
<!-- -->
<style> .red {color:red} </style>
<style> .blue {color:blue} </style><p>This is not an actual class, but a place to present info about the grading.</p>
<div class="section" id="final-grading-notes">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class29/#id1">1 Final grading notes</a></h2>
<p>I made some grade formula changes so that your final total would not go down.</p>
<ol class="arabic simple">
<li>Keep full points for Piazza at 5, not 6. So, some students could go over full points here, but allowing that seemed better than clipping it at 5.</li>
<li>Make full points for iclickers be 9 although there were 14 iclicker days.</li>
<li>Something went wrong on the last iclicker day, so everyone got a point, although half the class was absent.</li>
<li>Use these changes and exam 3 (main and conflict) to compute a total grade, total509 (although it was computed on 5/10).</li>
<li>BTW some students did increase their grades by writing exam 3.</li>
<li>I'd uploaded earlier total grades on 423, 501, and 507.</li>
<li>Make the new total total510=max(total423, total501, total507, total509).</li>
<li>Use the grade cutoffs in the syllabus.</li>
<li>This gives a course GPA=3.3. That's not so bad for a 2000-level course.</li>
<li>I uploaded total510, grade510, and exam3normalized to LMS.</li>
</ol>
</div>
<div class="section" id="closing-remarks">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class29/#id2">2 Closing remarks</a></h2>
<ol class="arabic simple">
<li>I enjoyed teaching this, and hope you learned some fun and useful stuff.</li>
<li>I'm available in the future to discuss and advise on any legal, ethical topic, such as career advice or ideas about problems you may have.</li>
</ol>
</div></div>mathjaxhttps://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class29/Thu, 10 May 2018 04:00:00 GMT
- Engineering Probability Exam 3 - Tues 2018-05-08https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/exam3/W Randolph Franklin (WRF), RPI<div><p>Name, RCSID:</p>
<pre class="literal-block">
.
.
</pre>
<p>Rules:</p>
<ol class="lowerroman simple">
<li>You have 80 minutes.</li>
<li>You may bring three 2-sided 8.5"x11" papers with notes.</li>
<li>You may bring a calculator.</li>
<li>You may not share material with each other during the exam.</li>
<li>No collaboration or communication (except with the staff) is allowed.</li>
<li>Check that your copy of this test has all nine pages.</li>
<li>Each part of a question is worth 5 points.</li>
<li>You may cross out three question parts, which will not be graded.</li>
<li>When answering a question, don't just state your answer, prove it.</li>
</ol>
<p>Questions:</p>
<ol class="arabic">
<li><p class="first">You toss two coins. Each comes up heads half of the time. However, for some funny reason, they both come up heads together, or both come up tails together. Intuitively, they are not independent. This question asks you to prove that from the definition of independence.</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">This time, you toss three coins, A, B, and C. These are the probabilities:</p>
<p>P[TTT] = P[THH] = P[HTH] = P[HHT] = 0</p>
<p>P[TTH] = P[THT] = P[HTT] = P[HHH] = 1/4</p>
<p>My notation is that TTH means that coin A is tails, coin B tails, and coin C heads. Etc.</p>
<ol class="loweralpha">
<li><p class="first">Are the individual coins fair (i.e., heads half the time)?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">Are coins A and B independent?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">Are all 3 coins independent?</p>
<pre class="literal-block">
.
.
</pre>
</li>
</ol>
</li>
<li><p class="first">This question is about a continuous probability distribution on 2 variables.</p>
<p>$$f_{XY}(x,y) = \begin{cases} c x y & \text{ if } (0\le x) \ \& \ (0\le y)\ \& \ (0\le x+y \le 1) \\ 0 & \text{ otherwise}\end{cases}$$</p>
<p>The nonzero region is the triangle with vertices (0,0), (1,0) and (0,1).</p>
<p><em>c</em> is some constant, but I didn't tell you what it is.</p>
<ol class="loweralpha">
<li><p class="first">What is <em>c</em>?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">What is $F_{XY}(x,y)$?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">What is $f_X(x)$?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">Are X and Y independent?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">What is $P[X\le Y]$ ?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">Define a new random variable $Z=X+Y$. What is $F_Z(z)$?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">What is $E[X]$?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">What is $COV[X,Y]$?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">What is $\rho_{X,Y}$?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">What is $f_Y(y|x)$?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">What is $E[Y|x]$?</p>
<pre class="literal-block">
.
.
</pre>
</li>
</ol>
</li>
<li><p class="first">Compute $$\int_0^\infty e^{-x^2} dx$$ .</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">What is the legal range for a correlation coefficient?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">What is the variance of the sum of 100 independent variables, each of which is N(0,1)?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">You have 10 independent random variables. Each is uniform on [0,1]. What is the expected value of the max?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">You toss 10 independent fair coins, one after the other.</p>
<ol class="loweralpha">
<li><p class="first">What is the expected total number of heads, given that the first 5 coins came up heads?</p>
<pre class="literal-block">
.
.
</pre>
</li>
<li><p class="first">What is the probability of the total number of heads being 10, given that the first 5 coins came up heads?</p>
<pre class="literal-block">
.
.
</pre>
</li>
</ol>
</li>
</ol>
<p><em>End of exam 3, total 90 points (considering that 3 questions aren't graded).</em></p></div>mathjaxhttps://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/exam3/Mon, 07 May 2018 04:00:00 GMT
- Engineering Probability Exam 3 solution - Tues 2018-05-08https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/exam3-sol/W Randolph Franklin (WRF), RPI<div><style> .red {color:red} </style>
<style> .blue {color:blue} </style><p>Name, RCSID: <span class="red">W. Randolph Franklin, frankwr</span></p>
<p><span class="red">OK to give the formulas w/o working them out.</span></p>
<p>Rules:</p>
<ol class="lowerroman simple">
<li>You have 80 minutes.</li>
<li>You may bring three 2-sided 8.5"x11" papers with notes.</li>
<li>You may bring a calculator.</li>
<li>You may not share material with each other during the exam.</li>
<li>No collaboration or communication (except with the staff) is allowed.</li>
<li>Check that your copy of this test has all nine pages.</li>
<li>Each part of a question is worth 5 points.</li>
<li>You may cross out three question parts, which will not be graded.</li>
<li>When answering a question, don't just state your answer, prove it.</li>
</ol>
<p>Questions:</p>
<ol class="arabic">
<li><p class="first">You toss two coins. Each comes up heads half of the time. However, for some funny reason, they both come up heads together, or both come up tails together. Intuitively, they are not independent. This question asks you to prove that from the definition of independence.</p>
<p><span class="red">P[HT] = 0. However P[A=H] = P[B=T] = 1/2, so P[HT] != P[A=H]P[B=T]. That's the def of not independent.</span></p>
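<p>Enrichment: the argument can be sanity-checked by enumerating the joint distribution in a few lines of Python (a sketch, not part of the exam):</p>

```python
# The two perfectly correlated coins: both heads or both tails, each w.p. 1/2.
joint = {('H', 'H'): 0.5, ('H', 'T'): 0.0, ('T', 'H'): 0.0, ('T', 'T'): 0.5}

# Marginals: each coin alone is fair.
pA = {s: sum(p for (a, b), p in joint.items() if a == s) for s in 'HT'}
pB = {s: sum(p for (a, b), p in joint.items() if b == s) for s in 'HT'}

# Independence would require P[A=a, B=b] = P[A=a] P[B=b] for every outcome.
independent = all(abs(joint[(a, b)] - pA[a] * pB[b]) < 1e-12
                  for a in 'HT' for b in 'HT')
print(independent)  # False: P[HT] = 0 but P[A=H] P[B=T] = 1/4
```

<p>The check fails exactly at the outcomes HT and TH, where the joint probability is 0 but the product of the marginals is 1/4.</p>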
</li>
<li><p class="first">This time, you toss three coins, A, B, and C. These are the probabilities:</p>
<p>P[TTT] = P[THH] = P[HTH] = P[HHT] = 0</p>
<p>P[TTH] = P[THT] = P[HTT] = P[HHH] = 1/4</p>
<p>My notation is that TTH means that coin A is tails, coin B tails, and coin C heads. Etc.</p>
<ol class="loweralpha">
<li><p class="first">Are the individual coins fair (i.e., heads half the time)?</p>
<p><span class="red">P[A=H] = 0+0+1/4+1/4 = 1/2 so fair. Ditto B and C.</span></p>
</li>
<li><p class="first">Are coins A and B independent?</p>
<p><span class="red">P[A=H,B=H] = 1/4, good. P[A=H,B=T]=1/4, good. P[TH] = 1/4, good. P[TT] = 1/4, good. Yes independent.</span></p>
</li>
<li><p class="first">Are all 3 coins independent?</p>
<p><span class="red">P[HHH] = 1/4 != P[A=H]P[B=H]P[C=H] = 1/8. Not independent.</span></p>
</li>
</ol>
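<p>Enrichment: this question is a classic example of coins that are pairwise independent but not mutually independent. A short Python enumeration (not part of the exam) checks all three parts:</p>

```python
from itertools import product

# The three-coin distribution from this question; 'TTH' means A=T, B=T, C=H.
P = {'TTT': 0.0, 'THH': 0.0, 'HTH': 0.0, 'HHT': 0.0,
     'TTH': 0.25, 'THT': 0.25, 'HTT': 0.25, 'HHH': 0.25}

def marginal(coins, values):
    # P[coin i = corresponding value], where coins are indices 0, 1, 2 = A, B, C.
    return sum(p for w, p in P.items()
               if all(w[i] == v for i, v in zip(coins, values)))

fair = all(marginal((i,), ('H',)) == 0.5 for i in range(3))            # part (a)
pair_ok = all(marginal((0, 1), ab) == marginal((0,), ab[:1]) * marginal((1,), ab[1:])
              for ab in product('HT', repeat=2))                       # part (b)
mutual = P['HHH'] == marginal((0,), ('H',)) ** 3                       # part (c)
print(fair, pair_ok, mutual)  # True True False
```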
</li>
<li><p class="first">This question is about a continuous probability distribution on 2 variables.</p>
<p>$$f_{XY}(x,y) = \begin{cases} c x y & \text{ if } (0\le x) \ \& \ (0\le y)\ \& \ (0\le x+y \le 1) \\ 0 & \text{ otherwise}\end{cases}$$</p>
<p>The nonzero region is the triangle with vertices (0,0), (1,0) and (0,1).</p>
<p><em>c</em> is some constant, but I didn't tell you what it is.</p>
<ol class="loweralpha">
<li><p class="first">What is <em>c</em>?</p>
<p><span class="red">$\int_0^1\int_0^{1-x} xy\ dy\ dx = 1/24$ so $c=24$</span></p>
</li>
<li><p class="first">What is $F_{XY}(x,y)$?</p>
<p><span class="red">$$F_{XY}(x,y)=\begin{cases} 0 & \text{ if } x\le0 \cup y\le0 \\ 1 & \text{ if } x\ge 1 \cap y\ge1 \\ 6x^2y^2 & \text{ if } 0\le x \cap 0\le y \cap x+y\le1 \\ (\int_0^x\int_0^{1-x} + \int_0^{1-y}\int_{1-x}^y + \int_{1-y}^x\int_{1-x}^{1-x_0}) (24x_0y_0\, dy_0\,dx_0) & \text{ otherwise}\end{cases}$$</span></p>
<p><span class="red">The last case above splits the nonzero integration region into two rectangles and a triangle.</span></p>
<p><span class="red">It's also acceptable to draw a figure and say something intelligent w/o being explicit about all the details.</span></p>
</li>
<li><p class="first">What is $f_X(x)$?</p>
<p><span class="red">$f_X(x)= \int_0^{1-x}f_{XY}(x,y) dy = 12x(1-x)^2$</span></p>
<p><span class="red">Note that $\int_0^1 f_X(x)\,dx=1$, which is correct.</span></p>
</li>
<li><p class="first">Are X and Y independent?</p>
<p><span class="red">$f_X(x)=12x(1-x)^2,f_Y(y)=12y(1-y)^2,f_X(x)f_Y(y)\ne f_{XY}(x,y)$</span></p>
<p><span class="red">No.</span></p>
</li>
<li><p class="first">What is $P[X\le Y]$ ?</p>
<p><span class="red">$\int_0^1\int_0^{\min(x,1-x)} f_{XY}(x,y) dy\ dx$. However, since $f_{XY}(x,y) = f_{XY}(y,x)$, $P[X\le Y]=1/2$</span></p>
</li>
<li><p class="first">Define a new random variable $Z=X+Y$. What is $F_Z(z)$?</p>
<p><span class="red">$f_Z(z) = \int_0^z f_{XY}(x,z-x) dx = 24\int_0^z x(z-x)dx$ for $0\le z\le 1$</span></p>
<p><span class="red">$F_Z(z) = \int_0^z f_Z(t)\,dt = z^4$ for $0\le z\le 1$</span></p>
</li>
<li><p class="first">What is $E[X]$?</p>
<p><span class="red">$\int_0^1 xf_X(x)dx = 2/5$</span></p>
</li>
<li><p class="first">What is $COV[X,Y]$?</p>
<p><span class="red">$E[XY] = \int_0^1\int_0^{1-x}xy\,f_{XY}\,dy\, dx=8\int_0^1x^2(1-x)^3 dx$, $E[X]=E[Y]=2/5$, $COV[X,Y]=E[XY]-E[X]E[Y]$. You don't have to work through the math.</span></p>
</li>
<li><p class="first">What is $\rho_{X,Y}$?</p>
<p><span class="red">$\sigma_X^2=\sigma_Y^2= E[X^2]-E[X]^2$</span></p>
<p><span class="red">$\rho_{X,Y}=COV[X,Y]/(\sigma_X\sigma_Y)$</span></p>
</li>
<li><p class="first">What is $f_Y(y|x)$?</p>
<p><span class="red">$f_Y(y|x)=f_{XY}(x,y)/f_X(x) = \dfrac{24xy}{12x(1-x)^2} = \dfrac{2y}{(1-x)^2}$ for $0\le y\le 1-x$</span></p>
</li>
<li><p class="first">What is $E[Y|x]$?</p>
<p><span class="red">$E[Y|x]=\int_0^{1-x} y\,f_Y(y|x)\,dy = \dfrac{2(1-x)}{3}$</span></p>
</li>
</ol>
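<p>Enrichment: the answers for <em>c</em>, $E[X]$, and $P[X\le Y]$ can be spot-checked numerically. This sketch (not part of the exam) uses a plain midpoint Riemann sum over the unit square, with the density zero outside the triangle:</p>

```python
# Riemann-sum spot check: density f(x,y) = 24xy on the triangle x+y <= 1.
n = 600
h = 1.0 / n

def f(x, y):
    return 24.0 * x * y if x + y <= 1.0 else 0.0

total = ex = p_x_le_y = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        m = f(x, y) * h * h
        total += m            # integral of the density: should be 1 if c = 24
        ex += x * m           # E[X]: should be 2/5
        if x <= y:
            p_x_le_y += m     # P[X <= Y]: should be 1/2 by symmetry
print(total, ex, p_x_le_y)
```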
</li>
<li><p class="first">Compute $$\int_0^\infty e^{-x^2} dx$$ .</p>
<p><span class="red">Consider a Gaussian with $\sigma=1/\sqrt{2}$, whose density $\frac{1}{\sqrt{\pi}}e^{-x^2}$ integrates to 1. Working a little, this gives $\int_0^\infty e^{-x^2} dx=\sqrt{\pi}/2=0.886$. It was also ok just to use a calculator.</span></p>
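<p>Enrichment: a numerical check (not part of the exam), integrating out to $x=10$, beyond which $e^{-x^2}$ is negligible:</p>

```python
import math

# Midpoint-rule check of int_0^inf e^{-x^2} dx = sqrt(pi)/2.
# Truncating at x = 10 is safe: e^{-100} is negligible.
n, a, b = 100000, 0.0, 10.0
h = (b - a) / n
s = h * sum(math.exp(-((a + (i + 0.5) * h) ** 2)) for i in range(n))
print(s, math.sqrt(math.pi) / 2)   # both approximately 0.88623
```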
</li>
<li><p class="first">What is the legal range for a correlation coefficient?</p>
<p><span class="red">-1 to 1</span></p>
</li>
<li><p class="first">What is the variance of the sum of 100 independent variables, each of which is N(0,1)?</p>
<p><span class="red">100.</span></p>
</li>
<li><p class="first">You have 10 independent random variables. Each is uniform on [0,1]. What is the expected value of the max?</p>
<p><span class="red">Let $W=\max(X_i). F_W(w)=w^{10}. f_W(w)=10w^9.E[W]=10/11.$</span></p>
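<p>Enrichment: a quick Monte Carlo check (not part of the exam) of $E[W]=10/11\approx 0.909$:</p>

```python
import random

# Monte Carlo sketch: W = max of 10 iid Uniform(0,1) variables; E[W] = 10/11.
random.seed(42)
trials = 200000
est = sum(max(random.random() for _ in range(10)) for _ in range(trials)) / trials
print(est, 10 / 11)   # the estimate should be close to 0.909
```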
</li>
<li><p class="first">You toss 10 independent fair coins, one after the other.</p>
<ol class="loweralpha">
<li><p class="first">What is the expected total number of heads, given that the first 5 coins came up heads?</p>
<p><span class="red">The last 5 coins do not depend on the first 5. So the expectation is $5 + 5\cdot\frac{1}{2} = 7.5$.</span></p>
</li>
<li><p class="first">What is the probability of the total number of heads being 10, given that the first 5 coins came up heads?</p>
<p><span class="red">1/32</span></p>
</li>
</ol>
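<p>Enrichment: both parts can be checked by enumerating the $2^5$ equally likely outcomes of the last 5 coins (a sketch, not part of the exam):</p>

```python
from itertools import product

# Given the first 5 coins are heads, enumerate the 2^5 equally likely
# outcomes of the remaining 5 independent fair coins.
tails = list(product('HT', repeat=5))
totals = [5 + t.count('H') for t in tails]       # total heads among all 10 coins

e_total = sum(totals) / len(totals)                           # part (a)
p_all_ten = sum(1 for t in totals if t == 10) / len(totals)   # part (b)
print(e_total, p_all_ten)   # 7.5 and 0.03125 = 1/32
```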
</li>
</ol>
<p><em>End of exam 3, total 90 points (considering that 3 questions aren't graded).</em></p></div>mathjaxhttps://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/exam3-sol/Mon, 07 May 2018 04:00:00 GMT
- Engineering Probability Class 28 Mon 2018-04-30https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class28/W Randolph Franklin (WRF), RPI<div><div class="contents topic" id="table-of-contents">
<p class="topic-title first">Table of contents</p>
<ul class="auto-toc simple">
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class28/#grades" id="id1">1 Grades</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class28/#material-from-text" id="id2">2 Material from text</a><ul class="auto-toc">
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class28/#hypothesis-testing" id="id3">2.1 Hypothesis testing</a></li>
</ul>
</li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class28/#iclicker-questions" id="id4">3 Iclicker questions</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class28/#counterintuitive-things-in-statistics" id="id5">4 Counterintuitive things in statistics</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class28/#relevant-xkcd-comics" id="id6">5 Relevant Xkcd comics</a></li>
</ul>
</div>
<!-- -->
<style> .red {color:red} </style>
<style> .blue {color:blue} </style><div class="section" id="grades">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class28/#id1">1 Grades</a></h2>
<ol class="arabic simple">
<li>I think I've responded to all grade emails. Please resend any that I overlooked.</li>
<li>Any grade that hasn't been complained about is presumed to be correct.</li>
<li>The conflict exam is Thurs May 10 at 3pm, in a room TBD. It is open only to students with conflicts who wrote me. If you're one of those students, but you don't plan to write it, then please tell me. E.g., a smaller room might then suffice.</li>
<li>We'll try to get updated guaranteed grades uploaded, so you can decide whether to write the final exam.</li>
</ol>
</div>
<div class="section" id="material-from-text">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class28/#id2">2 Material from text</a></h2>
<div class="section" id="hypothesis-testing">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class28/#id3">2.1 Hypothesis testing</a></h3>
<ol class="arabic simple">
<li>Say we want to test whether the average height of an RPI student (called the population) is 2m.</li>
<li>We assume that the distribution is Gaussian (normal) and that the standard deviation of heights is, say, 0.2m.</li>
<li>However we don't know the mean.</li>
<li>We do an experiment and measure the heights of n=100 random students. Their mean height is, say, 1.9m.</li>
<li>The question on the table is, is the population mean 2m?</li>
<li>This is different from the earlier question that we analyzed, which was this: What is the most likely population mean? (Answer: 1.9m.)</li>
<li>Now we have a hypothesis (that the population mean is 2m) that we're testing.</li>
<li>The standard way that this is handled is as follows.</li>
<li>Define a null hypothesis, called H0, that the population mean is 2m.</li>
<li>Define an alternate hypothesis, called HA, that the population mean is not 2m.</li>
<li>Note that we observed our sample mean to be $0.5 \sigma$ below the population mean, if H0 is true.</li>
<li>Each time we rerun the experiment (measure 100 students) we'll observe a different number.</li>
<li>We compute the probability that, if H0 is true, our sample mean would be this far from 2m.</li>
<li>Depending on what our underlying model of students is, we might use a 1-tail or a 2-tail probability.</li>
<li>Perhaps we think that the population mean might be less than 2m but it's not going to be more. Then a 1-tail distribution makes sense.</li>
<li>That is, our assumptions affect the results.</li>
<li>The probability is Q(5): the standard error of the sample mean is $\sigma/\sqrt{n} = 0.2/\sqrt{100} = 0.02$, and $0.1/0.02 = 5$. Q(5) is very small (about $3\times 10^{-7}$).</li>
<li>Therefore we reject H0 and accept HA.</li>
<li>We make a type-1 error if we reject H0 and it was really true. See <a class="reference external" href="http://en.wikipedia.org/wiki/Type_I_and_type_II_errors">http://en.wikipedia.org/wiki/Type_I_and_type_II_errors</a></li>
<li>We make a type-2 error if we accept H0 and it was really false.</li>
<li>These two errors trade off: by reducing the probability of one we increase the probability of the other, for a given sample size.</li>
<li>E.g. in a criminal trial we prefer that a guilty person go free to having an innocent person convicted.</li>
<li>Rejecting H0 says nothing about what the population mean really is, just that it's not likely 2m.</li>
<li>Enrichment: Random sampling is hard. The US government got it wrong here: <a class="reference external" href="http://politics.slashdot.org/story/11/05/13/2249256/Algorithm-Glitch-Voids-Outcome-of-US-Green-Card-Lottery">http://politics.slashdot.org/story/11/05/13/2249256/Algorithm-Glitch-Voids-Outcome-of-US-Green-Card-Lottery</a></li>
<li>Example 8.1 page 412.</li>
<li>Example 8.21 page 442.</li>
<li>Example 8.23.</li>
</ol>
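<p>The height example above can be worked in a few lines of Python. This is a sketch of just this computation, using $Q(x)=\tfrac12\,\mathrm{erfc}(x/\sqrt2)$:</p>

```python
import math

# The example above: H0 says mu = 2 m; known sigma = 0.2 m;
# n = 100 students; observed sample mean 1.9 m.
mu0, sigma, n, m = 2.0, 0.2, 100, 1.9

se = sigma / math.sqrt(n)          # standard error of the sample mean: 0.02
z = (m - mu0) / se                 # about -5: the sample mean is 5 standard errors low

# One-tailed p-value Q(|z|), using Q(x) = erfc(x / sqrt(2)) / 2.
p = 0.5 * math.erfc(abs(z) / math.sqrt(2))
print(z, p)   # roughly -5 and 2.9e-7, so reject H0
```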
</div>
</div>
<div class="section" id="iclicker-questions">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class28/#id4">3 Iclicker questions</a></h2>
<ol class="arabic">
<li><p class="first">Suppose that RPI students' heights have mean 1.8m and standard deviation 0.2m. (These are fictitious numbers.)</p>
<p>You measure a sample of 16 students, and compute the sample mean $m$.</p>
<p>What is E[m]?</p>
<ol class="loweralpha simple">
<li>10</li>
<li>.2</li>
<li>.05</li>
<li>9.8</li>
<li>2.5</li>
</ol>
</li>
<li><p class="first">What is STD[m]?</p>
<ol class="loweralpha simple">
<li>10</li>
<li>.2</li>
<li>.05</li>
<li>9.8</li>
<li>2.5</li>
</ol>
</li>
</ol>
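<p>Enrichment: a simulation sketch of these two questions, using the stated mean 1.8 and standard deviation 0.2. The key fact is that $STD[m]=\sigma/\sqrt{n}=0.2/4=0.05$:</p>

```python
import math
import random

# Simulation sketch: sample mean m of n = 16 heights drawn from a Gaussian
# with mean 1.8 and standard deviation 0.2 (the fictitious numbers above).
random.seed(1)
n, trials = 16, 20000
means = [sum(random.gauss(1.8, 0.2) for _ in range(n)) / n for _ in range(trials)]

e_m = sum(means) / trials
std_m = math.sqrt(sum((x - e_m) ** 2 for x in means) / trials)
print(e_m, std_m)   # E[m] near 1.8; STD[m] near 0.2 / sqrt(16) = 0.05
```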
</div>
<div class="section" id="counterintuitive-things-in-statistics">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class28/#id5">4 Counterintuitive things in statistics</a></h2>
<p>Statistics has some surprising examples, which would appear to be impossible. Here are some.</p>
<ol class="arabic">
<li><p class="first">Average income can increase faster in a whole country than in any part of the country.</p>
<ol class="loweralpha simple">
<li>Consider a country with two parts: east and west.</li>
<li>Each part has 100 people.</li>
<li>Each person in the west makes \$100 per year; each person in the east \$200.</li>
<li>The total income in the west is \$10K, in the east \$20K, and in the whole country \$30K.</li>
<li>The average income in the west is \$100, in the east \$200, and in the whole country \$150.</li>
<li>Assume that next year nothing changes except that one westerner moves east and gets an average eastern job, so he now makes \$200 instead of \$100.</li>
<li>The west now has 99 people @ \$100; its average income didn't change.</li>
<li>The east now has 101 people @ \$200; its average income didn't change.</li>
<li>The whole country's income is \$30100 for an average of \$150.50; that went up.</li>
</ol>
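<p>The arithmetic can be checked directly (a sketch; the numbers are the fictitious ones above):</p>

```python
# The income example above, checked directly.
west, east = [100] * 100, [200] * 100

def mean(xs):
    return sum(xs) / len(xs)

before = mean(west + east)                 # 150.0

# One westerner moves east and takes an average eastern job.
west2, east2 = [100] * 99, [200] * 101
after = mean(west2 + east2)                # 150.5

# Neither region's average changed, yet the national average rose.
print(mean(west), mean(west2), mean(east), mean(east2), before, after)
```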
</li>
<li><p class="first">College acceptance rate surprise.</p>
<ol class="loweralpha">
<li><p class="first">Imagine that we have two groups of people: Albanians and Bostonians.</p>
</li>
<li><p class="first">They're applying to two programs at the university: Engineering and Humanities.</p>
</li>
<li><p class="first">Here are the numbers. The fractions are accepted/applied.</p>
<table border="1" class="docutils">
<colgroup>
<col width="40%">
<col width="20%">
<col width="20%">
<col width="20%">
</colgroup>
<thead valign="bottom">
<tr><th class="head">city-major</th>
<th class="head">Engin</th>
<th class="head">Human</th>
<th class="head">Total</th>
</tr>
</thead>
<tbody valign="top">
<tr><td>Albanians</td>
<td>11/15</td>
<td>2/5</td>
<td>13/20</td>
</tr>
<tr><td>Bostonians</td>
<td>4/5</td>
<td>7/15</td>
<td>11/20</td>
</tr>
<tr><td>Total</td>
<td>15/20</td>
<td>9/20</td>
<td>24/40</td>
</tr>
</tbody>
</table>
<p>E.g, 15 Albanians applied to Engin; 11 were accepted.</p>
</li>
<li><p class="first">Note that in Engineering, a <em>smaller</em> fraction of Albanian applicants was accepted than of Bostonian applicants. <em>(corrected)</em></p>
</li>
<li><p class="first">Ditto in Humanities.</p>
</li>
<li><p class="first">However in all, a <em>larger</em> fraction of Albanian applicants were accepted than Bostonian applicants.</p>
</li>
</ol>
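<p>This is Simpson's paradox. The fractions from the table can be verified exactly (a sketch using the table's numbers):</p>

```python
from fractions import Fraction as F

# Acceptance fractions (accepted / applied) from the table above.
albanians  = {'Engin': F(11, 15), 'Human': F(2, 5),  'Total': F(13, 20)}
bostonians = {'Engin': F(4, 5),   'Human': F(7, 15), 'Total': F(11, 20)}

worse_in_each_major = (albanians['Engin'] < bostonians['Engin'] and
                       albanians['Human'] < bostonians['Human'])
better_overall = albanians['Total'] > bostonians['Total']
print(worse_in_each_major, better_overall)   # True True
```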
</li>
<li><p class="first">I could go on.</p>
</li>
</ol>
</div>
<div class="section" id="relevant-xkcd-comics">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class28/#id6">5 Relevant Xkcd comics</a></h2>
<ol class="arabic simple">
<li><a class="reference external" href="https://xkcd.com/1985/">Meteorologist</a></li>
<li><a class="reference external" href="https://xkcd.com/882/">Significant</a></li>
<li><a class="reference external" href="https://xkcd.com/1478/">P-Values</a></li>
<li><a class="reference external" href="https://xkcd.com/552/">Correlation</a></li>
<li><a class="reference external" href="https://xkcd.com/1725/">Linear Regression</a></li>
<li><a class="reference external" href="https://xkcd.com/925/">Cell Phones</a></li>
<li><a class="reference external" href="https://xkcd.com/1132/">Frequentists vs. Bayesians</a></li>
<li><a class="reference external" href="https://xkcd.com/1236/">Seashell</a></li>
<li><a class="reference external" href="https://xkcd.com/795/">Conditional Risk</a></li>
<li><a class="reference external" href="https://xkcd.com/892/">Null Hypothesis</a></li>
</ol>
</div></div>mathjaxhttps://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class28/Sun, 29 Apr 2018 04:00:00 GMT
- Engineering Probability Class 27 Thurs 2018-04-26https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class27/W Randolph Franklin (WRF), RPI<div><div class="contents topic" id="table-of-contents">
<p class="topic-title first">Table of contents</p>
<ul class="auto-toc simple">
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class27/#iclicker-questions" id="id1">1 Iclicker questions</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class27/#material-from-text" id="id2">2 Material from text</a><ul class="auto-toc">
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class27/#central-limit-theorem-etc" id="id3">2.1 Central limit theorem etc</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class27/#chapter-7-p-359-sums-of-random-variables" id="id4">2.2 Chapter 7, p 359, Sums of Random Variables</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class27/#sums-of-random-variables-ctd" id="id5">2.3 Sums of random variables ctd</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class27/#chapter-8-statistics" id="id6">2.4 Chapter 8, Statistics</a></li>
</ul>
</li>
</ul>
</div>
<!-- -->
<style> .red {color:red} </style>
<style> .blue {color:blue} </style><div class="section" id="iclicker-questions">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class27/#id1">1 Iclicker questions</a></h2>
<ol class="arabic">
<li><p class="first">Experiment: toss two fair coins, one after the other. Observe two random variables:</p>
<ol class="loweralpha simple">
<li>X is the number of heads.</li>
<li>Y is when the first head occurred, with 0 meaning both coins were tails.</li>
</ol>
<p>What is P[X=1]?</p>
<ol class="loweralpha simple">
<li>0</li>
<li>1/4</li>
<li>1/2</li>
<li>3/4</li>
<li>1</li>
</ol>
</li>
<li><p class="first">What is P[Y=1]?</p>
<ol class="loweralpha simple">
<li>0</li>
<li>1/4</li>
<li>1/2</li>
<li>3/4</li>
<li>1</li>
</ol>
</li>
<li><p class="first">What is P[Y=1 & X=1]?</p>
<ol class="loweralpha simple">
<li>0</li>
<li>1/4</li>
<li>1/2</li>
<li>3/4</li>
<li>1</li>
</ol>
</li>
<li><p class="first">What is P[Y=1|X=1]?</p>
<ol class="loweralpha simple">
<li>0</li>
<li>1/4</li>
<li>1/2</li>
<li>3/4</li>
<li>1</li>
</ol>
</li>
<li><p class="first">What is P[X=1|Y=1]?</p>
<ol class="loweralpha simple">
<li>0</li>
<li>1/4</li>
<li>1/2</li>
<li>3/4</li>
<li>1</li>
</ol>
</li>
<li><p class="first">What's the MAP estimator for X given Y=2?</p>
<ol class="loweralpha simple">
<li>0</li>
<li>1</li>
<li>2</li>
<li>3</li>
<li>4</li>
</ol>
</li>
</ol>
</div>
<div class="section" id="material-from-text">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class27/#id2">2 Material from text</a></h2>
<div class="section" id="central-limit-theorem-etc">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class27/#id3">2.1 Central limit theorem etc</a></h3>
<ol class="arabic simple">
<li>Review: for almost any distribution of the random variable X, $F_{M_n}$ quickly becomes Gaussian as n increases; n=5 already gives a good approximation.</li>
<li>nice applets:<ol class="loweralpha">
<li><a class="reference external" href="http://onlinestatbook.com/stat_sim/normal_approx/index.html">http://onlinestatbook.com/stat_sim/normal_approx/index.html</a> This tests how good is the normal approximation to the binomial distribution.</li>
<li><a class="reference external" href="http://onlinestatbook.com/stat_sim/sampling_dist/index.html">http://onlinestatbook.com/stat_sim/sampling_dist/index.html</a> This lets you define a distribution, and take repeated samples of a given size. It shows how the means of the samples are distributed. For samples with more than a few observations, the means look fairly normal.</li>
<li><a class="reference external" href="http://www.umd.umich.edu/casl/socsci/econ/StudyAids/JavaStat/CentralLimitTheorem.html">http://www.umd.umich.edu/casl/socsci/econ/StudyAids/JavaStat/CentralLimitTheorem.html</a> This might also be interesting.</li>
</ol>
</li>
<li>Sample problems.<ol class="loweralpha">
<li>Problem 7.1 on page 402.</li>
<li>Problem 7.22.</li>
<li>Problem 7.25.</li>
</ol>
</li>
</ol>
</div>
<div class="section" id="chapter-7-p-359-sums-of-random-variables">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class27/#id4">2.2 Chapter 7, p 359, Sums of Random Variables</a></h3>
<p>The long term goal of this section is to summarize information from a large
group of random variables. E.g., the mean is one way. We will start with
that, and go farther.</p>
<p>The next step is to infer the true mean of a large set of variables from a
small <strong>sample</strong>.</p>
</div>
<div class="section" id="sums-of-random-variables-ctd">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class27/#id5">2.3 Sums of random variables ctd</a></h3>
<ol class="arabic simple">
<li>Let Z=X+Y.</li>
<li>$f_Z$ is convolution of $f_X$ and $f_Y$: $$f_Z(z) = (f_X * f_Y)(z)$$ $$f_Z(z) = \int f_X(x) f_Y(z-x) dx$$</li>
<li>Characteristic functions are useful. $$\Phi_X(\omega) = E[e^{j\omega X} ]$$</li>
<li>$\Phi_Z = \Phi_X \Phi_Y$.</li>
<li>This extends to the sum of n random variables: if $Z=\sum_i X_i$ then $\Phi_Z (\omega) = \Pi_i \Phi_{X_i} (\omega)$</li>
<li>E.g. Exponential with $\lambda=1$: $\Phi_1(\omega) = 1/(1-j\omega)$ (page 164).</li>
<li>Sum of m exponentials has $\Phi(\omega)= 1/{(1-j\omega)}^m$. That's called an m-Erlang.</li>
<li>Example 2: sum of n iid Bernoullis. Probability generating function is more useful for discrete random variables.</li>
<li>Example 3: sum of n iid Gaussians. $$\Phi_{X_1} = e^{j\mu\omega - \frac{1}{2} \sigma^2 \omega^2}$$ $$\Phi_{Z} = e^{jn\mu\omega - \frac{1}{2}n \sigma^2 \omega^2}$$ I.e., mean and variance sum.</li>
<li>As the number of terms increases, no matter what distribution the initial random variable has (provided that its moments are finite), $\Phi$ of the sum starts looking like a Gaussian's.</li>
<li>The mean $M_n$ of n random variables is itself a random variable.</li>
<li>As $n\rightarrow\infty$ $M_n \rightarrow \mu$.</li>
<li>That's a <strong>law of large numbers</strong> (LLN).</li>
<li>$E[ M_n ] = \mu$. It's an <strong>unbiased estimator</strong>.</li>
<li>$VAR[ M_n ] = \sigma ^2 / n$</li>
<li><strong>Weak law of large numbers</strong> $$\forall \epsilon >0 \lim_{n\rightarrow\infty} P[ |M_n-\mu| < \epsilon] = 1$$</li>
<li>How fast does it happen? We can use Chebyshev, though that is very conservative.</li>
<li><strong>Strong law of large numbers</strong> $$P [ \lim _ {n\rightarrow\infty} M_n = \mu ] =1$$</li>
<li>As $n\rightarrow\infty$, $F_{M_n}$ becomes Gaussian. That's the <strong>Central Limit Theorem</strong> (CLT).</li>
</ol>
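<p>Enrichment: a simulation sketch (not from the text) checking that $E[M_n]=\mu$ and $VAR[M_n]=\sigma^2/n$, using Uniform(0,1) variables, for which $\mu=1/2$ and $\sigma^2=1/12$:</p>

```python
import random

# Simulation sketch: M_n is the mean of n iid Uniform(0,1) variables,
# so mu = 1/2 and sigma^2 = 1/12.  Check E[M_n] = mu and VAR[M_n] = sigma^2 / n.
random.seed(0)
n, trials = 25, 20000
means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]

mu_hat = sum(means) / trials
var_hat = sum((m - mu_hat) ** 2 for m in means) / trials
print(mu_hat, var_hat, (1 / 12) / n)   # variance of the mean shrinks as 1/n
```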
</div>
<div class="section" id="chapter-8-statistics">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class27/#id6">2.4 Chapter 8, Statistics</a></h3>
<ol class="arabic">
<li><p class="first">We have a population. <strong>(E.g., voters in next election, who will vote Democrat or Republican).</strong></p>
</li>
<li><p class="first">We don't know the population mean. <strong>(E.g., fraction of voters who will vote Democrat).</strong></p>
</li>
<li><p class="first">We take several samples (observations). From them we want to estimate the population mean and standard deviation. <strong>(Ask 1000 potential voters; 520 say they will vote Democrat. Sample mean is .52)</strong></p>
</li>
<li><p class="first">We want error bounds on our estimates. <strong>(.52 plus or minus .04, 95 times out of 100)</strong></p>
</li>
<li><p class="first">Another application: testing whether 2 populations have the same mean. <strong>(Is this batch of Guinness as good as the last one?)</strong></p>
</li>
<li><p class="first">Observations cost money, so we want to do as few as possible.</p>
</li>
<li><p class="first">This gets beyond this course, but the biggest problems may be non-math ones. E.g., how do you pick a random likely voter? In the past, phone books were used. In a famous 1936 Presidential poll, that biased the sample against poor people, who voted for Roosevelt.</p>
</li>
<li><p class="first">In <strong>probability</strong>, we know the parameters (e.g., mean and standard deviation) of a distribution and use them to compute the probability of some event.</p>
<p>E.g., if we toss a fair coin 4 times what's the probability of exactly 4 heads? Answer: 1/16.</p>
</li>
<li><p class="first">In <strong>statistics</strong> we do not know all the parameters, though we usually know what type the distribution is, e.g., normal. (We often know the standard deviation.)</p>
<ol class="loweralpha simple">
<li>We make observations about some members of the distribution, i.e., draw some samples.</li>
<li>From them we <strong>estimate</strong> the unknown parameters.</li>
<li>We often also compute a confidence interval on that estimate.</li>
<li>E.g., we toss an unknown coin 100 times and see 60 heads. A good estimate for the probability of that coin coming up heads is 0.6.</li>
</ol>
</li>
<li><p class="first">Some estimators are better than others, though that gets beyond this course.</p>
<ol class="loweralpha simple">
<li>Suppose I want to estimate the average height of an RPI student by measuring the heights of N random students.</li>
<li>The mean of the highest and lowest heights of my N students would converge to the population mean as N increased.</li>
<li>However the median of my sample would converge faster. Technically, the variance of the sample median is smaller than the variance of the sample hi-lo mean.</li>
<li>The mean of my whole sample would converge the fastest. Technically, the variance of the sample mean is smaller than the variance of any other estimator of the population mean. That's why we use it.</li>
<li>However perhaps the population's distribution is not normal. Then one of the other estimators might be better. It would be more <strong>robust</strong>.</li>
</ol>
</li>
<li><p class="first">(Enrichment) How to tell if the population is normal? We can do various plots of the observations and look. We can compute the probability that the observations would be this uneven if the population were normal.</p>
</li>
<li><p class="first">An estimator may be <strong>biased</strong>. We have a distribution that is U[0,b] for unknown b. We take a sample of size n. The max of the sample has mean $\frac{n}{n+1}b$, though it converges to b as n increases.</p>
</li>
<li><p class="first">Example 8.2, page 413: One-tailed probability. This is the probability that the mean of our sample is at least so far above the population mean. $$\alpha = P[\overline{X_n}-\mu > c] = Q\left( \frac{c}{\sigma_x / \sqrt{n} } \right)$$ Q is defined on page 169: $$Q(x) = \int_x^ { \infty} \frac{1}{\sqrt{2\pi} } e^{-\frac{t^2}{2} } dt$$</p>
</li>
<li><p class="first">Application: You sample n=100 students' verbal SAT scores, and see $ \overline{X} = 550$. You know that $\sigma=100$. If $\mu = 525$, what is the probability that $\overline{X_n} > 550$ ?</p>
<p>Answer: Q(2.5) = 0.006</p>
</li>
<li><p class="first">This means that if we take 1000 random samples of students, each with 100 students, and measure each sample's mean, then, on average, 6 of those 1000 samples will have a mean over 550.</p>
</li>
<li><p class="first">This is often worded as: the probability of the population's mean being under 525 is 0.006. That is different. The problem with saying that is that it presumes some probability distribution for the population mean.</p>
</li>
<li><p class="first">The formula also works for the other tail, computing the probability that our sample mean is at least so far <strong>below</strong> the population mean.</p>
</li>
<li><p class="first">The <strong>2-tail probability</strong> is the probability that our sample mean is at least this far away from the population mean in either direction. It is twice the 1-tail probability.</p>
</li>
<li><p class="first">All this also works when you know the probability and want to know c, the cutoff.</p>
</li>
</ol>
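<p>The SAT computation above can be reproduced in a few lines of Python, using the identity $Q(x) = \tfrac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$ (a sketch; the variable names are mine):</p>

```python
import math

def Q(x):
    """Standard normal tail probability P[Z > x]."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# n = 100 students, sigma = 100, hypothesized mu = 525, observed mean 550.
n, sigma, mu, xbar = 100, 100, 525, 550
z = (xbar - mu) / (sigma / math.sqrt(n))   # 2.5 standard errors
print(z, round(Q(z), 4))                   # Q(2.5), about 0.006
```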
</div>
</div></div>mathjaxhttps://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class27/Thu, 26 Apr 2018 04:00:00 GMT
- Engineering Probability Homework 11 due Mon 2018-04-30 2359 ESThttps://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/homework11/W Randolph Franklin (WRF), RPI<div><div class="section" id="how-to-submit">
<h2>How to submit</h2>
<p>Submit to LMS; see details in syllabus.</p>
</div>
<div class="section" id="questions">
<h2>Questions</h2>
<p>All questions are from the text, starting on page 290.</p>
<p>Each part of a question is worth 5 points.</p>
<ol class="arabic simple">
<li>5.133 on page 302 (3 parts).</li>
<li>6.1, p 348. (4 parts)</li>
<li>6.3 (4 parts).</li>
<li>6.4 (3 parts).</li>
<li>6.32, p 352 (2 parts: mean, covariance).</li>
</ol>
<p>Total: 80 points.</p>
</div></div>mathjaxhttps://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/homework11/Mon, 23 Apr 2018 04:00:00 GMT
- Engineering Probability Class 26 Mon 2018-04-23https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/W Randolph Franklin (WRF), RPI<div><div class="contents topic" id="table-of-contents">
<p class="topic-title first">Table of contents</p>
<ul class="auto-toc simple">
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#grades" id="id1">1 Grades</a><ul class="auto-toc">
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#computation" id="id2">1.1 Computation</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#notes" id="id3">1.2 Notes</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#lms" id="id4">1.3 LMS</a></li>
</ul>
</li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#iclicker-questions" id="id5">2 Iclicker questions</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#material-from-text" id="id6">3 Material from text</a><ul class="auto-toc">
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#section-6-5-page-332-estimation-of-random-variables" id="id7">3.1 Section 6.5, page 332: Estimation of random variables</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#central-limit-theorem-etc" id="id8">3.2 Central limit theorem etc</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#chapter-7-p-359-sums-of-random-variables" id="id9">3.3 Chapter 7, p 359, Sums of Random Variables</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#sums-of-random-variables-ctd" id="id10">3.4 Sums of random variables ctd</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#chapter-8-statistics" id="id11">3.5 Chapter 8, Statistics</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#hypothesis-testing" id="id12">3.6 Hypothesis testing</a></li>
</ul>
</li>
</ul>
</div>
<!-- -->
<style> .red {color:red} </style>
<style> .blue {color:blue} </style><div class="section" id="grades">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#id1">1 Grades</a></h2>
<div class="section" id="computation">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#id2">1.1 Computation</a></h3>
<ol class="arabic">
<li><p class="first">This will accumulate the <strong>total</strong> score.</p>
</li>
<li><p class="first">Normalize each homework to 100 points.</p>
<p>Homeworks that have not yet been graded (that's 9 and up) count for 0.</p>
</li>
<li><p class="first">Sum top 10, multiply result by 0.02, and add into total.</p>
</li>
<li><p class="first">Normalize each exam to 30 points.</p>
</li>
<li><p class="first">Add top 2 into total.</p>
</li>
<li><p class="first">Take the number of sessions in which at least one question was answered.</p>
</li>
<li><p class="first">Divide by the total number of sessions minus 2, to help students who missed up to 2 classes.</p>
</li>
<li><p class="first">Normalize that to 10 points and add into total.</p>
</li>
<li><p class="first">Piazza:</p>
<ol class="loweralpha simple">
<li>Divide the semester into 3 parts: up to first test, from then to last class, and after.</li>
<li>Require two contributions for first part, three for second, and one for last.</li>
<li>Add up the number of contributions (max: 6), normalize to 10 points, and add to total.</li>
</ol>
</li>
<li><p class="first">Add the number of knowitall points to total.</p>
</li>
<li><p class="first">Convert total to a letter grade per the syllabus.</p>
</li>
<li><p class="first">Upload total and letter grades to LMS.</p>
</li>
</ol>
</div>
<div class="section" id="notes">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#id3">1.2 Notes</a></h3>
<ol class="arabic simple">
<li>This is guaranteed; your grade cannot be lower (absent detected cheating).</li>
<li>You can compute how the latest homeworks would raise it.</li>
</ol>
</div>
<div class="section" id="lms">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#id4">1.3 LMS</a></h3>
<ol class="arabic">
<li><p class="first">I uploaded 5 columns to LMS.</p>
</li>
<li><p class="first">There are updated iclicker, piazza, and knowitall numbers.</p>
<p>They should include all updates.</p>
</li>
<li><p class="first">Your total numerical grade is in <strong>Total-423</strong>.</p>
</li>
<li><p class="first">Your letter grade is in <strong>Grade-423</strong>.</p>
</li>
<li><p class="first">Ignore other columns with names like total. They are wrong.</p>
</li>
</ol>
</div>
</div>
<div class="section" id="iclicker-questions">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#id5">2 Iclicker questions</a></h2>
<ol class="arabic simple">
<li>X and Y are two uniform r.v. on the interval [0,1]. X and Y are independent. Z=X+Y. What is E[Z]?<ol class="loweralpha">
<li>0</li>
<li>1/2</li>
<li>2/3</li>
<li>1</li>
<li>2</li>
</ol>
</li>
<li>Now let W=max(X,Y). What is E[W]?<ol class="loweralpha">
<li>0</li>
<li>1/2</li>
<li>2/3</li>
<li>1</li>
<li>2</li>
</ol>
</li>
</ol>
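<p>Both iclicker answers can be sanity-checked by simulation. A sketch, assuming nothing beyond the problem statement (X and Y independent U[0,1]):</p>

```python
import random

trials = 200_000
z_sum = w_sum = 0.0
for _ in range(trials):
    x, y = random.random(), random.random()
    z_sum += x + y        # Z = X + Y; E[Z] = E[X] + E[Y] = 1
    w_sum += max(x, y)    # W = max(X, Y); E[W] = 2/3
print(round(z_sum / trials, 3), round(w_sum / trials, 3))
```

<p>E[W] = 2/3 follows analytically from $F_W(w) = w^2$ on [0,1], so $f_W(w) = 2w$ and $E[W] = \int_0^1 2w^2\,dw = 2/3$.</p>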
</div>
<div class="section" id="material-from-text">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#id6">3 Material from text</a></h2>
<div class="section" id="section-6-5-page-332-estimation-of-random-variables">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#id7">3.1 Section 6.5, page 332: Estimation of random variables</a></h3>
<ol class="arabic">
<li><p class="first">Assume that we want to know X but can only see Y, which depends on X.</p>
</li>
<li><p class="first">This is a generalization of our long-running noisy communication channel example. We'll do things a little more precisely now.</p>
</li>
<li><p class="first">Another application would be to estimate tomorrow's price of GOOG (X) given the prices to date (Y).</p>
</li>
<li><p class="first">Sometimes, but not always, we have a prior probability for X.</p>
</li>
<li><p class="first">For the communication channel we do; for GOOG, we don't.</p>
</li>
<li><p class="first">If we do, it's a <em>maximum a posteriori estimator</em>.</p>
</li>
<li><p class="first">If we don't, it's a <em>maximum likelihood estimator</em>. We effectively assume that the prior probability of X is uniform, even though that may not completely make sense.</p>
</li>
<li><p class="first">You toss a fair coin 3 times. X is the number of heads, from 0 to 3. Y is the position of the 1st head; if there are no heads, we'll say that the first head's position is 0, so Y also ranges from 0 to 3.</p>
<table border="1" class="docutils">
<colgroup>
<col width="46%">
<col width="54%">
</colgroup>
<thead valign="bottom">
<tr><th class="head">(X,Y)</th>
<th class="head">p(X,Y)</th>
</tr>
</thead>
<tbody valign="top">
<tr><td>(0,0)</td>
<td>1/8</td>
</tr>
<tr><td>(1,1)</td>
<td>1/8</td>
</tr>
<tr><td>(1,2)</td>
<td>1/8</td>
</tr>
<tr><td>(1,3)</td>
<td>1/8</td>
</tr>
<tr><td>(2,1)</td>
<td>2/8</td>
</tr>
<tr><td>(2,2)</td>
<td>1/8</td>
</tr>
<tr><td>(3,1)</td>
<td>1/8</td>
</tr>
</tbody>
</table>
<p>E.g., 1 head can occur 3 ways (out of 8 equally likely outcomes): HTT, THT, TTH. In one of those ways, HTT, the 1st (and only) head occurs in position 1, so p(1,1)=1/8.</p>
</li>
<li><p class="first">Conditional probabilities:</p>
<table border="1" class="docutils">
<colgroup>
<col width="48%">
<col width="10%">
<col width="10%">
<col width="23%">
<col width="10%">
</colgroup>
<thead valign="bottom">
<tr><th class="head">p(x|y)</th>
<th class="head">y=0</th>
<th class="head">y=1</th>
<th class="head">y=2</th>
<th class="head">y=3</th>
</tr>
</thead>
<tbody valign="top">
<tr><td>x=0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr><td>x=1</td>
<td>0</td>
<td>1/4</td>
<td>1/2</td>
<td>1</td>
</tr>
<tr><td>x=2</td>
<td>0</td>
<td>1/2</td>
<td>1/2</td>
<td>0</td>
</tr>
<tr><td>x=3</td>
<td>0</td>
<td>1/4</td>
<td>0</td>
<td>0</td>
</tr>
<tr><td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr><td>$g_{MAP}(y)$</td>
<td>0</td>
<td>2</td>
<td>1 or 2</td>
<td>1</td>
</tr>
<tr><td>$P_{error}(y)$</td>
<td>0</td>
<td>1/2</td>
<td>1/2</td>
<td>0</td>
</tr>
<tr><td>p(y)</td>
<td>1/8</td>
<td>1/2</td>
<td>1/4</td>
<td>1/8</td>
</tr>
</tbody>
</table>
<p>The total probability of error is 3/8.</p>
</li>
<li><p class="first">We observe Y and want to guess X from Y. E.g., If we observe $$\small y= \begin{pmatrix}0\\1\\2\\3\end{pmatrix} \text{then } x= \begin{pmatrix}0\\ 2 \text{ most likely} \\ 1, 2 \text{ equally likely} \\ 1 \end{pmatrix}$$</p>
</li>
<li><p class="first">There are different formulae. The above one was the MAP, maximum a posteriori probability.</p>
<p>$$g_{\text{MAP}} (y) = \arg\max_x p_x(x|y) \text{ or } \arg\max_x f_x(x|y)$$</p>
<p>That is, the value of $x$ that maximizes $p_x(x|y)$.</p>
</li>
<li><p class="first">What if we don't know p(x|y)? If we know p(y|x), we can use Bayes. We might measure p(y|x) experimentally, e.g., by sending many messages over the channel.</p>
</li>
<li><p class="first">Bayes requires p(x). What if we don't know even that? E.g. we don't know the probability of the different possible transmitted messages.</p>
</li>
<li><p class="first">Then use the maximum likelihood estimator, ML: $$g_{\text{ML}} (y) = \arg\max_x p_y(y|x) \text{ or } \arg\max_x f_y(y|x)$$</p>
</li>
<li><p class="first">There are other estimators for different applications. E.g., regression using least squares might attempt to predict a graduate's QPA from his/her entering SAT scores. At Saratoga in August we might attempt to predict a horse's chance of winning a race from its speed in previous races. Some years ago, an Engineering Assoc Dean would do that each summer.</p>
</li>
<li><p class="first">Historically, IMO, some of the techniques, like least squares and logistic regression, have been used more because they're computationally easy than because they're logically justified.</p>
</li>
</ol>
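<p>The MAP rule for the coin example can be computed directly from the joint pmf table above. A sketch in Python using exact fractions (the function name is mine):</p>

```python
from fractions import Fraction as F

# Joint pmf p(x, y) from the 3-toss coin example:
# x = number of heads, y = position of the 1st head (0 if no heads).
p = {(0, 0): F(1, 8), (1, 1): F(1, 8), (1, 2): F(1, 8), (1, 3): F(1, 8),
     (2, 1): F(2, 8), (2, 2): F(1, 8), (3, 1): F(1, 8)}

def g_map(y):
    """MAP estimate: the x maximizing p(x|y); for fixed y that is the x
    maximizing p(x, y).  The tie at y = 2 returns one of the maximizers."""
    cands = {x: pxy for (x, yy), pxy in p.items() if yy == y}
    return max(cands, key=cands.get)

for y in range(4):
    print(y, g_map(y))
```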
</div>
<div class="section" id="central-limit-theorem-etc">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#id8">3.2 Central limit theorem etc</a></h3>
<ol class="arabic simple">
<li>Review: Almost no matter what distribution the random variable X has, $F_{M_n}$ quickly becomes Gaussian as n increases; n=5 already gives a good approximation.</li>
<li>nice applets:<ol class="loweralpha">
<li><a class="reference external" href="http://onlinestatbook.com/stat_sim/normal_approx/index.html">http://onlinestatbook.com/stat_sim/normal_approx/index.html</a> This tests how good the normal approximation to the binomial distribution is.</li>
<li><a class="reference external" href="http://onlinestatbook.com/stat_sim/sampling_dist/index.html">http://onlinestatbook.com/stat_sim/sampling_dist/index.html</a> This lets you define a distribution, and take repeated samples of a given size. It shows how the means of the samples are distributed. For samples with more than a few observations, they look fairly normal.</li>
<li><a class="reference external" href="http://www.umd.umich.edu/casl/socsci/econ/StudyAids/JavaStat/CentralLimitTheorem.html">http://www.umd.umich.edu/casl/socsci/econ/StudyAids/JavaStat/CentralLimitTheorem.html</a> This might also be interesting.</li>
</ol>
</li>
<li>Sample problems.<ol class="loweralpha">
<li>Problem 7.1 on page 402.</li>
<li>Problem 7.22.</li>
<li>Problem 7.25.</li>
</ol>
</li>
</ol>
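<p>In the spirit of the first applet, here is a small self-contained check of how good the normal approximation to the binomial is, with a continuity correction (the particular n, p, and k are an arbitrary illustrative choice):</p>

```python
import math

def binom_cdf(n, p, k):
    """Exact P[X <= k] for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def normal_cdf(x):
    """Standard normal CDF via erfc."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

n, p, k = 50, 0.3, 18
mu, sd = n * p, math.sqrt(n * p * (1 - p))
exact = binom_cdf(n, p, k)
approx = normal_cdf((k + 0.5 - mu) / sd)   # continuity correction
print(round(exact, 4), round(approx, 4))
```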
</div>
<div class="section" id="chapter-7-p-359-sums-of-random-variables">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#id9">3.3 Chapter 7, p 359, Sums of Random Variables</a></h3>
<p>The long term goal of this section is to summarize information from a large
group of random variables. E.g., the mean is one way. We will start with
that, and go farther.</p>
<p>The next step is to infer the true mean of a large set of variables from a
small <strong>sample</strong>.</p>
</div>
<div class="section" id="sums-of-random-variables-ctd">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#id10">3.4 Sums of random variables ctd</a></h3>
<ol class="arabic simple">
<li>Let Z=X+Y.</li>
<li>$f_Z$ is convolution of $f_X$ and $f_Y$: $$f_Z(z) = (f_X * f_Y)(z)$$ $$f_Z(z) = \int f_X(x) f_Y(z-x) dx$$</li>
<li>Characteristic functions are useful. $$\Phi_X(\omega) = E[e^{j\omega X} ]$$</li>
<li>$\Phi_Z = \Phi_X \Phi_Y$.</li>
<li>This extends to the sum of n random variables: if $Z=\sum_i X_i$ then $\Phi_Z (\omega) = \prod_i \Phi_{X_i} (\omega)$</li>
<li>E.g. Exponential with $\lambda=1$: $\Phi_1(\omega) = 1/(1-j\omega)$ (page 164).</li>
<li>Sum of m exponentials has $\Phi(\omega)= 1/{(1-j\omega)}^m$. That's called an m-Erlang.</li>
<li>Example 2: sum of n iid Bernoullis. The probability generating function is more useful for discrete random variables.</li>
<li>Example 3: sum of n iid Gaussians. $$\Phi_{X_1} = e^{j\mu\omega - \frac{1}{2} \sigma^2 \omega^2}$$ $$\Phi_{Z} = e^{jn\mu\omega - \frac{1}{2}n \sigma^2 \omega^2}$$ I.e., mean and variance sum.</li>
<li>As the number of summands increases, no matter what distribution the initial random variable has (provided that its moments are finite), $\Phi_Z$ starts looking like the characteristic function of a Gaussian.</li>
<li>The sample mean $M_n$ of n iid random variables is itself a random variable.</li>
<li>As $n\rightarrow\infty$, $M_n \rightarrow \mu$.</li>
<li>That's a <strong>law of large numbers</strong> (LLN).</li>
<li>$E[ M_n ] = \mu$. It's an <strong>unbiased estimator</strong>.</li>
<li>$VAR[ M_n ] = \sigma^2 / n$</li>
<li><strong>Weak law of large numbers</strong> $$\forall \epsilon >0 \lim_{n\rightarrow\infty} P[ |M_n-\mu| < \epsilon] = 1$$</li>
<li>How fast does it happen? We can use Chebyshev, though that is very conservative.</li>
<li><strong>Strong law of large numbers</strong> $$P [ \lim _ {n\rightarrow\infty} M_n = \mu ] =1$$</li>
<li>As $n\rightarrow\infty$, $F_{M_n}$ becomes Gaussian. That's the <strong>Central Limit Theorem</strong> (CLT).</li>
</ol>
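<p>The facts about the sample mean ($E[M_n] = \mu$ and $VAR[M_n] = \sigma^2/n$) are easy to see by simulation. A sketch, with $\mu$, $\sigma$, and n chosen arbitrarily:</p>

```python
import random
import statistics

mu, sigma, n, trials = 3.0, 2.0, 16, 20_000

# Draw many independent sample means M_n of n iid N(mu, sigma^2) variables.
means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(trials)]

print(round(statistics.fmean(means), 2))      # close to mu = 3
print(round(statistics.variance(means), 3))   # close to sigma^2 / n = 0.25
```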
</div>
<div class="section" id="chapter-8-statistics">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#id11">3.5 Chapter 8, Statistics</a></h3>
<ol class="arabic">
<li><p class="first">We have a population. <strong>(E.g., voters in next election, who will vote Democrat or Republican).</strong></p>
</li>
<li><p class="first">We don't know the population mean. <strong>(E.g., fraction of voters who will vote Democrat).</strong></p>
</li>
<li><p class="first">We take several samples (observations). From them we want to estimate the population mean and standard deviation. <strong>(Ask 1000 potential voters; 520 say they will vote Democrat. Sample mean is .52)</strong></p>
</li>
<li><p class="first">We want error bounds on our estimates. <strong>(.52 plus or minus .04, 95 times out of 100)</strong></p>
</li>
<li><p class="first">Another application: testing whether 2 populations have the same mean. <strong>(Is this batch of Guinness as good as the last one?)</strong></p>
</li>
<li><p class="first">Observations cost money, so we want to do as few as possible.</p>
</li>
<li><p class="first">This gets beyond this course, but the biggest problems may be non-math ones. E.g., how do you pick a random likely voter? In the past, phone books were used. In a famous 1936 Presidential poll, that biased the sample against poor people, who voted for Roosevelt.</p>
</li>
<li><p class="first">In <strong>probability</strong>, we know the parameters (e.g., mean and standard deviation) of a distribution and use them to compute the probability of some event.</p>
<p>E.g., if we toss a fair coin 4 times what's the probability of exactly 4 heads? Answer: 1/16.</p>
</li>
<li><p class="first">In <strong>statistics</strong> we do not know all the parameters, though we usually know what type the distribution is, e.g., normal. (We often know the standard deviation.)</p>
<ol class="loweralpha simple">
<li>We make observations about some members of the distribution, i.e., draw some samples.</li>
<li>From them we <strong>estimate</strong> the unknown parameters.</li>
<li>We often also compute a confidence interval on that estimate.</li>
<li>E.g., we toss an unknown coin 100 times and see 60 heads. A good estimate for the probability of that coin coming up heads is 0.6.</li>
</ol>
</li>
<li><p class="first">Some estimators are better than others, though that gets beyond this course.</p>
<ol class="arabic simple">
<li>Suppose I want to estimate the average height of an RPI student by measuring the heights of N random students.</li>
<li>The mean of the highest and lowest heights of my N students would converge to the population mean as N increased.</li>
<li>However the median of my sample would converge faster. Technically, the variance of the sample median is smaller than the variance of the sample hi-lo mean.</li>
<li>The mean of my whole sample would converge the fastest. Technically, the variance of the sample mean is smaller than the variance of any other estimator of the population mean. That's why we use it.</li>
<li>However perhaps the population's distribution is not normal. Then one of the other estimators might be better. It would be more <strong>robust</strong>.</li>
</ol>
</li>
<li><p class="first">(Enrichment) How to tell if the population is normal? We can do various plots of the observations and look. We can compute the probability that the observations would be this uneven if the population were normal.</p>
</li>
<li><p class="first">An estimator may be <strong>biased</strong>. We have a distribution that is U[0,b] for unknown b. We take a sample of size n. The max of the sample has mean $\frac{n}{n+1}b$, though it converges to b as n increases.</p>
</li>
<li><p class="first">Example 8.2, page 413: One-tailed probability. This is the probability that the mean of our sample is at least so far above the population mean. $$\alpha = P[\overline{X_n}-\mu > c] = Q\left( \frac{c}{\sigma_x / \sqrt{n} } \right)$$ Q is defined on page 169: $$Q(x) = \int_x^ { \infty} \frac{1}{\sqrt{2\pi} } e^{-\frac{t^2}{2} } dt$$</p>
</li>
<li><p class="first">Application: You sample n=100 students' verbal SAT scores, and see $ \overline{X} = 550$. You know that $\sigma=100$. If $\mu = 525$, what is the probability that $\overline{X_n} > 550$ ?</p>
<p>Answer: Q(2.5) = 0.006</p>
</li>
<li><p class="first">This means that if we take 1000 random samples of students, each with 100 students, and measure each sample's mean, then, on average, 6 of those 1000 samples will have a mean over 550.</p>
</li>
<li><p class="first">This is often worded as: the probability of the population's mean being under 525 is 0.006. That is different. The problem with saying that is that it presumes some probability distribution for the population mean.</p>
</li>
<li><p class="first">The formula also works for the other tail, computing the probability that our sample mean is at least so far <strong>below</strong> the population mean.</p>
</li>
<li><p class="first">The <strong>2-tail probability</strong> is the probability that our sample mean is at least this far away from the population mean in either direction. It is twice the 1-tail probability.</p>
</li>
<li><p class="first">All this also works when you know the probability and want to know c, the cutoff.</p>
</li>
</ol>
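<p>Going from a given probability back to the cutoff c can be done numerically, since Q is strictly decreasing. A sketch using bisection (the function names are mine):</p>

```python
import math

def Q(x):
    """Standard normal tail probability P[Z > x]."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def cutoff(alpha, sigma, n):
    """Find c with Q(c / (sigma/sqrt(n))) = alpha, by bisection."""
    se = sigma / math.sqrt(n)
    lo, hi = 0.0, 100.0 * se
    for _ in range(60):
        mid = (lo + hi) / 2
        if Q(mid / se) > alpha:   # tail still too heavy: c must grow
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# With sigma = 100 and n = 100 (standard error 10), the 5% one-tail
# cutoff is about 1.645 standard errors, i.e. c near 16.45.
print(round(cutoff(0.05, 100, 100), 2))
```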
</div>
<div class="section" id="hypothesis-testing">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/#id12">3.6 Hypothesis testing</a></h3>
<ol class="arabic simple">
<li>Say we want to test whether the average height of an RPI student (called the population) is 2m.</li>
<li>We assume that the distribution is Gaussian (normal) and that the standard deviation of heights is, say, 0.2m.</li>
<li>However we don't know the mean.</li>
<li>We do an experiment and measure the heights of n=100 random students. Their mean height is, say, 1.9m.</li>
<li>The question on the table is, is the population mean 2m?</li>
<li>This is different from the earlier question that we analyzed, which was this: What is the most likely population mean? (Answer: 1.9m.)</li>
<li>Now we have a hypothesis (that the population mean is 2m) that we're testing.</li>
<li>The standard way that this is handled is as follows.</li>
<li>Define a null hypothesis, called H0, that the population mean is 2m.</li>
<li>Define an alternate hypothesis, called HA, that the population mean is not 2m.</li>
<li>Note that we observed our sample mean to be $0.5 \sigma$ (i.e., 5 standard errors, since $\sigma/\sqrt{n} = 0.02$ m) below the population mean, if H0 is true.</li>
<li>Each time we rerun the experiment (measure 100 students) we'll observe a different number.</li>
<li>We compute the probability that, if H0 is true, our sample mean would be this far from 2m.</li>
<li>Depending on what our underlying model of students is, we might use a 1-tail or a 2-tail probability.</li>
<li>Perhaps we think that the population mean might be less than 2m but it's not going to be more. Then a 1-tail distribution makes sense.</li>
<li>That is, our assumptions affect the results.</li>
<li>The probability is Q(5), which is very small.</li>
<li>Therefore we reject H0 and accept HA.</li>
<li>We make a type-1 error if we reject H0 and it was really true. See <a class="reference external" href="http://en.wikipedia.org/wiki/Type_I_and_type_II_errors">http://en.wikipedia.org/wiki/Type_I_and_type_II_errors</a></li>
<li>We make a type-2 error if we accept H0 and it was really false.</li>
<li>These two errors trade off: by reducing the probability of one we increase the probability of the other, for a given sample size.</li>
<li>E.g. in a criminal trial we prefer that a guilty person go free to having an innocent person convicted.</li>
<li>Rejecting H0 says nothing about what the population mean really is, just that it's not likely 2m.</li>
<li><dl class="first docutils">
<dt>(Enrichment) Random sampling is hard. The US government got it wrong here:</dt>
<dd><a class="reference external" href="http://politics.slashdot.org/story/11/05/13/2249256/Algorithm-Glitch-Voids-Outcome-of-US-Green-Card-Lottery">http://politics.slashdot.org/story/11/05/13/2249256/Algorithm-Glitch-Voids-Outcome-of-US-Green-Card-Lottery</a></dd>
</dl>
</li>
</ol>
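<p>The height example above, written as a one-sample z-test with known $\sigma$, is only a few lines of Python (a sketch; the 0.05 significance level is a conventional choice, not from the notes):</p>

```python
import math

def Q(x):
    """Standard normal tail probability P[Z > x]."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# H0: mu = 2.0 m.  Known sigma = 0.2 m, n = 100 students, sample mean 1.9 m.
mu0, sigma, n, xbar = 2.0, 0.2, 100, 1.9
z = (mu0 - xbar) / (sigma / math.sqrt(n))   # 5 standard errors below mu0
p_one_tail = Q(z)                           # Q(5): very small
print(z, p_one_tail)
print("reject H0" if p_one_tail < 0.05 else "do not reject H0")
```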
</div>
</div></div>mathjaxhttps://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class26/Sun, 22 Apr 2018 04:00:00 GMT
- Engineering Probability Class 25 Thu 2018-04-19https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/W Randolph Franklin (WRF), RPI<div><div class="contents topic" id="table-of-contents">
<p class="topic-title first">Table of contents</p>
<ul class="auto-toc simple">
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#grades" id="id1">1 Grades</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#handwritten-notes-and-homework-solutions" id="id2">2 Handwritten notes and homework solutions</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#iclicker-questions" id="id3">3 Iclicker questions</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#material-from-text" id="id4">4 Material from text</a><ul class="auto-toc">
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#joint-distribution-functions-ctd" id="id5">4.1 6.1.2 Joint Distribution Functions, ctd.</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#independence" id="id6">4.2 6.1.3 Independence</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#functions-of-several-random-variables" id="id7">4.3 6.2 Functions of several random variables</a><ul class="auto-toc">
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#one-function-of-several-random-variables" id="id8">4.3.1 6.2.1 One Function of Several Random Variables</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#transformations-of-random-vectors" id="id9">4.3.2 6.2.2 Transformations of Random Vectors</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#pdf-of-general-transformations" id="id10">4.3.3 6.2.3 pdf of General Transformations</a></li>
</ul>
</li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#expected-values-of-vector-random-variables" id="id11">4.4 6.3 Expected values of vector random variables</a></li>
</ul>
</li>
</ul>
</div>
<!-- -->
<style> .red {color:red} </style>
<style> .blue {color:blue} </style><div class="section" id="grades">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#id1">1 Grades</a></h2>
<ol class="arabic simple">
<li>I'll try to upload a guaranteed minimum grade by the end of tomorrow. That will assume that all the grades that I don't yet have are zero.</li>
<li>There will be eleven homeworks.</li>
</ol>
</div>
<div class="section" id="handwritten-notes-and-homework-solutions">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#id2">2 Handwritten notes and homework solutions</a></h2>
<p>I added buttons to the page headers that go directly there.</p>
</div>
<div class="section" id="iclicker-questions">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#id3">3 Iclicker questions</a></h2>
<ol class="arabic simple">
<li>What is $$\int_{-\infty}^\infty e^{\big(-\frac{x^2}{2}\big)} dx$$?<ol class="loweralpha">
<li>1/2</li>
<li>1</li>
<li>$2\pi$</li>
<li>$\sqrt{2\pi}$</li>
<li>$1/\sqrt{2\pi}$</li>
</ol>
</li>
<li>What is the largest possible value for a correlation coefficient?<ol class="loweralpha">
<li>1/2</li>
<li>1</li>
<li>$2\pi$</li>
<li>$\sqrt{2\pi}$</li>
<li>$1/\sqrt{2\pi}$</li>
</ol>
</li>
<li>The most reasonable probability distribution for the number of defects on an integrated circuit caused by dust particles, cosmic rays, etc, is<ol class="loweralpha">
<li>Exponential</li>
<li>Poisson</li>
<li>Normal</li>
<li>Uniform</li>
<li>Binomial</li>
</ol>
</li>
<li>The most reasonable probability distribution for the time until the next request hits your web server is:<ol class="loweralpha">
<li>Exponential</li>
<li>Poisson</li>
<li>Normal</li>
<li>Uniform</li>
<li>Binomial</li>
</ol>
</li>
<li>If you add two independent normal random variables, each with variance 10, what is the variance of the sum?<ol class="loweralpha">
<li>1</li>
<li>$\sqrt2$</li>
<li>10</li>
<li>$10\sqrt2$</li>
<li>20</li>
</ol>
</li>
</ol>
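<p>The first and last questions above can be checked numerically. Here is a quick Python sketch (not part of the class materials) using only standard results:</p>

```python
# Numerical check of iclicker questions 1 and 5 above (a sketch, not
# part of the course materials).
import math
import random

# Q1: the integral of exp(-x^2/2) over the real line is sqrt(2*pi).
# Midpoint rule on [-10, 10]; the tails beyond that are negligible.
n = 200000
a, b = -10.0, 10.0
h = (b - a) / n
integral = h * sum(math.exp(-(a + (i + 0.5) * h) ** 2 / 2) for i in range(n))
assert abs(integral - math.sqrt(2 * math.pi)) < 1e-6

# Q5: variances of independent r.v.'s add, so Var(X + Y) = 10 + 10 = 20.
random.seed(0)
s = [random.gauss(0, math.sqrt(10)) + random.gauss(0, math.sqrt(10))
     for _ in range(200000)]
m = sum(s) / len(s)
var = sum((x - m) ** 2 for x in s) / len(s)
assert abs(var - 20) < 0.5
```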
</div>
<div class="section" id="material-from-text">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#id4">4 Material from text</a></h2>
<div class="section" id="joint-distribution-functions-ctd">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#id5">4.1 6.1.2 Joint Distribution Functions, ctd.</a></h3>
<ol class="arabic simple">
<li>joint cumulative distribution function, p 305.</li>
<li>marginal cdf’s</li>
<li>joint probability mass function</li>
<li>conditional pmf’s</li>
<li>jointly continuous random variables</li>
<li>joint probability density function.</li>
<li>marginal pdf’s</li>
<li>conditional pdf’s</li>
<li>Example 6.7 Multiplicative Sequence, p 308.</li>
</ol>
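<p>A tiny worked example of this vocabulary (the joint pmf values below are made up):</p>

```python
# Joint pmf, marginal pmf's, and a conditional pmf for two binary
# random variables.  The joint pmf values are made up for illustration.
# pmf[x][y] = P[X = x, Y = y].
pmf = [[0.1, 0.2],
       [0.3, 0.4]]

# Marginal pmf's: sum the joint pmf over the other variable.
px = [sum(row) for row in pmf]            # P[X = x]
py = [sum(col) for col in zip(*pmf)]      # P[Y = y]

# Conditional pmf: P[Y = y | X = x] = P[X = x, Y = y] / P[X = x].
p_y_given_x0 = [pmf[0][y] / px[0] for y in (0, 1)]

# Sanity checks: every pmf sums to 1 (up to float rounding).
assert abs(sum(px) - 1) < 1e-12
assert abs(sum(py) - 1) < 1e-12
assert abs(sum(p_y_given_x0) - 1) < 1e-12
```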
</div>
<div class="section" id="independence">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#id6">4.2 6.1.3 Independence</a></h3>
<ol class="arabic simple">
<li>Example 6.8 Independence.</li>
</ol>
</div>
<div class="section" id="functions-of-several-random-variables">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#id7">4.3 6.2 Functions of several random variables</a></h3>
<div class="section" id="one-function-of-several-random-variables">
<h4><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#id8">4.3.1 6.2.1 One Function of Several Random Variables</a></h4>
<ol class="arabic">
<li><p class="first">Example 6.9 Maximum and Minimum of n Random Variables</p>
<p>Apply this to uniform r.v.</p>
</li>
<li><p class="first">Example 6.11 Reliability of Redundant Systems</p>
<p>Reminder for exponential r.v.:</p>
<ol class="loweralpha simple">
<li>$f(x) = \lambda e^{-\lambda x}$</li>
<li>$F(x) = 1-e^{-\lambda x}$</li>
<li>$\mu = 1/\lambda$</li>
</ol>
<p>I may extend this example to find pdf and mean.</p>
</li>
</ol>
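<p>A quick simulation check of the max/min example, assuming n i.i.d. Uniform(0,1) variables (my choice of distribution, matching item 1). Since <span class="math">\(P[\max \le x] = x^n\)</span> and <span class="math">\(P[\min > x] = (1-x)^n\)</span>, the means are <span class="math">\(n/(n+1)\)</span> and <span class="math">\(1/(n+1)\)</span>:</p>

```python
# Example 6.9 applied to n i.i.d. Uniform(0,1) variables (an assumed
# setup, not the textbook's numbers).  E[max] = n/(n+1), E[min] = 1/(n+1).
import random

random.seed(1)
n, trials = 5, 100000
maxima = [max(random.random() for _ in range(n)) for _ in range(trials)]
minima = [min(random.random() for _ in range(n)) for _ in range(trials)]
mean_max = sum(maxima) / trials
mean_min = sum(minima) / trials
assert abs(mean_max - n / (n + 1)) < 0.01   # n/(n+1) = 5/6
assert abs(mean_min - 1 / (n + 1)) < 0.01   # 1/(n+1) = 1/6
```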
</div>
<div class="section" id="transformations-of-random-vectors">
<h4><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#id9">4.3.2 6.2.2 Transformations of Random Vectors</a></h4>
</div>
<div class="section" id="pdf-of-general-transformations">
<h4><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#id10">4.3.3 6.2.3 pdf of General Transformations</a></h4>
<p>We skip Section 6.2.3. However, a historical note about Student's t distribution:</p>
<p>"Student" was the pseudonym of William Sealy Gosset, a statistician working for Guinness in Ireland. He developed several statistical techniques for sampling beer to assure its quality. Guinness wouldn't let him publish under his real name because the methods were trade secrets.</p>
</div>
</div>
<div class="section" id="expected-values-of-vector-random-variables">
<h3><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/#id11">4.4 6.3 Expected values of vector random variables</a></h3>
<ol class="arabic simple">
<li>Section 6.3, page 316, extends the covariance to a matrix. Even with N variables, note that we're comparing only pairs of variables. If there were a complicated 3-variable dependency, which can happen (and did in a much earlier example), all the pairwise covariances could still be 0.</li>
<li>Note the sequence.<ol class="loweralpha">
<li>First, the correlation matrix has the expectations of the products.</li>
<li>Then the covariance matrix corrects for the means not being 0.</li>
<li>Finally the correlation coefficients (not shown here) correct for the variances not being 1.</li>
</ol>
</li>
</ol>
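<p>The point about pairwise covariances can be made concrete. In this constructed example (not from the text), X and Y are independent &plusmn;1 coin flips and Z = XY. Z is completely determined by (X, Y), yet every off-diagonal entry of the covariance matrix is 0:</p>

```python
# Three dependent variables with all pairwise covariances zero.
# X, Y are independent fair +-1 coin flips and Z = X*Y.
# (A constructed example, not one from the text.)
import itertools

# Enumerate the four equally likely outcomes (x, y, z) exactly.
outcomes = [(x, y, x * y) for x, y in itertools.product([-1, 1], repeat=2)]
n = len(outcomes)
means = [sum(o[i] for o in outcomes) / n for i in range(3)]
cov = [[sum((o[i] - means[i]) * (o[j] - means[j]) for o in outcomes) / n
        for j in range(3)] for i in range(3)]

# Every pairwise covariance is 0, yet Z is a function of (X, Y).
assert all(cov[i][j] == 0 for i in range(3) for j in range(3) if i != j)
assert all(cov[i][i] == 1 for i in range(3))  # each variable has variance 1
```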
</div>
</div></div>mathjaxhttps://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class25/Wed, 18 Apr 2018 04:00:00 GMT
- Engineering Probability Homework 10 due Mon 2018-04-23 2359 ESThttps://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/homework10/W Randolph Franklin (WRF), RPI<div><div class="section" id="how-to-submit">
<h2>How to submit</h2>
<p>Submit to LMS; see details in syllabus.</p>
<p>You may use any computer software, like Matlab or Mathematica.</p>
</div>
<div class="section" id="questions">
<h2>Questions</h2>
<p>All questions are from the text, starting on page 290.</p>
<p>Each part of a question is worth 5 points.</p>
<ol class="arabic simple">
<li>(10 pts) Problem 5.94 on page 298.</li>
<li>(5 pts) Problem 5.106 on page 298.</li>
<li>(25 pts) Problem 5.111 on page 299.</li>
<li>(10 pts) Problem 5.120 on page 300.</li>
</ol>
<p>Total: 50 points.</p>
</div></div>mathjaxhttps://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/homework10/Mon, 16 Apr 2018 04:00:00 GMT
- Engineering Probability Class 24 Mon 2018-04-16https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class24/W Randolph Franklin (WRF), RPI<div><div class="contents topic" id="table-of-contents">
<p class="topic-title first">Table of contents</p>
<ul class="auto-toc simple">
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class24/#material-from-text" id="id1">1 Material from text</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class24/#tutorial-on-probability-density-2-variables" id="id2">2 Tutorial on probability density - 2 variables</a></li>
<li><a class="reference internal" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class24/#chapter-6-vector-random-variables" id="id3">3 Chapter 6: Vector random variables</a></li>
</ul>
</div>
<!-- -->
<style> .red {color:red} </style>
<style> .blue {color:blue} </style><div class="section" id="material-from-text">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class24/#id1">1 Material from text</a></h2>
<ol class="arabic">
<li><p class="first">Example 5.47, page 282: Estimation of signal in noise</p>
<ol class="loweralpha">
<li><p class="first">This is our perennial example of signal and noise. However, here the signal is not just $\pm1$ but is normal. Our job is to find the <em>most likely</em> input signal for a given output.</p>
</li>
<li><p class="first">Important concept in the noisy channel example (with X and N both being
Gaussian): The most likely value of X given Y is
not Y but is somewhat smaller, depending on the relative sizes of
<span class="math">\(\sigma_X\)</span> and <span class="math">\(\sigma_N\)</span>. This is true in spite of <span class="math">\(\mu_N=0\)</span>. It
would be really useful for you to understand this intuitively. Here's
one way:</p>
<p>If you don't know Y, then the most likely value of X is 0. Knowing Y
gives you more information, which you combine with your initial info
(that X is <span class="math">\(N(0,\sigma_X)\)</span>) to get a new estimate for the most likely X.
The smaller the noise, the more valuable Y is. If the noise is very
small, then the most likely X is close to Y. If the noise is very
large (on average), then the most likely X is still close to 0.</p>
</li>
</ol>
</li>
</ol>
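<p>A simulation sketch of this shrinkage effect (the values of <span class="math">\(\sigma_X\)</span> and <span class="math">\(\sigma_N\)</span> below are made up for illustration). With Y = X + N, the best estimate of X given Y scales Y by <span class="math">\(\sigma_X^2/(\sigma_X^2+\sigma_N^2)\)</span>, which is less than 1:</p>

```python
# Shrinkage in the noisy-channel example: Y = X + N with X ~ N(0, sx^2)
# and N ~ N(0, sn^2) independent.  The sx, sn values are made up.
import random

random.seed(2)
sx, sn = 2.0, 1.0
shrink = sx ** 2 / (sx ** 2 + sn ** 2)   # 4/5 = 0.8 here

pairs = []
for _ in range(200000):
    x = random.gauss(0, sx)
    pairs.append((x, x + random.gauss(0, sn)))   # y = x + noise

# The best linear estimate of X from Y has slope Cov(X,Y)/Var(Y)
# (the means are 0), which should match the shrinkage factor.
cov_xy = sum(x * y for x, y in pairs) / len(pairs)
var_y = sum(y * y for _, y in pairs) / len(pairs)
assert abs(cov_xy / var_y - shrink) < 0.01
```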
</div>
<div class="section" id="tutorial-on-probability-density-2-variables">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class24/#id2">2 Tutorial on probability density - 2 variables</a></h2>
<p>In class 15, I tried to motivate the effect of changing one variable on probability density. Here's a try at motivating changing 2 variables.</p>
<ol class="arabic simple">
<li>We're throwing darts uniformly at a one foot square dartboard.</li>
<li>We observe 2 random variables, X, Y, where the dart hits (in Cartesian coordinates).</li>
<li>$$f_{X,Y}(x,y) = \begin{cases} 1& \text{if}\,\, 0\le x\le1 \cap 0\le y\le1\\ 0&\text{otherwise} \end{cases}$$</li>
<li>$$P[.5\le x\le .6 \cap .8\le y\le.9] = \int_{.5}^{.6}\int_{.8}^{.9} f_{XY}(x,y) dx \, dy = 0.01 $$</li>
<li>Transform to centimeters: $$\begin{bmatrix}V\\W\end{bmatrix} = \begin{pmatrix}30&0\\0&30\end{pmatrix} \begin{bmatrix}X\\Y\end{bmatrix}$$</li>
<li>$$f_{V,W}(v,w) = \begin{cases} 1/900& \text{if } 0\le v\le30 \cap 0\le w\le30\\ 0&\text{otherwise} \end{cases}$$</li>
<li>$$P[15\le v\le 18 \cap 24\le w\le27] = \int_{15}^{18}\int_{24}^{27} f_{VW}(v,w)\, dv\, dw = \frac{ (18-15)(27-24) }{900} = 0.01$$</li>
<li>See Section 5.8.3 on page 286.</li>
</ol>
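<p>A simulation check of the dartboard calculation above: the same physical region gives probability 0.01 whether we count in feet or in centimeters.</p>

```python
# Simulation of the dartboard probability, in feet and in centimeters.
import random

random.seed(3)
trials = 200000
hits_ft = hits_cm = 0
for _ in range(trials):
    x, y = random.random(), random.random()   # uniform on the unit square (feet)
    if 0.5 <= x <= 0.6 and 0.8 <= y <= 0.9:
        hits_ft += 1
    v, w = 30 * x, 30 * y                     # the same dart, in centimeters
    if 15 <= v <= 18 and 24 <= w <= 27:
        hits_cm += 1

# Both counts estimate the same probability, 0.01.
assert abs(hits_ft / trials - 0.01) < 0.003
assert abs(hits_cm / trials - 0.01) < 0.003
```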
</div>
<div class="section" id="chapter-6-vector-random-variables">
<h2><a class="toc-backref" href="https://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class24/#id3">3 Chapter 6: Vector random variables</a></h2>
<ol class="arabic simple">
<li>Skip the starred sections.</li>
<li>Examples:<ol class="loweralpha">
<li>arrivals in a multiport switch,</li>
<li>audio signal at different times.</li>
</ol>
</li>
<li>pmf, cdf, marginal pmf and cdf are obvious.</li>
<li>conditional pmf has a nice chaining rule.</li>
<li>For continuous random variables, the pdf, cdf, conditional pdf etc are all obvious.</li>
<li>Independence is obvious.</li>
<li>Work out example 6.5, page 306. The input ports are a distraction.
This problem reduces to a multinomial probability where N is itself a
random variable.</li>
</ol>
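<p>For item 7, a handy special case to keep in mind (Poisson splitting, a standard result, not Example 6.5 itself): if <span class="math">\(N \sim \text{Poisson}(\lambda)\)</span> packets arrive and each independently goes to output port k with probability <span class="math">\(p_k\)</span>, then the per-port counts are Poisson with means <span class="math">\(\lambda p_k\)</span>. The rate and port probabilities below are made up:</p>

```python
# Poisson splitting sketch: a multinomial where the number of trials N
# is itself Poisson.  The rate lam and the port probabilities are made up.
import math
import random

def poisson(lam):
    """Sample N ~ Poisson(lam) by the product-of-uniforms method."""
    limit = math.exp(-lam)
    n, prod = 0, random.random()
    while prod > limit:
        n += 1
        prod *= random.random()
    return n

random.seed(4)
lam, probs = 6.0, [0.5, 0.3, 0.2]   # arrival rate and port probabilities
trials = 100000
totals = [0, 0, 0]
for _ in range(trials):
    for _ in range(poisson(lam)):    # route each arriving packet to a port
        r, c = random.random(), 0.0
        for k, p in enumerate(probs):
            c += p
            if r < c or k == len(probs) - 1:
                totals[k] += 1
                break

# Each port's mean count should be lam * p_k.
for k, p in enumerate(probs):
    assert abs(totals[k] / trials - lam * p) < 0.05
```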
</div></div>mathjaxhttps://wrf.ecse.rpi.edu/Teaching/probability-s2018/posts/class24/Sun, 15 Apr 2018 04:00:00 GMT