Math 350 - Homework 3 - Solutions

1. The bus will arrive at a time that is uniformly distributed between 8:00 and 8:30 A.M. If we arrive at 8 A.M., what is the probability that we will wait between 5 and 15 minutes?

The probability that we will have to wait between 5 and 15 minutes, having arrived at the bus stop at 8 A.M., is the same as the probability that the bus arrives between 8:05 A.M. and 8:15 A.M. This probability is $p = (15 - 5)/30 = 1/3$.

2. Let $X$ be a binomial random variable with parameters $(n, p)$. Explain why

$$P\left\{\frac{X - np}{\sqrt{np(1-p)}} \le x\right\} \approx \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\,dt.$$

A binomial random variable $X$ is a sum of i.i.d. random variables, $X = X_1 + \cdots + X_n$, where each $X_i$ takes values in $\{0, 1\}$ with probabilities $P(X_i = 1) = p$ and $P(X_i = 0) = 1 - p$. The mean and variance of the $X_i$ are

$$\mu = E[X_i] = 0 \cdot P(X_i = 0) + 1 \cdot P(X_i = 1) = p,$$

$$\sigma^2 = \operatorname{Var}(X_i) = (0 - p)^2 P(X_i = 0) + (1 - p)^2 P(X_i = 1) = p(1 - p).$$

Thus the expression inside the curly brackets is equal to

$$\frac{X_1 + \cdots + X_n - n\mu}{\sigma\sqrt{n}}.$$

Therefore, the claim is an immediate consequence of the central limit theorem applied to the i.i.d. sequence $X_1, X_2, \ldots$

3. For a Poisson process with rate $\lambda$, find $P\{N(s) = k \mid N(t) = n\}$ when $s < t$.

Since a Poisson process $N(t)$ is non-decreasing, we know that $P\{N(s) = k \mid N(t) = n\} = 0$ if $k > n$. Therefore, we will assume that $k \le n$. From the definition of conditional probability, it follows immediately that for any two events $A, B$ of positive probability, $P(A \mid B)P(B) = P(B \mid A)P(A)$. Therefore, using the independent and stationary increments of the Poisson process in the second step,

$$P\{N(s) = k \mid N(t) = n\} = P\{N(t) = n \mid N(s) = k\}\,\frac{P\{N(s) = k\}}{P\{N(t) = n\}} = P\{N(t - s) = n - k\}\,\frac{P\{N(s) = k\}}{P\{N(t) = n\}}$$

$$= \frac{e^{-\lambda(t-s)}(\lambda(t-s))^{n-k}}{(n-k)!} \cdot \frac{e^{-\lambda s}(\lambda s)^{k}}{k!} \cdot \frac{n!}{e^{-\lambda t}(\lambda t)^{n}} = \binom{n}{k}\frac{s^{k}(t-s)^{n-k}}{t^{n}}.$$

Therefore,

$$P\{N(s) = k \mid N(t) = n\} = \begin{cases} \dbinom{n}{k}\dfrac{s^{k}(t-s)^{n-k}}{t^{n}} & \text{if } k \le n, \\[2mm] 0 & \text{if } k > n. \end{cases}$$
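The closed form above can be sanity-checked numerically. Below is a short Python sketch (the parameter values $\lambda = 2$, $s = 3$, $t = 5$, $n = 7$ are arbitrary choices for illustration) comparing the ratio of Poisson probabilities with the binomial pmf $\binom{n}{k}(s/t)^{k}(1 - s/t)^{n-k}$:

```python
from math import comb, exp, factorial

def pois_pmf(mu, k):
    # pmf of a Poisson random variable with mean mu
    return exp(-mu) * mu**k / factorial(k)

lam, s, t, n = 2.0, 3.0, 5.0, 7  # arbitrary illustrative values, s < t
for k in range(n + 1):
    # P{N(s)=k, N(t)=n} = P{N(s)=k} P{N(t-s)=n-k} by independent increments
    ratio = pois_pmf(lam * s, k) * pois_pmf(lam * (t - s), n - k) / pois_pmf(lam * t, n)
    binom = comb(n, k) * (s / t) ** k * (1 - s / t) ** (n - k)
    assert abs(ratio - binom) < 1e-12
```

Note that the rate $\lambda$ cancels, as it must: the conditional distribution of $N(s)$ given $N(t) = n$ depends only on the ratio $s/t$.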
This result makes sense: the conditional distribution of $N(s)$ given $N(t) = n$ is binomial with parameters $n$ and $s/t$, and by the binomial formula the sum of these conditional probabilities over $k = 0, \ldots, n$ is 1.

4. An urn contains four white and six black balls. A random sample of size 4 is chosen. Let $X$ denote the number of white balls in the sample. An additional ball is now selected from the remaining six balls in the urn. Let $Y$ equal 1 if this ball is white and 0 if it is black. Find (a) $E[Y \mid X = 2]$; (b) $E[X \mid Y = 1]$; (c) $\operatorname{Var}(Y \mid X = 0)$; (d) $\operatorname{Var}(X \mid Y = 1)$.

First observe that the following conditional probabilities are easy to obtain by a simple counting argument:

$$P(Y = 1 \mid X = j) = \frac{4 - j}{6}, \qquad P(Y = 0 \mid X = j) = \frac{2 + j}{6}, \qquad j = 0, 1, 2, 3, 4.$$

We also need the probabilities $P(Y = 1, X = j)$, which I obtain by explicitly counting the elementary outcomes of each event. An elementary outcome can be represented by a string of 0s and 1s of length 5 (1 for white, 0 for black). A moment's thought should convince you that the probability of each string depends only on the number of 0s and 1s, and not on their order.

(a) $P\{Y = 1, X = 0\} = P\{(0,0,0,0,1)\} = \dfrac{6 \cdot 5 \cdot 4 \cdot 3 \cdot 4}{10 \cdot 9 \cdot 8 \cdot 7 \cdot 6} = \dfrac{1}{21}.$

(b) $P\{Y = 1, X = 1\} = P\{(1,0,0,0,1), (0,1,0,0,1), (0,0,1,0,1), (0,0,0,1,1)\} = 4 \cdot \dfrac{4 \cdot 6 \cdot 5 \cdot 4 \cdot 3}{10 \cdot 9 \cdot 8 \cdot 7 \cdot 6} = \dfrac{4}{21}.$

(c) $P\{Y = 1, X = 2\} = P\{(1,1,0,0,1), (1,0,1,0,1), (1,0,0,1,1), (0,1,1,0,1), (0,1,0,1,1), (0,0,1,1,1)\} = 6 \cdot \dfrac{4 \cdot 3 \cdot 6 \cdot 5 \cdot 2}{10 \cdot 9 \cdot 8 \cdot 7 \cdot 6} = \dfrac{1}{7}.$

(d) $P\{Y = 1, X = 3\} = P\{(1,1,1,0,1), (1,1,0,1,1), (1,0,1,1,1), (0,1,1,1,1)\} = 4 \cdot \dfrac{4 \cdot 3 \cdot 2 \cdot 6 \cdot 1}{10 \cdot 9 \cdot 8 \cdot 7 \cdot 6} = \dfrac{2}{105}.$

(e) $P\{Y = 1, X = 4\} = P(\emptyset) = 0.$

From the above it follows, for example, that

$$P\{Y = 1\} = \sum_{j=0}^{4} P\{Y = 1, X = j\} = \frac{1}{21} + \frac{4}{21} + \frac{1}{7} + \frac{2}{105} = \frac{42}{105} = \frac{2}{5}.$$

It is interesting to observe that we could have obtained this last probability much more easily by the following argument: the probability of each elementary outcome $(a_1, \ldots, a_5)$ depends only on the number of 0s and 1s, and not on their order. Therefore, the probability that the last entry is 1 is the same as the probability that the first entry is 1. But this probability is clearly $4/10 = 2/5$.

We now calculate the asked-for values:

(a) $E[Y \mid X = 2] = 0 \cdot P(Y = 0 \mid X = 2) + 1 \cdot P(Y = 1 \mid X = 2) = P(Y = 1 \mid X = 2) = \dfrac{1}{3}.$

(b) $E[X \mid Y = 1] = \displaystyle\sum_{j=0}^{4} j\,P\{X = j \mid Y = 1\} = \sum_{j=0}^{4} j\,\frac{P\{Y = 1, X = j\}}{P\{Y = 1\}} = \frac{\frac{4}{21} + 2 \cdot \frac{1}{7} + 3 \cdot \frac{2}{105}}{\frac{2}{5}} = \frac{4}{3}.$

(c) We know that $\operatorname{Var}(Y \mid X = 0) = E[Y^2 \mid X = 0] - E[Y \mid X = 0]^2$. Since $Y$ only takes the values 0 and 1, we also have $Y^2 = Y$. Now, $E[Y \mid X = 0] = 0 \cdot P\{Y = 0 \mid X = 0\} + 1 \cdot P\{Y = 1 \mid X = 0\} = 4/6 = 2/3$. Therefore,

$$\operatorname{Var}(Y \mid X = 0) = \frac{2}{3} - \left(\frac{2}{3}\right)^2 = \frac{2}{9}.$$

(d) We can calculate $E[X^2 \mid Y = 1]$ as we did $E[X \mid Y = 1]$ in part (b). The result is

$$E[X^2 \mid Y = 1] = \frac{\frac{4}{21} + 2^2 \cdot \frac{1}{7} + 3^2 \cdot \frac{2}{105}}{\frac{2}{5}} = \frac{7}{3}.$$

Therefore,

$$\operatorname{Var}(X \mid Y = 1) = E[X^2 \mid Y = 1] - E[X \mid Y = 1]^2 = \frac{7}{3} - \left(\frac{4}{3}\right)^2 = \frac{5}{9}.$$

We can gain some confidence that the above results are correct by running a simulation of the process. The following MATLAB code computes the conditional expectation $E[X \mid Y = 1]$ by simulating 500000 trials of the experiment; the conditional expectation is approximated by averaging the number of white balls over only the trials whose 5th ball is white.

m=500000;            % Number of trials
p=4/10;              % Probability that the first ball is white
O=zeros(m,5);        % Each row of O is a string of five numbers from {0,1},
                     % where 1 stands for white and 0 for black.
R=rand(m,5);         % Matrix R contains all the random numbers we will need below.
O(:,1)=(R(:,1)<=p);  % Fill the first column of O with 1 with probability
                     % p=4/10 and 0 with probability 6/10.
W=O(:,1);            % W counts the number of white balls drawn so far in each trial.
for j=2:5            % Draw the second, third, ..., fifth balls.
  p = (4*ones(m,1)-W)/(11-j);  % Probability of picking a white ball, taking into
                               % account the number of white balls already drawn.
  O(:,j) = (R(:,j)<=p);        % Draw the new balls for position j.
  W = sum(O(:,1:j),2);         % Update the count of white balls drawn so far.
end
a=find(O(:,5)==1);   % Indices of the trials that have a white 5th ball.
C=O(a,1:4);          % First four balls of the trials whose 5th ball is white.
N=sum(C,2);          % Number of white balls among the first 4 in each such trial.
[n,k]=size(N);       % n is the number of trials whose 5th ball is white.
f=sum(N)/n;          % Average number of white balls among the first 4 drawn, given
                     % that the 5th is white: an approximation of E[X|Y=1].

One run of this program gives the typical value f = 1.3366, which is to be compared with the theoretical solution above (part (b)): 4/3 = 1.333...

5. If $X$ and $Y$ are independent and identically distributed exponential random variables, show that the conditional distribution of $X$, given that $X + Y = t$, is the uniform distribution on $(0, t)$.

We first need to discuss some preliminaries. Let $X, Y$ be two random variables with joint probability density $f(x, y)$. Suppose we define new random variables $U, V$ in terms of $X, Y$ by $U = T_1(X, Y)$ and $V = T_2(X, Y)$, or $(U, V) = T(X, Y)$ for short. For example, let $(U, V) = (X, X + Y)$. This transformation maps the positive quadrant of $\mathbb{R}^2$ (denoted by $R$ in the figure) to the wedge-shaped region $T(R) = \{(u, v) : 0 \le u \le v\}$.
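The claim that $T(x, y) = (x, x + y)$ maps the positive quadrant into the wedge $\{0 \le u \le v\}$ is easy to confirm by sampling. The following Python sketch (rate $\lambda = 1$ chosen purely for illustration) draws exponential pairs and checks their images:

```python
import random

rng = random.Random(0)
for _ in range(10000):
    x, y = rng.expovariate(1.0), rng.expovariate(1.0)  # a point of the quadrant
    u, v = x, x + y                                    # its image (U, V) = T(X, Y)
    assert 0 <= u <= v                                 # lies in the wedge T(R)
```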
A preliminary question we wish to solve is: what is the joint probability density of the new random variables $U, V$? Let us call this density $g(u, v)$. The fundamental observation is that if $A$ denotes an event defined in terms of $X, Y$ (that is, a subset of the region $R$), then

$$P((X, Y) \in A) = P((U, V) \in T(A)),$$

where $T(A)$ is the image of $A$ under the transformation $T$ (an event defined in terms of $U, V$, hence a subset of $T(R)$). It is not hard to show that this equality implies, for any function $F(U, V)$, an equality of expectations:

$$E[F(U, V)] = E[F(T(X, Y))],$$

where the first expectation is calculated using the density $g(u, v)$ and the second using the density $f(x, y)$. This means that

$$\iint_{T(R)} F(u, v)\,g(u, v)\,du\,dv = \iint_{R} F(T(x, y))\,f(x, y)\,dx\,dy.$$

From the general formula for change of coordinates in multiple integrals, the integral on the right satisfies

$$\iint_{R} F(T(x, y))\,f(x, y)\,dx\,dy = \iint_{T(R)} F(u, v)\,f(T^{-1}(u, v))\,|D(u, v)|\,du\,dv,$$

where $|D(u, v)|$ denotes the absolute value of the Jacobian determinant of the inverse transformation $T^{-1}$, namely

$$D(u, v) = \det\begin{pmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[2mm] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{pmatrix}.$$

Since the equality of integrals holds for all functions $F$, we conclude that

$$g(u, v) = f(T^{-1}(u, v))\,|D(u, v)|.$$

We can now return to the specific situation of the problem. Here the transformation is $T(x, y) = (x, x + y)$, which has inverse $T^{-1}(u, v) = (u, v - u)$. Notice that $v = z$ in the notation of the exercise, where $Z = X + Y$. The Jacobian matrix of $T^{-1}$ is $\begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}$, whose determinant is 1. So

$$g(u, v) = f(u, v - u).$$
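The formula $g(u, v) = f(u, v - u)$ can be checked against a simulation. Taking $\lambda = 1$ for illustration, integrating $g(u, v) = e^{-v}$ over $u \in [0, v]$ gives the marginal density $v\,e^{-v}$ for $V = X + Y$ (a Gamma(2, 1) density), so $P(V \le v_0) = 1 - e^{-v_0}(1 + v_0)$. A Python sketch comparing this with the empirical frequency:

```python
import random
from math import exp

rng = random.Random(1)
n, v0 = 200_000, 2.0
# V = X + Y with X, Y i.i.d. Exp(1); integrating g(u, v) = e^{-v} over
# 0 <= u <= v gives the marginal density v e^{-v}, whose CDF at v0 is
# 1 - e^{-v0} (1 + v0).
hits = sum(rng.expovariate(1.0) + rng.expovariate(1.0) <= v0 for _ in range(n))
empirical = hits / n
theoretical = 1 - exp(-v0) * (1 + v0)
print(empirical, theoretical)  # the two values should agree to roughly two decimals
```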
The function $f(x, y)$ is the joint density of the two independent, identically distributed exponential random variables $X, Y$, so $f(x, y)$ is the product of the densities of $X$ and $Y$:

$$f(x, y) = \lambda e^{-\lambda x} \cdot \lambda e^{-\lambda y} = \begin{cases} \lambda^2 e^{-\lambda(x + y)} & \text{if } x, y \ge 0, \\ 0 & \text{if } x < 0 \text{ or } y < 0. \end{cases}$$

Therefore,

$$g(u, v) = \begin{cases} \lambda^2 e^{-\lambda v} & \text{if } 0 \le u \le v, \\ 0 & \text{if } u > v. \end{cases}$$

This shows that the conditional probability density of $X$ given $Z = t$ is

$$g(x \mid z = t) = \frac{\lambda^2 e^{-\lambda t}}{\int_0^t \lambda^2 e^{-\lambda t}\,dx} = \begin{cases} \dfrac{1}{t} & \text{if } 0 \le x \le t, \\ 0 & \text{if } x > t. \end{cases}$$

But this is exactly the probability density function of a uniform random variable taking values in the interval $[0, t]$.
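The uniformity of $X$ given $X + Y = t$ can also be seen directly in a simulation. The sketch below (with $\lambda = 1$ and $t = 3$, both arbitrary choices) keeps pairs whose sum falls in a narrow band around $t$ and checks that the retained $X$ values look uniform on $(0, t)$:

```python
import random

rng = random.Random(2)
t, eps = 3.0, 0.05
kept = []
while len(kept) < 10_000:
    x, y = rng.expovariate(1.0), rng.expovariate(1.0)
    if abs(x + y - t) < eps:       # condition on X + Y being close to t
        kept.append(x)
mean = sum(kept) / len(kept)
below = sum(v < t / 2 for v in kept) / len(kept)
print(mean, below)  # for Uniform(0, t): mean close to t/2 = 1.5, below close to 0.5
```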
