Question:
2.15 More on variances. The definition of the variance of a random variable can be used to show a number of additional results.
a. Show that $\operatorname{Var}(x) = E(x^2) - [E(x)]^2$.
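A minimal sketch of the argument, expanding the square inside the defining expectation and using the linearity of the expectation operator:
$$\operatorname{Var}(x) = E\left[(x-\mu_x)^2\right] = E\left[x^2 - 2\mu_x x + \mu_x^2\right] = E(x^2) - 2\mu_x E(x) + \mu_x^2 = E(x^2) - [E(x)]^2,$$
where the final step uses $E(x) = \mu_x$.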
b. Use Markov's inequality (Problem 2.14d) to show that if $x$ can take on only non-negative values,
$$P\left(|x - \mu_x| \geq k\right) \leq \frac{\sigma_x^2}{k^2}.$$
This result shows that there are limits on how often a random variable can be far from its expected value. If $k = h\sigma_x$, this result also says that
$$P\left(|x - \mu_x| \geq h\sigma_x\right) \leq \frac{1}{h^2}.$$
Therefore, for example, the probability that a random variable is more than two standard deviations from its expected value is never greater than 0.25. This theoretical result is called Chebyshev's inequality.
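A sketch of how Markov's inequality delivers this bound: the random variable $(x-\mu_x)^2$ is non-negative, so applying Markov's inequality to it with threshold $k^2$ gives
$$P\left(|x-\mu_x| \geq k\right) = P\left[(x-\mu_x)^2 \geq k^2\right] \leq \frac{E\left[(x-\mu_x)^2\right]}{k^2} = \frac{\sigma_x^2}{k^2}.$$
Setting $k = h\sigma_x$ then yields the $1/h^2$ bound directly.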
c. Equation 2.197 showed that if two (or more) random variables are independent, the variance of their sum is equal to the sum of their variances. Use this result to show that the sum of $n$ independent random variables, each of which has expected value $\mu$ and variance $\sigma^2$, has expected value $n\mu$ and variance $n\sigma^2$. Show also that the average of these $n$ random variables (which is itself a random variable) has expected value $\mu$ and variance $\sigma^2/n$. This is sometimes called the law of large numbers: the variance of an average shrinks as more independent variables are included.
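A sketch under the stated independence assumption, writing $S = x_1 + \cdots + x_n$ and $\bar{x} = S/n$:
$$E(S) = \sum_{i=1}^{n} E(x_i) = n\mu, \qquad \operatorname{Var}(S) = \sum_{i=1}^{n} \operatorname{Var}(x_i) = n\sigma^2,$$
$$E(\bar{x}) = \frac{1}{n}E(S) = \mu, \qquad \operatorname{Var}(\bar{x}) = \frac{1}{n^2}\operatorname{Var}(S) = \frac{\sigma^2}{n},$$
using the fact that $\operatorname{Var}(cS) = c^2\operatorname{Var}(S)$ for any constant $c$.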
d. Use the result from part (c) to show that if $x_1$ and $x_2$ are independent random variables, each with the same expected value and variance, the variance of a weighted average of the two, $X = kx_1 + (1-k)x_2$ with $0 \leq k \leq 1$, is minimised when $k = 0.5$. How much is the variance of $X$ reduced by setting $k$ properly, relative to other possible values of $k$?
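One way to see the minimisation, using part (c)'s variance rule and independence:
$$\operatorname{Var}(X) = k^2\sigma^2 + (1-k)^2\sigma^2.$$
Differentiating with respect to $k$ and setting the derivative to zero gives $2k\sigma^2 - 2(1-k)\sigma^2 = 0$, so $k = 0.5$ and $\operatorname{Var}(X) = \sigma^2/2$. By contrast, $k = 0$ or $k = 1$ puts all weight on one variable and leaves the variance at $\sigma^2$, twice the minimised value.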
e. How would the result from part (d) change if the two variables had unequal variances?
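A sketch for the unequal-variance case, with $\operatorname{Var}(x_1) = \sigma_1^2$ and $\operatorname{Var}(x_2) = \sigma_2^2$:
$$\operatorname{Var}(X) = k^2\sigma_1^2 + (1-k)^2\sigma_2^2,$$
and the first-order condition $2k\sigma_1^2 - 2(1-k)\sigma_2^2 = 0$ gives
$$k^* = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2},$$
so the optimal weighting is no longer $0.5$: the less variable of the two random variables receives the larger weight.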
Source: Christopher M. Snyder, Walter Nicholson, Robert B. Stewart, Microeconomic Theory: Basic Principles and Extensions, 1st Edition. ISBN: 9781473729483.