Questions and Answers: Principles of Managerial Statistics
6.37. This exercise is associated with the proof of asymptotic normality of the LSE in Section 6.7.
6.35. Consider Example 6.12.
(i) Show that in this case we have $c_F(t) = \log(1 + e^t) - \log 2$.
(ii) Show that for any $x \in \mathbb{R}$, the function $d_x(t) = xt - c_F(t)$ is strictly concave.
(iii) Show that …
(iv) Show that for $A = (-\infty, 1/2 - \epsilon) \cup (1/2 + \epsilon, \infty)$ with $0 < \epsilon < 1/2$, … $> 0$.
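For part (i), note that the stated $c_F$ is exactly the cumulant generating function of the Bernoulli$(1/2)$ distribution (assumed here to be the setting of Example 6.12):
$$c_F(t) = \log E\,e^{tX} = \log\left(\tfrac{1}{2} + \tfrac{1}{2}e^t\right) = \log(1 + e^t) - \log 2,$$
and for part (ii), $d_x''(t) = -c_F''(t) = -e^t/(1 + e^t)^2 < 0$ for all $t$, which gives strict concavity regardless of $x$.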
6.33. Let $x_n$, $n \ge 1$, be a sequence of real numbers such that $\liminf x_n = a$ and $\limsup x_n = b$, where $a < b$. …
6.32. This exercise is related to Example 6.10.
(i) Show that the mapping $g(x) = \sup_{0\le t\le 1} x(t)$ is a continuous mapping from $C$ to $\mathbb{R}$.
(ii) Show that $g(X_n) = \sup_{0\le t\le 1} X_{n,t} = n^{-1/2}\max_{1\le i\le n} S_i$ and $g(W) = \sup_{0\le t\le 1} W_t$.
(iii) Show that $P(\sup_{0\le t\le 1} W_t \le \lambda) = 0$ for $\lambda < 0$.
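A simulation makes (ii) and the limit concrete. By the reflection principle, $\sup_{0\le t\le 1} W_t$ has the same distribution as $|Z|$ with $Z \sim N(0,1)$, so $P(\sup_{0\le t\le 1} W_t > \lambda) = 2P(Z > \lambda)$ for $\lambda \ge 0$. A minimal sketch, using $N(0,1)$ steps purely as a convenient choice (not part of the exercise):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
n, reps = 1_000, 5_000

# g(X_n) = n^{-1/2} max_{1<=i<=n} S_i for random walks with N(0,1) steps.
steps = rng.standard_normal((reps, n))
g_vals = steps.cumsum(axis=1).max(axis=1) / sqrt(n)

# Reflection principle: P(sup_{0<=t<=1} W_t > lam) = 2 P(Z > lam), lam >= 0.
for lam in (0.5, 1.0, 2.0):
    empirical = (g_vals > lam).mean()
    limit = erfc(lam / sqrt(2))  # = 2 * P(Z > lam)
    print(f"lam={lam}: empirical {empirical:.4f} vs limit {limit:.4f}")
```

The empirical tail probabilities approach the limit from below, since the discrete maximum slightly undershoots the supremum of the Brownian path.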
6.31. Show that the distance $\rho$ defined by (6.63) is indeed a distance, or metric, by verifying requirements 1–4 below (6.63).
6.30. Show that if $X_1, X_2, \dots$ are i.i.d. with mean 0 and variance 1, then $\xi_n = S_n/\sqrt{2n\log\log n} \xrightarrow{P} 0$ as $n \to \infty$, where $S_n = \sum_{i=1}^n X_i$. However, $\xi_n$ does not converge to zero almost surely. This gives another example that convergence in probability does not necessarily imply almost sure convergence.
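Why convergence in probability holds is visible from the variance: $\mathrm{var}(\xi_n) = 1/(2\log\log n) \to 0$, so Chebyshev's inequality gives $\xi_n \xrightarrow{P} 0$, while the LIL forces $\limsup_n \xi_n = 1$ a.s. along each path. A minimal numerical check of the variance calculation, using Rademacher ($\pm 1$) steps as an illustrative choice:

```python
import numpy as np
from math import log, sqrt

rng = np.random.default_rng(0)
reps = 200_000

# xi_n = S_n / sqrt(2 n log log n), with S_n simulated efficiently
# via S_n = 2*Binomial(n, 1/2) - n for +/-1 steps.
for n in (10**3, 10**5, 10**7):
    s_n = 2.0 * rng.binomial(n, 0.5, size=reps) - n
    xi = s_n / sqrt(2 * n * log(log(n)))
    # Empirical sd should match 1/sqrt(2 log log n), which -> 0 very
    # slowly; this is the "in probability" part, while the LIL still
    # gives limsup_n xi_n = 1 almost surely.
    print(f"n=1e{int(log(n)/log(10))}: sd(xi_n) ~ {xi.std():.3f}, "
          f"theory {1/sqrt(2*log(log(n))):.3f}")
```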
6.29. We see that, in the i.i.d. case, the same condition (i.e., a finite second moment) is necessary and sufficient for both the CLT and the LIL. In other words, a sequence of i.i.d. random variables obeys the CLT if and only if it obeys the LIL. The equivalence can fail when the random variables are independent but not identically distributed. For example, Wittmann (1985) constructed the following example. Let $n_k$ be an infinite sequence of integers such that $n_{k+1} > 2n_k$, $k \ge 1$. Let $X_1, X_2, \dots$ be …
(i) Show that $E(X_i) = 0$ and $\sigma_i^2 = E(X_i^2) = 1$; therefore, $a_n = s_n^2 = \sum_{i=1}^n \sigma_i^2 = n$. It follows that (6.40) is satisfied.
(ii) Show that Lindeberg's condition (6.42) does not hold for $\epsilon = 1$.
(iii) Show by Theorem 6.12 that $X_i$, $i \ge 1$, does not obey the CLT. (Hint: You may use the result of Example 1.6.) Wittmann (1985) further showed that the sequence obeys the LIL.
6.28. Let $Y_1, Y_2, \dots$ be independent such that $Y_i \sim \chi^2_i$. Define $X_i = Y_i - i$. Does the sequence $X_i$, $i \ge 1$, obey the LIL (6.51), where $a_n = \sum_{i=1}^n \mathrm{var}(Y_i)$? [Hint: You may use the facts that if $Y \sim \chi^2_r$, then $E(Y) = r$, $\mathrm{var}(Y) = 2r$, and $E\{(Y - r)^4\} = 12r(r + 4)$.]
6.27. Suppose that $X_i$, $i \ge 1$, are independent random variables such that $P(X_i = -i^\alpha) = P(X_i = i^\alpha) = 0.5\,i^{-\beta}$ and $P(X_i = 0) = 1 - i^{-\beta}$, where $\alpha, \beta > 0$. According to Theorem 6.16, find the condition on $\alpha$ and $\beta$ under which $X_i$, $i \ge 1$, obeys the LIL (6.51).
6.25. Let $X_1, X_2, \dots$ be a sequence of independent random variables with mean 0. Let $p_i = P(|X_i| > b_i)$, where $b_i$ satisfies (6.50), and $a_n = \sum_{i=1}^n \mathrm{var}\{X_i 1_{(|X_i|\le b_i)}\}$. Suppose that … [Hint: Use Theorem 6.15 and the Borel–Cantelli lemma (Lemma 2.5).]
6.24. Let $X$ be a random variable. Show that for any $\mu \in \mathbb{R}$, the following two conditions are equivalent:
(i) $nP(|X - \mu| > n) \to 0$ and $E\{(X - \mu)1_{(|X-\mu|\le n)}\} \to 0$;
(ii) …
(iv) Give an example of a random variable $X$ such that $nP(|X| > n) \to 0$ and $E\{X 1_{(|X|\le n)}\} = 0$ for any $n \ge 1$, and $E(|X|) = \infty$.
6.22. This exercise is related to Example 6.5 (continued) at the end of Section 6.4.
(i) Show that for the sequence $Y_i$, $i \ge 1$, (6.40) fails provided that $s^2 = \sum_{i=1}^\infty p_i(1 - p_i) < \infty$. Also …
(ii) Show that for $X_i = Y_i - p_i$, the three series (6.30)–(6.32) converge for $c = 1$.
6.20 (The delta method). The CLT is often used in conjunction with the delta method introduced in Example 4.4. Here, we continue with Exercise 6.7. Let $\hat m$ and $\hat p$ be the MoM estimators of $m$ and $p$, …
(ii) Show that the MoM estimators $\hat m$ and $\hat p$ are jointly asymptotically normal in the sense that
$$\sqrt{n}\begin{pmatrix} \hat m - m \\ \hat p - p \end{pmatrix} \xrightarrow{d} N(0, V),$$
where $V$ is another covariance matrix. Find $V$.
(iii) An alternative estimator of $m$ was found (i.e., $\tilde m$). Show that $\tilde m$ is not asymptotically normal even though it is consistent; that is, $\sqrt{n}(\tilde m - m)$ does not converge in distribution to a normal random variable.
6.19. Let the random variables $Y_1, Y_2, \dots$ be independent and distributed as Bernoulli$(i^{-1})$, $i \ge 1$. Show that
$$\frac{\sum_{i=1}^n Y_i - \log n}{\sqrt{\log n}} \xrightarrow{d} N(0, 1).$$
(Hint: You may recall that $\sum_{i=1}^n i^{-1} - \log n$ converges as $n \to \infty$.)
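A quick simulation sketch of 6.19 (note that the finite-$n$ mean of the statistic is shifted by roughly Euler's constant over $\sqrt{\log n}$, which vanishes only slowly):

```python
import numpy as np
from math import log, sqrt

rng = np.random.default_rng(42)
n, reps = 10_000, 2_000

# Sum of independent Bernoulli(1/i), i = 1..n, for each replicate.
sums = np.zeros(reps)
for i in range(1, n + 1):
    sums += rng.random(reps) < 1.0 / i

stat = (sums - log(n)) / sqrt(log(n))
# Mean ~ 0.5772/sqrt(log n) ~ 0.19 here (Euler's constant) and
# variance ~ 0.88; both drift to 0 and 1 as n grows, but slowly.
print("mean:", stat.mean().round(3), " var:", stat.var().round(3))
```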
6.18. Let $Y_1, Y_2, \dots$ be independent such that $Y_i \sim \mathrm{Poisson}(a^i)$, $i \ge 1$, where $a > 1$. Let $X_i = Y_i - a^i$, $i \ge 1$, and $s_n^2 = \sum_{i=1}^n \mathrm{var}(Y_i) = \sum_{i=1}^n a^i = (a - 1)^{-1}(a^{n+1} - 1)$.
(i) Show that Liapounov's condition (6.35) is not satisfied with $\delta = 2$.
(ii) Show that as $n \to \infty$,
$$\left(\frac{a - 1}{a^{n+1} - 1}\right)^{1/2} \sum_{i=1}^n (Y_i - a^i) \xrightarrow{d} N(0, 1).$$
(Hint: Use the result of the …)
6.17. Suppose that $S_n$ is distributed as Poisson$(\lambda_n)$, where $\lambda_n \to \infty$ as $n \to \infty$. Use two different methods to show that $S_n$ obeys the CLT; that is, $\xi_n = \lambda_n^{-1/2}(S_n - \lambda_n) \xrightarrow{d} N(0, 1)$. …
(ii) Let $Y_{ni}$, $1 \le i \le n$, be independent and distributed as Poisson$(n^{-1}\lambda_n)$, $n \ge 1$. Show that $\sum_{i=1}^n Y_{ni}$ has the same distribution as $S_n$. Furthermore, show that $X_{ni} = \lambda_n^{-1/2}(Y_{ni} - n^{-1}\lambda_n)$ …
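A simulation sketch of 6.17's claim ($\lambda = 400$ is an arbitrary illustrative value):

```python
import numpy as np

rng = np.random.default_rng(7)
lam, reps = 400.0, 100_000

# xi = (S - lam)/sqrt(lam) for S ~ Poisson(lam); should be ~ N(0,1).
s = rng.poisson(lam, size=reps)
xi = (s - lam) / np.sqrt(lam)
print("mean:", xi.mean().round(3), " var:", xi.var().round(3),
      " P(xi <= 1.96):", (xi <= 1.96).mean().round(3))  # ~0.975

# Part (ii)'s decomposition: a sum of n independent Poisson(lam/n)
# variables is again Poisson(lam), so the two constructions agree.
n = 50
s2 = rng.poisson(lam / n, size=(reps, n)).sum(axis=1)
print("var of decomposed sum:", s2.var().round(1), "(theory:", lam, ")")
```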
6.16 (Sample median). Let $X_1, \dots, X_n$ be i.i.d. observations with the distribution $P(X_1 \le x) = F(x - \theta)$, where $F$ is a cdf such that $F(0) = 1/2$; hence, $\theta$ is the median of the distribution of $X_1$. …
(i) For any $x \in \mathbb{R}$, let $S_{n,x}$ be the number of $X_i$'s exceeding $x/\sqrt{n}$. Show that $X_{(m)} \le x/\sqrt{n}$ if and only if $S_{n,x} \le m - 1$.
(ii) Show that $\sqrt{n}\{X_{(m)} - \theta\} \xrightarrow{d} N(0, \sigma^2)$, where $\sigma^2 = \dots$
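The expression for $\sigma^2$ is cut off in the source; the classical form, when $F$ has a density $f$ with $f(0) > 0$, is $\sigma^2 = 1/\{4f^2(0)\}$, which for standard normal data gives $\sigma^2 = \pi/2$. A simulation sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, m = 1.5, 250
n = 2 * m - 1            # odd sample size, so the median is X_(m)
reps = 10_000

x = theta + rng.standard_normal((reps, n))  # F = N(0,1), f(0) = 1/sqrt(2*pi)
med = np.median(x, axis=1)
z = np.sqrt(n) * (med - theta)

# Classical result: sqrt(n)(X_(m) - theta) -> N(0, 1/(4 f(0)^2)) = N(0, pi/2).
print("empirical var:", z.var().round(3), " theory:", round(np.pi / 2, 3))
```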
6.14. This exercise is associated with the proof of Theorem 6.14. Parts (i)–(iii) are regarding the necessity part, where $X_{ni} = (X_i - \mu)/\sqrt{n}$; part (iv) is regarding the sufficiency …
(i) Show that for any $\epsilon > 0$,
$$\max_{1\le i\le n} P(|X_{ni}| > \epsilon) = P(|X_1 - \mu| > \epsilon\sqrt{n}) \to 0.$$
(Note: You may not use Chebyshev's inequality to show this. Why?)
(ii) Show that (6.43) with $\epsilon = 1$ reduces to $\sqrt{n}\,E\{(X_1 - \mu)1_{(|X_1-\mu|\le\sqrt{n})}\} \to 0$.
(iii) Show that
$$E\{(X_1 - \mu)^2 1_{(|X_1-\mu|\le\sqrt{n})}\} - \left[E\{(X_1 - \mu)1_{(|X_1-\mu|\le\sqrt{n})}\}\right]^2 \to \sigma^2;$$
hence, $E\{(X_1 - \mu)^2\} = \lim_{n\to\infty} E\{(X_1 - \mu)^2 1_{(|X_1-\mu|\le\sqrt{n})}\} = \sigma^2$.
(iv) Show that for any $\epsilon > 0$, $\sum_{i=1}^n P(|X_{ni}| > \epsilon) \le nP(|X_1| > \lambda_n) \to 0$.
6.10. A sequence of real numbers $x_i \in [0, 1]$, $i \ge 1$, is said to be uniformly distributed in Weyl's sense on $[0, 1]$ if for any Riemann-integrable function $f$ on $[0, 1]$ we have
$$\lim_{n\to\infty} \frac{f(x_1) + \cdots + f(x_n)}{n} = \int_0^1 f(x)\,dx.$$
Let $X_i$, $i \ge 1$, be independent and Uniform$[0, 1]$ distributed. Show that the sequence $X_i$, $i \ge 1$, is uniformly distributed in Weyl's sense on $[0, 1]$ almost surely. (Hint: Use §1.5.2.37. Note …)
6.8. Suppose that for each $n$, $X_{ni}$, $1 \le i \le n$, are independent with the common cdf $F_n$, and $F_n \xrightarrow{w} F$, where $F$ is a cdf and the weak convergence ($\xrightarrow{w}$) is defined in Chapter 1 above Example … Define
$$\hat F_n(x) = \frac{\#\{1 \le i \le n : X_{ni} \le x\}}{n}.$$
Show that $\hat F_n(x) \xrightarrow{P} F(x)$ for every $x$ at which $F$ is continuous.
6.7 (Binomial method of moments). The method of moments (MoM) is widely used to obtain consistent estimators of population parameters. Consider the following special case, in which the observations $X_1, \dots, X_n$ are i.i.d. Binomial$(m, p)$ with both $m$ and $p$ unknown. … Note that the left sides of these equations depend on $m$ and $p$. By solving the equations, one obtains the solutions, say, $\hat m$ and $\hat p$.
(i) Solve the MoM equations to find the solutions $\hat m$ and $\hat p$.
(iii) Is $\hat m$ necessarily an integer? Since $m$ needs to be an integer, a modified estimator of $m$ is $\tilde m$, defined as the nearest integer to $\hat m$. Show that $\tilde m$ is also a consistent estimator of $m$ in …
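The book's MoM equations themselves are lost in the excerpt above, but a standard version equates the sample mean and variance to $mp$ and $mp(1-p)$, giving $\hat p = 1 - s^2/\bar x$ and $\hat m = \bar x/\hat p$. The sketch below implements that hypothetical variant, not necessarily the text's exact equations:

```python
import numpy as np

rng = np.random.default_rng(11)
m_true, p_true, n = 10, 0.3, 5_000

x = rng.binomial(m_true, p_true, size=n)
xbar, s2 = x.mean(), x.var()

# MoM via mean/variance: xbar ~ m p, s2 ~ m p (1 - p).
# (One standard solution; can be unstable when s2 is close to xbar.)
p_hat = 1.0 - s2 / xbar
m_hat = xbar / p_hat
m_tilde = round(m_hat)      # nearest-integer modification from part (iii)

print(f"m_hat={m_hat:.3f}, p_hat={p_hat:.3f}, m_tilde={m_tilde}")
```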
6.6. Suppose that $Y_1, Y_2, \dots$ are independent random variables. In the following cases, find conditions on $a_n$ such that $a_n^{-1}\sum_{i=1}^n\{Y_i - E(Y_i)\} \xrightarrow{P} 0$. Give at least one specific example in each case.
(i) $Y_i \sim \mathrm{DE}(\mu_i, \sigma_i)$, $i \ge 1$, where $\mathrm{DE}(\mu, \sigma)$ is the Double Exponential distribution with pdf $f(x\,|\,\mu, \sigma) = (2\sigma)^{-1}e^{-|x-\mu|/\sigma}$, $-\infty < x < \infty$, $\sigma > 0$.
(ii) $Y_i \sim \mathrm{Uniform}[\mu_i - d_i, \mu_i + d_i]$, $i \ge 1$, where $\mathrm{Uniform}[a, b]$ represents the Uniform distribution over $[a, b]$ and $d_i > 0$.
6.5. This exercise is regarding Example 6.1 (continued) in Section 6.3.
(i) Show that the function $\psi(u)$ is maximized at $u = \sqrt{c}$, and the maximum is $(1 + \sqrt{c})^{-2}$.
(ii) Show that the right side of (6.29) is minimized when $b_i = 1 + a_i$, which is greater than $\sqrt{a_i}$, and the minimum is $4\{\pi(1 + a_i)\}^{-1}$.
6.4. This exercise is regarding Example 6.2 and its continuation in Section 6.3.
(i) If $a_n = n^p$, show that (6.25) holds if and only if $p > 1/2$.
(ii) If $a_n = (\sum_{i=1}^n \lambda_i)^\gamma$, show that (6.25) holds if and only if $\gamma > 1/2$.
(iii) Suppose that the assumption that $a \le \lambda_i \le b$ for some $a, b > 0$ is not made; instead, the only assumption is that $\lambda_i > 0$ for all $i$. Does the result of (i) necessarily hold?
5.49. This exercise is related to the proof of Theorem 5.1.
(i) Verify the inequalities in (5.107).
(ii) Show that the $q$th moment of $|Y|^2$ is finite.
5.48. Suppose that $X_1, X_2, \dots$ are independent Exponential(1) random variables. According to Example 5.16, we have $E(X_i^k) = k!$, $k = 1, 2, \dots$.
(i) Given $k \ge 2$, define $Y_i = (X_i, X_i^k)$. Show that …
(iii) Show that for any $0 < \dots$ …, where $z_\alpha$ is the $\alpha$-critical value of $N(0, 1)$; that is, $P(Z \le z_\alpha) = 1 - \alpha$ for $Z \sim N(0, 1)$.
(iv) Show that the inequality in (iii) is sharp in the sense that for any $\eta > (1 - \alpha)^2$, there is $k$ …
5.47. (i) Show that for any random variable $X$ and $p > 0$, we have
$$E\{X^p 1_{(X\ge 0)}\} = \int_0^\infty p x^{p-1} P(X \ge x)\,dx.$$
[Hint: Note that $X^p 1_{(X\ge 0)} = \int_0^X p x^{p-1} 1_{(X\ge 0)}\,dx = \int_0^\infty p x^{p-1} 1_{(X\ge x)}\,dx$. Use …]
(ii) Show that if $X_1, \dots, X_n$ are independent and symmetrically distributed about zero, then for any $p > 0$,
$$E\Big\{\Big(\max_{1\le m\le n} S_m\Big)^p 1_{(\max_{1\le m\le n} S_m \ge 0)}\Big\} \le 2E\{S_n^p 1_{(S_n \ge 0)}\},$$
where $S_m = \sum_{i=1}^m X_i$.
5.46. Prove the following extension of (5.80). Let $X_i, \mathcal{F}_i$, $1 \le i \le n$, be a sequence of martingale differences. Then, for any $t > 0$,
$$P\left(\sum_{i=1}^n X_i > t,\ E(X_i^k \mid \mathcal{F}_{i-1}) \le \frac{k!}{2}B^{k-2}a_i,\ k \ge 2,\ 1 \le i \le n\right) \le \dots$$
[Hint: Define $Y_i = X_i 1_{(E(X_i^k\mid\mathcal{F}_{i-1}) \le \frac{1}{2}k!B^{k-2}a_i,\ k \ge 2)}$. Show that $Y_i, \mathcal{F}_i$, $1 \le i \le n$, is also a sequence of martingale differences and satisfies (5.79) (with $X_i$ replaced by $Y_i$) …]
5.45. …
(i) Show that $X_i, \mathcal{F}_i$, $1 \le i \le n$, is also a sequence of martingale differences. Here, the summation $\sum_{j<\dots}$ …
(iii) Show that $\sum_{i=1}^m Y_i = \sum_{i=1}^m X_i$, $1 \le m \le n$, on the event $\{\dots$ …
(iv) Derive the following inequality: for any $\lambda, A > 0$ and $p > 1$, there is a constant $c$ depending only on $p$ such that
$$P\left(\max_{1\le m\le n}\Big|\sum_{i=1}^m Y_i\Big| \ge \lambda,\ \dots\right) \le \dots$$
5.44. Suppose that $X_1, \dots, X_n$ are independent and distributed as $N(0, 1)$.
(i) Determine the constants $B$ and $a_i$ in (5.79), where $\mathcal{F}_i = \sigma(X_1, \dots, X_i)$. You may use the fact that if $X \sim N(0, 1)$, then …
(ii) Determine the right side of inequality (5.84) with $t = n\epsilon$ for any $\epsilon > 0$.
(iii) Can you improve the inequality obtained in (ii) by using the fact that $\sum_{i=1}^n X_i \sim N(0, n)$?
5.43. This exercise is related to Slepian's inequality (5.95), including some of its corollaries.
(i) Show that Slepian's inequality implies (5.93) and (5.94). (Hint: The right sides of these inequalities are the probabilities on the left sides when all of the correlations $\rho_{ij}$ are zero.)
(ii) … for any $\lambda \in [0, 1]$.
(iii) Show that for any fixed $\rho_{kl}$, $(k, l) \ne (i, j)$, the set $R_{ij} = \{\rho_{ij} : \Sigma = (\rho_{kl})_{1\le k,l\le n} \text{ is positive definite}\}$ is an interval. (Hint: It suffices to show that if $\Sigma$ is positive definite when $\rho_{ij} = \rho'_{ij}$ and when $\rho_{ij} = \rho''_{ij}$, then it remains so for any $\rho'_{ij} \le \rho_{ij} \le \rho''_{ij}$.)
(iv) Show that for any fixed $\rho_{kl}$, $(k, l) \ne (i, j)$, and $a = (a_1, \dots, a_n) \in \mathbb{R}^n$, the probability $P[\cap_{i=1}^n\{X_i \le a_i\}]$ is strictly increasing in $\rho_{ij} \in R_{ij}$.
(v) Suppose that the correlations $\rho_{ij}$ depend on a single parameter $\theta$; that is, $\rho_{ij} = \rho_{ij}(\theta)$, $\theta \in \Theta$, where $\rho_{ij}(\cdot)$ are nondecreasing functions. Show that the probability in (iv) is also nondecreasing in $\theta$.
5.42. Prove inequality (5.90).
5.40. Continue with the martingale extension of Bernstein's inequality.
(i) Prove (5.83).
(ii) Derive (5.85), the original inequality of Bernstein (1937), from (5.84).
5.39. This exercise is related to the derivation of (5.80).
(i) Let $\xi_n = \sum_{k=2}^n (\lambda^k/k!)X_i^k$, $n = 2, 3, \dots$, and $\eta = \sum_{k=2}^\infty (\lambda^k/k!)|X_i|^k$. Then we have $\xi_n \to \xi = \sum_{k=2}^\infty (\lambda^k/k!)X_i^k$ and $|\xi_n| \le \eta$. …
(ii) Based on the result of (i) and using the dominated convergence theorem (Theorem 2.16), show that $\lim_{n\to\infty} E(\xi_n \mid \mathcal{F}_{i-1}) = E(\xi \mid \mathcal{F}_{i-1})$.
(iii) Show that the function $g(\lambda)$ is minimized over $\lambda \in (0, B^{-1})$ at (5.82), and the minimal value is
$$g(\lambda) = -\frac{A}{2B^2}\left(\sqrt{1 + \frac{2Bt}{A}} - 1\right)^2.$$
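The book defines $g$ around (5.81); assuming the standard form in this kind of derivation, $g(\lambda) = -\lambda t + A\lambda^2/\{2(1 - B\lambda)\}$, the closed-form minimizer $\lambda^* = B^{-1}\{1 - (1 + 2Bt/A)^{-1/2}\}$ and the minimum reconstructed above can be checked numerically:

```python
import numpy as np

A, B, t = 3.0, 0.5, 2.0

# Assumed standard form of g in the Bernstein-type derivation.
lam = np.linspace(1e-6, 1.0 / B - 1e-6, 2_000_000)
g = -lam * t + A * lam**2 / (2.0 * (1.0 - B * lam))

lam_star = (1.0 - (1.0 + 2.0 * B * t / A) ** -0.5) / B   # candidate (5.82)
g_min = -(A / (2.0 * B**2)) * ((1.0 + 2.0 * B * t / A) ** 0.5 - 1.0) ** 2

print("grid min :", g.min())
print("formula  :", g_min)
print("argmin   :", lam[g.argmin()], "vs", lam_star)
```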
5.35. Let $\xi \sim N(0, 1)$ and let $F(\cdot)$ be any cdf that is strictly increasing on $(-\infty, \infty)$. Show that $E\{\xi F(\xi)\} > 0$. Can you relax the normality assumption?
5.36. Suppose that $X_1, \dots, X_n$ are i.i.d. …
5.34. Prove Carlson's inequality: if $f \ge 0$ on $[0, \infty)$, then
$$\int_0^\infty f(x)\,dx \le \sqrt{\pi}\left\{\int_0^\infty f^2(x)\,dx\right\}^{1/4}\left\{\int_0^\infty x^2 f^2(x)\,dx\right\}^{1/4}.$$
[Hint: For any $a, b > 0$, write $\int_0^\infty f(x)\,dx = \int_0^\infty \frac{1}{\sqrt{a + bx^2}}\cdot\sqrt{a + bx^2}\,f(x)\,dx$ and …]
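The hint's computation, carried one step further (this is the standard argument, supplied here because the source cuts off): Cauchy–Schwarz together with $\int_0^\infty (a + bx^2)^{-1}\,dx = \pi/(2\sqrt{ab})$ gives
$$\left\{\int_0^\infty f(x)\,dx\right\}^2 \le \frac{\pi}{2\sqrt{ab}}\left(a\int_0^\infty f^2 + b\int_0^\infty x^2 f^2\right),$$
and minimizing the right side over $a, b > 0$ (it depends only on $s = \sqrt{a/b}$, with minimum at $s = \{\int x^2 f^2/\int f^2\}^{1/2}$) yields $\pi\{\int f^2\}^{1/2}\{\int x^2 f^2\}^{1/2}$, which is Carlson's inequality after taking square roots.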
5.30. This exercise is associated with Example 5.13.
(i) Show that (5.67) and (5.68) are unbiased in the sense that the expectations of the left sides equal the right sides if $\mu$ and $\sigma$ are the true parameters. …
(iii) Show that the function $\mu_c(\cdot)$ is continuously differentiable. (Hint: You may use some well-known results in calculus on differentiability of implicit functions.)
(iv) Given the proved result that $M_c(\cdot)$ is strictly increasing, show that for any $d$ within the range of $M_c$, there is a unique $\sigma$ such that $M_c(\sigma) = d$. (Hint: All you have to show is that $M_c$ is …)
5.28. Recall that $I_n$ and $1_n$ denote, respectively, the $n$-dimensional identity matrix and the $n$-dimensional vector of 1's, and $J_n = 1_n 1_n'$. You may use the following result (see Appendix A.1): $|aI_n + bJ_n| = a^{n-1}(a + nb)$. Consider the model $y_i = \mu + \alpha + \epsilon_i$, $1 \le i \le n$, where $\mu$ is an unknown mean, and $\alpha$ and the $\epsilon_i$'s are independent random variables such that $E(\alpha) = 0$, $\mathrm{var}(\alpha) = \sigma^2$, $E(\epsilon_i) = 0$, $\mathrm{var}(\epsilon_i) = \tau^2$, $\mathrm{cov}(\epsilon_i, \epsilon_j) = 0$ for $i \ne j$, and $\mathrm{cov}(\alpha, \epsilon_i) = 0$ for any $i$.
(i) Show that …
(ii) For the matrices $A$ and $B$ in (i), verify inequality (5.53).
5.26. Show that $A \ge B$ implies $|A| \ge |B|$. (Hint: Without loss of generality, let $B > 0$; then $A \ge B$ iff $B^{-1/2}AB^{-1/2} \ge I$.)
5.27. Use the facts that, for any symmetric matrix $A$, we have …
5.25. (Jiang et al. 2001) Let $b > 0$ and $a, c_i$, $1 \le i \le n$, be real numbers. Define the matrix
$$A = \begin{pmatrix} 1 & a & 0 & \cdots & 0 \\ a & d & c_1 & \cdots & c_n \\ 0 & c_1 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & c_n & 0 & \cdots & 1 \end{pmatrix} \dots$$
… $Z$. Show that the positive eigenvalues of $S = A'(\delta I + AA')^{-2}A$ are $\lambda_i(\delta + \lambda_i)^{-2}$, $1 \le i \le m$, where $\lambda_i$, $1 \le i \le m$, are the positive eigenvalues of $AA'$. [Hint: The positive eigenvalues …]
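The transposes were lost in extraction, so $S = A'(\delta I + AA')^{-2}A$ above is a reconstructed, self-consistent reading; a quick numerical check of that reading:

```python
import numpy as np

rng = np.random.default_rng(5)
m, delta = 3, 0.8
A = rng.standard_normal((6, m))          # AA' is 6x6 with m positive eigenvalues

inv = np.linalg.inv(delta * np.eye(6) + A @ A.T)
S = A.T @ inv @ inv @ A                  # S = A'(delta I + AA')^{-2} A

lam = np.sort(np.linalg.eigvalsh(A @ A.T))[-m:]   # positive eigenvalues of AA'
lhs = np.sort(np.linalg.eigvalsh(S))              # eigenvalues of S
rhs = np.sort(lam * (delta + lam) ** -2)
print(np.allclose(lhs, rhs))                      # True under this reading
```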
5.22. This exercise is regarding Example 5.8 (continued) in Section 5.3.2. For parts (i) and (ii) you may use the following matrix identity from Appendix A.1.2:
$$(D \pm B'A^{-1}B)^{-1} = D^{-1} \mp D^{-1}B'(A \pm BD^{-1}B')^{-1}BD^{-1}.$$
(ii) Furthermore, let $Q = \delta I + Z'PZ$. Show by continuing with (i) that $B(\gamma) = (X'X)^{-1}X'\{I + ZQ^{-1}Z'(I - P)\}(I - ZH^{-1}Z')$.
(iii) Continuing with (ii), show that $B(\gamma) = (X'X)^{-1}X'(I - ZQ^{-1}Z'P)$.
(iv) Show that $\|(X'X)^{-1}X'\| = \lambda_{\min}^{-1/2}(X'X)$.
(v) Write $A = P\dots$ …
5.19. This exercise is regarding Lemma 5.2.
(i) Show that by letting $c_i = 0$ if $a_i = 0$ and $c_i = a_i^{-1}$ if $a_i > 0$, (5.33) is satisfied for all $x_i > 0$, $1 \le i \le s$.
(ii) Prove a special case of Lemma 5.2; that is, (5.34) holds when $A_1, \dots, A_s$ are pairwise commutative (see Exercise 5.17).
… Using an argument similar to that in the proof of Lemma 5.1, show that the best estimator $\hat\theta$ corresponds to the estimating equation $W^*(\theta)u(Y, \theta) = 0$, where $W^*(\theta) = E(\partial u/\partial\theta)\{\mathrm{Var}(u)\}^{-1}$, … where $M^*(\theta) = W^*(\theta)u(Y, \theta)$. Here, we assume that $W^*(\theta)$ does not depend on parameters other than $\theta$ (why?). Otherwise, a procedure similar to the EBLUE is necessary (see Example 5.8).