Questions and Answers of Principles Of Managerial Statistics
7.13 This exercise is regarding Example 7.6 in Section 7.4. (i) Show that the functional $g(x) = \int_0^1 x^2(t)\,dt$, $x \in D$, is continuous with respect to $\|\cdot\|$. (ii) Derive (7.19).
7.12 Show that Chung’s result (given at the beginning of Section 7.4) implies Smirnov’s LIL (Theorem 7.5).
7.11 Consider a one-sided Kolmogorov–Smirnov test for the null hypothesis (7.9), where $F_0$ is a discrete distribution with the following jumps:

x        1      2      3      4      5      6
F_0(x)   0.033  0.600  0.833  0.933  0.961  …
7.10 Show that the following functions $g$ are continuous on $(D, \|\cdot\|)$: (i) $g(x) = \sup_{0 \le t \le 1} x(t)$; (ii) $g(x) = \sup_{0 \le t \le 1} |x(t)|$.
7.9 Show that the Brownian bridge $U(t)$ defined in Section 7.3 satisfies $E\{U(t)\} = 0$ and that $\mathrm{cov}\{U(s), U(t)\} = s \wedge t - st$ for all $0 \le s, t \le 1$.
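A worked step for the covariance (a sketch, assuming the representation $U(t) = W(t) - tW(1)$ with $W$ a standard Brownian motion, which is the usual construction behind Section 7.3): since $\mathrm{cov}\{W(s), W(t)\} = s \wedge t$,

```latex
\operatorname{cov}\{U(s),U(t)\}
  = \operatorname{cov}\{W(s)-sW(1),\,W(t)-tW(1)\}
  = s\wedge t - ts - st + st
  = s\wedge t - st .
```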
7.8 Use Theorem 2.14 and the CLT to show that for any distinct $t_1, \ldots, t_k \in [0, 1]$, the joint distribution of $U_n(t_1), \ldots, U_n(t_k)$ is asymptotically multivariate normal with mean 0 and covariance matrix whose $(j, l)$ entry is $t_j \wedge t_l - t_j t_l$.
7.7 Show that the quantile functional of Example 7.4 is continuous provided that $F^{-1}$ is continuous in a neighborhood of $t$. Give a counterexample to show that if $F^{-1}$ is not continuous in a …
7.6 Show that the statistical functional in Example 7.3 is continuous.
7.5 This exercise is regarding Example 7.2. (i) Verify expression (7.8). (ii) Show that $\|G_n^{-1} - I\| = \|G_n - I\|$.
7.4 Prove the Glivenko–Cantelli theorem (Theorem 7.2) by an $\epsilon$-$\delta$ argument. (Hint: See Example 1.6; consider the points $j/k$, $0 \le j \le k$, $k = 1, 2, \ldots$.)
7.3 Show that for any $k \ge 1$ and $x_1 < \cdots < x_k$, the two random vectors in (7.5) have identical joint distribution.
7.2 Show that Weyl’s sequence $X_i$, $i = 1, 2, \ldots$, defined in Section 7.1 [below (7.3)] has identical uniform distribution in that $P(X_i \le x) = x$, $0 \le x \le 1$, for all $i$, where $P$ denotes …
7.1 Use a computer to draw two realizations of $X_1, \ldots, X_{10}$ from the standard normal distribution and then plot the empirical d.f. (7.1) based on each realization of $X_1, \ldots, X_{10}$. Compare the …
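A minimal sketch for 7.1, assuming (7.1) is the usual empirical d.f. $F_n(x) = n^{-1}\#\{i : X_i \le x\}$; the seed, padding, and plotting choices are illustrative, not part of the exercise:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

rng = np.random.default_rng(1)

for r in range(2):                        # two realizations of X_1, ..., X_10
    xs = np.sort(rng.standard_normal(10))
    # Empirical d.f.: a step function jumping by 1/10 at each order statistic
    grid_x = np.concatenate(([xs[0] - 1.0], xs, [xs[-1] + 1.0]))
    grid_y = np.concatenate(([0.0], np.arange(1, 11) / 10, [1.0]))
    plt.step(grid_x, grid_y, where="post", label=f"realization {r + 1}")

x = np.linspace(-3, 3, 400)
plt.plot(x, norm.cdf(x), "k--", label="N(0,1) cdf")  # target for comparison
plt.legend()
plt.show()
```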
6.38 Suppose that $\epsilon_i$, $i \ge 1$, are i.i.d. such that $E(\epsilon_i) = 0$ and $E(\epsilon_i^2) < \infty$. Show that $\sum_{i=1}^\infty c_i \epsilon_i$ converges a.s. for any sequence of constants $c_i$, $i \ge 1$, such that $\sum_{i=1}^\infty c_i^2 < \infty$.
6.37 This exercise is associated with the proof of asymptotic normality of the LSE in Section 6.7. (i) Show that the left side of (6.86) is equal to (6.87) and that $\sum_{i=1}^n c_{ni}^2 = 1$. (ii) Show that …
6.36 This exercise is associated with the proof of consistency of the LSE in Section 6.7, where $X_{ni}$ is defined below (6.82). (i) Verify expression (6.82). (ii) Show that (6.83) implies $\sum_{i=1}^n E(X_{ni}^2)$ …
6.35 Consider Example 6.12. (i) Show that in this case we have $c_F(t) = \log(1 + e^t) - \log 2$. (ii) Show that for any $x \in \mathbb{R}$, the function $d_x(t) = xt - c_F(t)$ is strictly concave. (iii) Show that …
6.34 This exercise is related to Example 6.11. (i) Show that the functional $g(x) = x(1)$ defines a continuous mapping from $C$ to $\mathbb{R}$. (ii) Show that with probability 1, the set of limit points of $g(\eta_n)$ is …
6.33 Let $x_n$, $n \ge 1$, be a sequence of real numbers such that $\liminf x_n = a$, $\limsup x_n = b$, where $a$ …
6.32 This exercise is related to Example 6.10. (i) Show that the mapping $g(x) = \sup_{0 \le t \le 1} x(t)$ is a continuous mapping from $C$ to $\mathbb{R}$. (ii) Show that $g(X_n) = \sup_{0 \le t \le 1} X_{n,t} = n^{-1/2}\max_{1 \le i \le n}$ …
6.31 Show that the distance ρ defined by (6.63) is, indeed, a distance or metric by verifying requirements 1–4 below (6.63).
6.30 Show that if $X_1, X_2, \ldots$ are i.i.d. with mean 0 and variance 1, then $\xi_n = S_n/\sqrt{2n \log\log n} \overset{P}{\longrightarrow} 0$ as $n \to \infty$, where $S_n = \sum_{i=1}^n X_i$. However, $\xi_n$ does not converge to zero almost surely.
6.29 We see that, in the i.i.d. case, the same condition (i.e., a finite second moment) is necessary and sufficient for both the CLT and the LIL. In other words, a sequence of i.i.d. random variables obeys …
6.28 Let $Y_1, Y_2, \ldots$ be independent such that $Y_i \sim \chi_i^2$. Define $X_i = Y_i - i$. Does the sequence $X_i$, $i \ge 1$, obey the LIL (6.51), where $a_n = \sum_{i=1}^n \mathrm{var}(Y_i)$? [Hint: You may use the facts that …
6.27 Suppose that $X_i$, $i \ge 1$, are independent random variables such that $P(X_i = -i^\alpha) = P(X_i = i^\alpha) = 0.5\,i^{-\beta}$ and $P(X_i = 0) = 1 - i^{-\beta}$, where $\alpha, \beta > 0$. According to Theorem 6.16, find the …
6.26 Show that if $X_1, X_2, \ldots$ are independent such that $X_i \sim N(0, \sigma_i^2)$, where $a \le \sigma_i^2 \le b$ and $a$ and $b$ are positive constants, then $X_i$, $i \ge 1$, obeys the LIL (6.51).
6.25 Let $X_1, X_2, \ldots$ be a sequence of independent random variables with mean 0. Let $p_i = P(|X_i| > b_i)$, where $b_i$ satisfies (6.50), and $a_n = \sum_{i=1}^n \mathrm{var}\{X_i 1_{(|X_i| \le b_i)}\}$. Suppose that $\sum_{i=1}^\infty$ …
6.24 Let $X$ be a random variable. Show that for any $\mu \in \mathbb{R}$, the following two conditions (i) and (ii) are equivalent: (i) $nP(|X - \mu| > n) \to 0$ and $E\{(X - \mu)1_{(|X - \mu| \le n)}\} \to 0$; (ii) $nP(|X| >$ …
6.23 This exercise is related to Example 6.9. (i) Show that there is $c > 0$ such that $\sigma_i^2 = E(X_i^2) \le c$ for all $i$; hence, $a_n \propto n$. (ii) Show that the right side of (6.60) goes to zero as …
6.22 This exercise is related to Example 6.5 (continued) at the end of Section 6.4. (i) Show that for the sequence $Y_i$, $i \ge 1$, (6.40) fails provided that $s^2 = \sum_{i=1}^\infty p_i(1 - p_i) < \infty$. Also show …
6.21 Suppose that for each $n$, $X_{ni}$, $1 \le i \le i_n$, are independent such that $P(X_{ni} = 0) = 1 - p_{ni}$, $P(X_{ni} = -a_{ni}) = P(X_{ni} = a_{ni}) = p_{ni}/2$, where $a_{ni} > 0$ and $0 < p_{ni} < 1$. Suppose that $\max_{1 \le i \le i_n}$ …
6.20 (The delta method). The CLT is often used in conjunction with the delta method introduced in Example 4.4. Here, we continue with Exercise 6.7. Let $\hat m$ and $\hat p$ be the MoM estimators of $m$ and $p$, …
6.19 Let the random variables $Y_1, Y_2, \ldots$ be independent and distributed as Bernoulli($i^{-1}$), $i \ge 1$. Show that $\left(\sum_{i=1}^n Y_i - \log n\right)/\sqrt{\log n} \overset{d}{\longrightarrow} N(0, 1)$. (Hint: You may recall that $\sum_{i=1}^n i^{-1}$ …
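A simulation sketch for 6.19 (our choices: $n = 20{,}000$, 500 replications, and centering at the harmonic number $H_n = \sum_{i \le n} i^{-1}$, which differs from $\log n$ by Euler's constant and so gives the same limit):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 20_000, 500
i = np.arange(1, n + 1)
h_n = (1.0 / i).sum()                    # harmonic number ~ log n + 0.577
y = rng.random((reps, n)) < 1.0 / i      # Y_i ~ Bernoulli(1/i), independent
xi = (y.sum(axis=1) - h_n) / np.sqrt(np.log(n))
# Convergence is at log-n speed, so expect only rough agreement with N(0,1)
print("mean ~", round(xi.mean(), 2), " var ~", round(xi.var(), 2))
```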
6.18 Let $Y_1, Y_2, \ldots$ be independent such that $Y_i \sim$ Poisson($a^i$), $i \ge 1$, where $a > 1$. Let $X_i = Y_i - a^i$, $i \ge 1$, and $s_n^2 = \sum_{i=1}^n \mathrm{var}(Y_i) = \sum_{i=1}^n a^i = (a - 1)^{-1}(a^{n+1} - a)$. (i) Show that …
6.17 Suppose that $S_n$ is distributed as Poisson($\lambda_n$), where $\lambda_n \to \infty$ as $n \to \infty$. Use two different methods to show that $S_n$ obeys the CLT; that is, $\xi_n = \lambda_n^{-1/2}(S_n - \lambda_n) \overset{d}{\longrightarrow} N(0, 1)$. (i) Show …
6.16 (Sample median). Let $X_1, \ldots, X_n$ be i.i.d. observations with the distribution $P(X_1 \le x) = F(x - \theta)$, where $F$ is a cdf such that $F(0) = 1/2$; hence, $\theta$ is the median of the distribution of …
6.15 Show that if $X_1, \ldots, X_n$ are independent Cauchy(0, 1), the sample mean $\bar X = n^{-1}(X_1 + \cdots + X_n)$ is also Cauchy(0, 1). Therefore, the CLT does not hold; that is, $\sqrt{n}\,\bar X$ does not converge to …
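A quick illustration for 6.15 (a sketch; the sample sizes, 500 replications, and the use of the interquartile range are our choices): if the CLT applied, the spread of $\bar X$ would shrink like $n^{-1/2}$; for Cauchy data it stays put.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 1_000, 100_000):
    means = np.array([rng.standard_cauchy(n).mean() for _ in range(500)])
    q1, q3 = np.percentile(means, [25, 75])
    # The IQR of Cauchy(0,1) is 2; the IQR of the sample means stays near it
    print(f"n = {n:>6}: IQR of sample means ~ {q3 - q1:.2f}")
```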
6.14 This exercise is associated with the proof of Theorem 6.14. Parts (i)–(iii) are regarding the necessity part, where $X_{ni} = (X_i - \mu)/\sqrt{n}$; whereas part (iv) is regarding the sufficiency part, …
6.13 Show that the Liapounov condition implies the Lindeberg condition; that is, if (6.35) holds for some $\delta > 0$, then (6.34) holds for every $\epsilon > 0$.
6.12 Let $Y_1, Y_2, \ldots$ be independent with $Y_i \sim$ Bernoulli($p_i$), $i \ge 1$. Show that $\sum_{i=1}^\infty (Y_i - p_i)$ converges a.s. if and only if $\sum_{i=1}^\infty p_i(1 - p_i) < \infty$.
6.11 Suppose that $X_1, X_2, \ldots$ is a sequence of independent random variables with finite expectation. Show that if $\sum_{i=1}^\infty \frac{1}{i} E\{|X_i - E(X_i)|\} < \infty$, then the SLLN holds; that is, $\frac{1}{n}\sum_{i=1}^n \{X_i - E(X_i)\} \to 0$ almost surely.
6.10 A sequence of real numbers $x_i \in [0, 1]$, $i \ge 1$, is said to be uniformly distributed in Weyl’s sense on [0, 1] if for any Riemann integrable function $f$ on [0, 1] we have $\lim_{n \to \infty} \frac{f(x_1) + \cdots + f(x_n)}{n} = \int_0^1 f(x)\,dx$.
6.9 Give an example of a sequence of independent random variables $X_1, X_2, \ldots$ such that $\sum_{i=1}^\infty \mathrm{var}(X_i)/i^2 = \infty$ and the SLLN is not satisfied.
6.8 Suppose that for each $n$, $X_{ni}$, $1 \le i \le n$, are independent with the common cdf $F_n$, and $F_n \overset{w}{\longrightarrow} F$, where $F$ is a cdf and the weak convergence ($\overset{w}{\longrightarrow}$) is defined in Chapter 1 above Example …
6.7 (Binomial method of moments). The method of moments (MoM) is widely used to obtain consistent estimators for population parameters. Consider the following special case, in which the …
6.6 Suppose that $Y_1, Y_2, \ldots$ are independent random variables. In the following cases, find the conditions for $a_n$ such that $\frac{1}{a_n}\sum_{i=1}^n \{Y_i - E(Y_i)\} \overset{P}{\longrightarrow} 0$. Give at least one specific example in …
6.5 This exercise is regarding Example 6.1 (continued) in Section 6.3. (i) Show that the function $\psi(u)$ is maximized at $u = \sqrt{c}$, and the maximum is $(1 + \sqrt{c})^{-2}$. (ii) Show that the right side of …
6.4 This exercise is regarding Example 6.2 and its continuation in Section 6.3. (i) If $a_n = n^p$, show that (6.25) holds if and only if $p > 1/2$. (ii) If $a_n = (\sum_{i=1}^n \lambda_i)^\gamma$, show that (6.25) holds if …
6.3 Use Theorem 6.1 to derive the classical result (6.1).
6.2 Show that (6.10) and (6.11) together are equivalent to (6.13).
6.1 Show that in Example 6.2, (6.6)–(6.8) are satisfied with $a_n = (\sum_{i=1}^n \lambda_i)^\gamma$ and $\gamma > 1/2$.
5.52 Continue with the previous exercise. (i) In part (c), “It is easy to see that the summand in (5.114) is zero unless $0 \le g_1, g_2 < d/2$ and $g > 0$.” (ii) In part (c), “On the other hand, if …
5.51 This exercise and the next are related to the proof of Lemma 5.7 in Section 5.7. Recall that the proof is divided into four parts, (a)–(d). You are asked to explain, or derive when necessary, the …
5.50 The proof of Theorem 5.1 for REMLE is very similar to that for MLE. Complete the proof.
5.49 This exercise is related to the proof of Theorem 5.1. (i) Verify the inequalities in (5.109). (ii) Show that the $q$th moment of $|Y|^2$ is finite.
5.48 Suppose that $X_1, X_2, \ldots$ are independent Exponential(1) random variables. According to Example 5.16, we have $E(X_i^k) = k!$, $k = 1, 2, \ldots$. (i) Given $k \ge 2$, define $Y_i = (X_i, X_i^k)$. Show that …
5.47 (i) Show that for any random variable $X$ and $p > 0$, we have $E\{X^p 1_{(X \ge 0)}\} = \int_0^\infty p x^{p-1} P(X \ge x)\,dx$. [Hint: Note that $X^p 1_{(X \ge 0)} = \int_0^X p x^{p-1} 1_{(X \ge 0)}\,dx = \int_0^\infty p x^{p-1} 1_{(X \ge x)}\,dx$. Use the result …
5.46 Prove the following extension of (5.80). Let $X_i, \mathcal{F}_i$, $1 \le i \le n$, be a sequence of martingale differences. Then, for any $t > 0$, $P\{\sum_{i=1}^n X_i > t,\ E(X_i^k \mid \mathcal{F}_{i-1}) \le \frac{k!}{2} B^{k-2} a_i,\ k \ge 2,\ 1 \le$ …
5.45 Let $Y_i, \mathcal{F}_i$, $1 \le i \le n$, be a sequence of martingale differences. For any $A > 0$, define $X_i = Y_i 1$ …
5.44 Suppose that $X_1, \ldots, X_n$ are independent and distributed as $N(0, 1)$. (i) Determine the constants $B$ and $a_i$ in (5.79), where $\mathcal{F}_i = \sigma(X_1, \ldots, X_i)$. You may use the fact that if $X \sim N(0, 1)$, …
5.43 This exercise is related to Slepian’s inequality (5.97), including some of its corollaries. (i) Show that Slepian’s inequality implies (5.95) and (5.96). (Hint: The right sides of these …
5.42 Prove inequality (5.92).
5.41 Consider once again Example 5.16. Show that for any $0 \le t \le \sqrt{n}$, $P\{\frac{1}{\sqrt{n\lambda}}\sum_{i=1}^n (Y_i - \lambda) > 2t\} \le e^{-t^2}$ and $P\{\frac{1}{\sqrt{n\lambda}}\sum_{i=1}^n (Y_i - \lambda) < -2t\} \le e^{-t^2}$. How would you interpret the …
5.40 Continue with the martingale extension of Bernstein’s inequality. (i) Prove (5.83). (ii) Derive (5.85), the original inequality of Bernstein (1937), from (5.84).
5.39 This exercise is related to the derivation of (5.80). (i) Let $\xi_n = \sum_{k=2}^n (\lambda^k/k!) X_i^k$, $n = 2, 3, \ldots$, and $\eta = \sum_{k=2}^\infty (\lambda^k/k!)|X_i|^k$. Then we have $\xi_n \to \xi = \sum_{k=2}^\infty (\lambda^k/k!) X_i^k$ and $|\xi_n| \le \eta$ …
5.38 Derive Bernstein’s inequality (5.78).
5.37 Let $\xi_1, \xi_2, \ldots$ be a sequence of random variables such that $\xi_i - \xi_j \sim N(0, \sigma^2 |i - j|)$ for any $i \ne j$, where $\sigma^2$ is an unknown variance. Show that $\sup_{n > m \ge 1} E\{\max_{m \le k \le n} |\xi_k -$ …
5.36 Suppose that $X_1, \ldots, X_n$ are i.i.d. random variables such that $E(X_1) = 0$ and $E(|X_1|^p) < \infty$, where $p \ge 2$. Show that $E\left(\max_{1 \le k \le n}\left|\sum_{i=1}^k X_i\right|^p\right) = O(n^{p/2})$.
5.35 Let $\xi \sim N(0, 1)$, and let $F(\cdot)$ be any cdf that is strictly increasing on $(-\infty, \infty)$. Show that $E\{\xi F(\xi)\} > 0$. Can you relax the normality assumption?
5.34 Prove Carlson’s inequality: If $f \ge 0$ on $[0, \infty)$, then $\int_0^\infty f(x)\,dx \le \sqrt{\pi}\left\{\int_0^\infty f^2(x)\,dx\right\}^{1/4}\left\{\int_0^\infty x^2 f^2(x)\,dx\right\}^{1/4}$. [Hint: For any $a, b > 0$, write $\int_0^\infty f(x)\,dx = \int_0^\infty (a + bx^2)^{-1/2}(a +$ …
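A numeric sanity check of Carlson's inequality (not a proof; the test function $f(x) = e^{-x}$ is our choice):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x)
lhs = quad(f, 0, np.inf)[0]                             # int f       = 1
i2 = quad(lambda x: f(x) ** 2, 0, np.inf)[0]            # int f^2     = 1/2
ix2 = quad(lambda x: x ** 2 * f(x) ** 2, 0, np.inf)[0]  # int x^2 f^2 = 1/4
rhs = np.sqrt(np.pi) * i2 ** 0.25 * ix2 ** 0.25         # ~ 1.054
print(lhs, "<=", rhs)
```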
5.33 Let $A$ and $B$ be any events of a probability space $\Omega$. Show the following: (i) $P(A \cap B) \le \sqrt{P(A)P(B)}$; the result is an extension of Exercise 5.7. (ii) $P(A \cup B) \le [\{P(A)\}^{1/p} + \{P(B)\}^{1/p}]^p$ for any …
5.32 Let $X$ be a positive random variable. Show that $E\left\{\frac{X}{X + E(X)}\right\} \le \frac{1}{2}$.
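A Monte Carlo sanity check for 5.32 (illustrative only; the two positive distributions are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
for name, x in [("Exponential(1)", rng.exponential(size=10**6)),
                ("LogNormal(0,1)", rng.lognormal(size=10**6))]:
    # By the inequality, each printed value should be at most 0.5
    print(f"{name}: E[X/(X + EX)] ~ {(x / (x + x.mean())).mean():.3f}")
```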
5.31 Show that the martingale differences are orthogonal in the sense that $E(X_i X_j) = 0$, $i \ne j$ (see Example 5.14).
5.30 This exercise is associated with Example 5.13. (i) Show that (5.67) and (5.68) are unbiased in the sense that the expectations of the left sides equal the right sides if $\mu$ and $\sigma$ are the true …
5.29 Prove another case of the monotone function inequality: If $f$ is nondecreasing, $g$ is nonincreasing, and $h \ge 0$, then $\int f(x)g(x)h(x)\,dx \int h(x)\,dx \le \int f(x)h(x)\,dx \int g(x)h(x)\,dx$. Furthermore, if $f$ is …
5.28 Recall that $I_n$ and $1_n$ denote, respectively, the $n$-dimensional identity matrix and the $n$-dimensional vector of 1’s, and $J_n = 1_n 1_n'$. You may use the following result (see Appendix A.1): $|aI_n + bJ_n| = a^{n-1}(a + nb)$ …
5.27 Use the facts that for any symmetric matrix $A$, we have $\lambda_{\min}(A) = \inf_{|x|=1} x'Ax$ and $\lambda_{\max}(A) = \sup_{|x|=1} x'Ax$ to prove the following string of inequalities. For any symmetric matrices $A$, …
5.26 Show that $A \ge B$ implies $|A| \ge |B|$. (Hint: Without loss of generality, let $B > 0$. Then $A \ge B$ iff $B^{-1/2}AB^{-1/2} \ge I$.)
5.25 (Jiang et al. 2001a) Let $b > 0$ and $a, c_i$, $1 \le i \le n$, be real numbers. Define the matrix $A = \begin{pmatrix} 1 & a & 0 & \cdots & 0 \\ a & b & c_1 & \cdots & c_n \\ 0 & c_1 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & c_n & 0 & \cdots & 1 \end{pmatrix}$ …
5.24 Prove Schur’s inequality: For any square matrices $A$ and $B$, we have $\mathrm{tr}\{(AB)^2\} \le \|A\|_2^2 \|B\|_2^2$. [Hint: Let $A = (a_{ij})_{1 \le i,j \le n}$ and $B = (b_{ij})_{1 \le i,j \le n}$. Express the left side in terms of …
5.23 Prove inequality (5.49). [Hint: Note that (5.49) is equivalent to $\mathrm{tr}(AB^{-1} + BA^{-1} - 2I) \ge 0$, $\mathrm{tr}(AB^{-1}) = \mathrm{tr}(A^{1/2}B^{-1}A^{1/2})$, and $\mathrm{tr}(BA^{-1}) = \mathrm{tr}(A^{-1/2}BA^{-1/2})$.]
5.22 This exercise is regarding Example 5.8 (continued) in Section 5.3.2. For parts (i) and (ii) you may use the following matrix identity in Appendix A.1.2: $(D \pm B'A^{-1}B)^{-1} = D^{-1} \mp D^{-1}B'(A \pm BD^{-1}B')^{-1}BD^{-1}$ …
5.21 Prove the product inequality (5.44). [Hint: For any $A \ge 0$, we have $A \le \lambda_{\max}(A)I$, where $I$ is the identity matrix; use (iii) of Section 5.3.1.]
5.20 Derive (5.43) by Minkowski’s inequality (5.17).
5.19 This exercise is regarding Lemma 5.2. (i) Show that by letting $c_i = 0$ if $a_i = 0$ and $c_i = a_i^{-1}$ if $a_i > 0$, (5.33) is satisfied for all $x_i > 0$, $1 \le i \le s$. (ii) Prove a special case of Lemma …
5.18 (Estimating equations) A generalization of the WLS (see Example 5.8) is the following. Let $Y$ denote the vector of observations and $\theta$ a vector of parameters of interest. Consider an estimator of …
5.17 Many of the “cautionary tales” regarding extensions of results for nonnegative numbers to nonnegative definite matrices are due to the fact that matrices are not necessarily commutative. Two …
5.16 For any matrix $X$ of full rank, the projection matrix onto $\mathcal{L}(X)$, the linear space spanned by the columns of $X$, is defined as $P_X = X(X'X)^{-1}X'$ (the definition can be generalized even if $X$ is not …
5.15 Show that for any matrix $A$ of real elements, we have $A'A \ge 0$.
5.14 Show that in Example 5.8 the WLS estimator is given by (5.31), and its covariance matrix is given by (5.32). (Hint: You may use results in Appendix A.1 on differentiation of matrix expressions.)
5.13 Prove the following inequality. For any $x_1, \ldots, x_n$, we have $\sum_{1 \le i \ne j \le n} x_i^3 x_j^5 \le \sum_{1 \le i \ne j \le n} x_i^6 x_j^2$. Can you generalize the result?
5.12 This exercise is related to Example 5.5. (i) Use the monotone function technique to prove the following inequality: $e^x \le 1 + x + \frac{x^2}{2 - b}$, $|x| \le b$, where $b < 2$. (Hint: Take the logarithm of …
5.11 This exercise is regarding the inequality (5.26). (i) Verify the identity (5.27). (ii) Complete the proof of (5.26). (iii) Suppose that $f(x)$ and $g(x)$ are both strictly increasing, or both strictly …
5.10 This exercise is regarding the latter part of Example 5.6. (i) By using the same arguments, show that $I_2 \le \exp\{-\lambda\epsilon - \lambda B^2(2 - \lambda B)^{-1} n\}$. (ii) Show that the function $h(\lambda)$ defined by (5.20) …
5.9 Prove the left-side inequality in (5.18); that is, $\log(1 + x) \ge x - \frac{x^2}{2}$, $x \ge 0$.
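A sketch of one standard argument (the auxiliary function is our choice): let $h(x) = \log(1 + x) - x + x^2/2$; then

```latex
h(0) = 0, \qquad
h'(x) = \frac{1}{1+x} - 1 + x = \frac{x^2}{1+x} \ge 0 \quad (x \ge 0),
```

so $h$ is nondecreasing on $[0, \infty)$ and the inequality follows.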
5.8 Derive the conditions for equality in Minkowski’s inequality (5.17).
5.7 Let $x_1, \ldots, x_n$ be real numbers. Define a probability on the space $\mathcal{X} = \{x_1, \ldots, x_n\}$ by $P(A) = (\#\ \text{of}\ x_i \in A)/n$ for any $A \subset \mathcal{X}$. Show that $P(A \cap B) \le \sqrt{P(A)P(B)}$. [Hint: Note that $\#$ of $x_i \in A$ …
5.6 Verify the identity (5.13).
5.5 A pdf $f(x)$ is called log-concave if $\log\{f(x)\}$ is concave. Show that the following pdf’s are log-concave: (i) the pdf of $N(0, 1)$; (ii) the pdf of $\chi^2_\nu$, where the degrees of freedom $\nu \ge$ …
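For part (i), a one-line check (using the $N(0,1)$ pdf $\phi(x) = (2\pi)^{-1/2} e^{-x^2/2}$):

```latex
\log\phi(x) = -\frac{x^2}{2} - \frac{1}{2}\log(2\pi), \qquad
\frac{d^2}{dx^2}\log\phi(x) = -1 < 0,
```

so $\log\phi$ is concave.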
5.4 Show that for any $a_i > 0$, $b_i > 0$, and $\lambda_i \ge 0$, $1 \le i \le n$, such that $\sum_{i=1}^n \lambda_i = 1$, we have $\prod_{i=1}^n a_i^{\lambda_i} + \prod_{i=1}^n b_i^{\lambda_i} \le \prod_{i=1}^n (a_i + b_i)^{\lambda_i}$. When does the equality hold?