Questions and Answers of Principles of Managerial Statistics
2.9. Suppose that for each $1 \le j \le k$, $\xi_{n,j}$, $n = 1, 2, \dots$, is a sequence of random variables such that $\xi_{n,j} \xrightarrow{P} 0$ as $n \to \infty$. Define $\xi_n = \max_{1 \le j \le k} |\xi_{n,j}|$.
(i) Show that if $k$ is fixed, then $\xi_n \xrightarrow{P} 0$ as $n \to \infty$.
(ii) Give an example to show that if $k$ increases with $n$ (i.e., $k = k_n \to \infty$ as $n \to \infty$), the conclusion of (i) may not be true.
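For intuition on part (ii), here is a minimal simulation sketch; the particular construction ($\xi_{n,j}$ i.i.d. Bernoulli$(1/n)$ with $k_n = n^2$) is an illustrative choice of ours, not necessarily the book's intended example.

```python
import numpy as np

# Illustrative counterexample for 2.9(ii): xi_{n,j} ~ Bernoulli(1/n), independent.
# Each xi_{n,j} -> 0 in probability, but with k_n = n^2 terms,
# P(max_j xi_{n,j} = 1) = 1 - (1 - 1/n)^(n^2) -> 1, so xi_n does not -> 0.
rng = np.random.default_rng(0)
for n in [10, 50, 200]:
    k_n, reps = n ** 2, 500
    xi = rng.binomial(1, 1.0 / n, size=(reps, k_n))
    print(n, (xi.max(axis=1) == 1).mean())  # empirical P(xi_n = 1); stays near 1
```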
2.8. Continue with Problem 2.7 with $a_n = n$. Show the following:
(i) If $E(|X_1|) < \infty$, then $\xi_n \xrightarrow{L^1} 0$ as $n \to \infty$.
(ii) If $E(X_1^2) < \infty$, then $\xi_n \xrightarrow{\text{a.s.}} 0$ as $n \to \infty$.
Hint: For (i), first show that for any $a > 0$, $\max_{1 \le i \le n} |X_i| \le a + \sum_{i=1}^{n} |X_i| 1_{(|X_i| > a)}$. For (ii), use Theorem 2.8 and also note that, by exchanging the order of summation and expectation, one can show that for any $\epsilon > 0$, $\sum_{n=1}^{\infty} n P(|X_1| > \epsilon n) < \infty$.
2.7. Let $X_1, \dots, X_n$ be independent random variables with a common distribution $F$. Define $\xi_n = \max_{1 \le i \le n} |X_i| / a_n$, $n \ge 1$, where $a_n$, $n = 1, 2, \dots$, is a sequence of positive constants. Determine $a_n$ for the following cases such that $\xi_n \xrightarrow{P} 0$ as $n \to \infty$:
(i) $F$ is the Uniform[0, 1] distribution.
(ii) $F$ is the …
(iii) $F$ is the N(0, 1) distribution.
(iv) $F$ is the Cauchy(0, 1) distribution.
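To build intuition for the choice of $a_n$, the following sketch tracks how $\max_{i \le n} |X_i|$ grows under three of the named distributions; the $a_n$ suggestions in the comments are illustrative guesses of ours, not the book's answers.

```python
import numpy as np

# Growth of M_n = max_{i<=n} |X_i| under different F (Exercise 2.7 intuition).
# Uniform[0,1]: M_n <= 1, so any a_n -> infinity gives xi_n -> 0.
# N(0,1): M_n ~ sqrt(2 log n), so e.g. a_n = log n suffices.
# Cauchy(0,1): M_n grows linearly in n, so a_n must grow faster than n.
rng = np.random.default_rng(1)
for n in [10**3, 10**4, 10**5, 10**6]:
    u = np.abs(rng.uniform(0, 1, n)).max()
    z = np.abs(rng.standard_normal(n)).max()
    c = np.abs(rng.standard_cauchy(n)).max()
    print(n, u, z, c)
```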
2.6. Use Theorem 2.5 and the $\epsilon$-$\delta$ argument to prove Theorem 2.6.
1.21. Show that if $f(x)$ is continuous on $[a, b]$ and differentiable on $(a, b)$, and there is $x_0 \in (a, b)$ such that $f(x_0) > f(a) \vee f(b)$ or $f(x_0) < f(a) \wedge f(b)$, then there is $x_* \in (a, b)$ such that $f'(x_*) = 0$.
1.14. In the proof of the Cramér consistency given in Section 1.4, show that $P_\theta\left(\sum_{i=1}^{n} Y_i > 0\right) \to 1$.
1.13. Extend the proof of the Cramér consistency given in Section 1.4 to the case of multivariate observations and parameters; that is, $X_1, \dots, X_n$ are i.i.d. vector-valued observations that have the common joint pdf $f(x|\theta)$, where $\theta$ is a vector-valued parameter with the parameter space $\Theta \subset \mathbb{R}^p$ ($p \ge 1$).
1.9. Suppose that $X$ is a continuous random variable with a unique median $a$ (see Example 1.5). Show that $P(X \le x) < 1/2$ for $x < a$, and $P(X \le x) > 1/2$ for $x > a$.
1.7. Riemann's ζ-function is defined as the infinite series $\zeta(x) = \sum_{n=1}^{\infty} \frac{1}{n^x}$.
(a) Show that $\zeta(x)$ is uniformly convergent for $x \in [a, \infty)$, where $a$ is any number greater than 1.
(b) Show that $\zeta(x)$ is continuous on $[a, \infty)$ for the same $a$.
(c) Is $\zeta(x)$ differentiable on $[a, \infty)$? If so, find an expression of $\zeta'(x)$ in terms of an infinite series.
1.5. A sequence $a_n$, $n = 0, 1, \dots$, is defined as follows. Starting with initial values $a_0$ and $a_1$, let $a_{n+1} = \frac{3}{2} a_n - \frac{1}{2} a_{n-1}$, $n = 1, 2, \dots$.
(a) Use Cauchy's criterion to show that the sequence converges.
(b) Find the limit of the sequence. Does the limit depend on the initial values $a_0$ and $a_1$?
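A quick numerical check for this exercise (a sketch, not a proof): solving the characteristic equation $2x^2 - 3x + 1 = 0$ (roots $1$ and $1/2$) gives $a_n = A + B(1/2)^n$ with $A = 2a_1 - a_0$, so the limit should be $2a_1 - a_0$.

```python
# Iterate a_{n+1} = (3/2) a_n - (1/2) a_{n-1} and compare with the
# closed-form limit 2*a1 - a0 (from a_n = A + B*(1/2)^n with
# A = 2*a1 - a0 and B = 2*(a0 - a1)).
def limit_of_sequence(a0, a1, steps=60):
    prev, cur = a0, a1
    for _ in range(steps):
        prev, cur = cur, 1.5 * cur - 0.5 * prev
    return cur

for a0, a1 in [(0.0, 1.0), (5.0, 2.0), (-3.0, -3.0)]:
    print(limit_of_sequence(a0, a1), 2 * a1 - a0)  # the two columns agree
```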
1.4. The Student's t-distribution has extensive statistical applications. It is defined as a continuous distribution with the pdf
$$\phi(x|\nu) = \frac{\Gamma\{(\nu + 1)/2\}}{\sqrt{\nu\pi}\,\Gamma(\nu/2)} \left(1 + \frac{x^2}{\nu}\right)^{-(\nu+1)/2},$$
where $\nu$ is the degrees of freedom (d.f.) of the t-distribution. Show that the pdf of the t-distribution converges to that of the standard normal distribution as the d.f. goes to infinity; that is, $\phi(x|\nu) \to \phi(x)$, the N(0, 1) pdf, for every fixed $x$ as $\nu \to \infty$.
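Not a proof, but a quick numerical sanity check of the claimed convergence, assuming scipy is available:

```python
import numpy as np
from scipy.stats import t, norm

# phi(x|nu) should approach the N(0,1) pdf pointwise as nu -> infinity.
x = np.linspace(-4, 4, 9)
for nu in [1, 10, 100, 1000]:
    gap = np.max(np.abs(t.pdf(x, df=nu) - norm.pdf(x)))
    print(nu, gap)  # the maximum pointwise gap shrinks as nu grows
```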
1.1. Use the $\epsilon$-$\delta$ argument to show that for any $a \in (-\infty, \infty)$, $\left(1 + \frac{1}{n}\right)^a \to 1$ as $n \to \infty$.
9.30 Show that $\hat{p} \xrightarrow{\text{a.s.}} p$ as $n \to \infty$ if and only if $P(\hat{p} = p \text{ for large } n) = 1$. Here, $n = (n_1, n_2)$ is large if and only if both $n_1$ and $n_2$ are large. Let $\tilde{b}$ and $\hat{b}$ denote the Y-W estimator of $b$, …
9.29 Continue with the previous exercise. (i) Find an expression for the LS estimator of $b$, the vector of the AR coefficients, that minimizes (9.75). Is the LS estimator different from the Y-W …
9.28 Suppose that $X_t$ is a spatial AR series satisfying $X_{(t_1,t_2)} - 0.5X_{(t_1-1,t_2)} - 0.5X_{(t_1,t_2-1)} + 0.25X_{(t_1-1,t_2-1)} = W_{(t_1,t_2)}$, $t \in \mathbb{Z}^2$, where $W_t$ is a spatial WN(0, $\sigma^2$) series. (i) What is …
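Note that the AR polynomial factors as $(1 - 0.5B_1)(1 - 0.5B_2)$, where $B_1, B_2$ are the coordinate backshift operators. A hedged simulation sketch on a finite grid (the zero boundary is a simplification of ours; it only approximates the stationary series away from the boundary):

```python
import numpy as np

# Simulate X(t1,t2) = 0.5 X(t1-1,t2) + 0.5 X(t1,t2-1)
#                   - 0.25 X(t1-1,t2-1) + W(t1,t2)
# on an N x N grid, with W ~ WN(0, sigma^2) and zeros outside the grid.
N, sigma = 200, 1.0
rng = np.random.default_rng(2)
W = rng.normal(0.0, sigma, size=(N, N))
X = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        x10 = X[i - 1, j] if i > 0 else 0.0
        x01 = X[i, j - 1] if j > 0 else 0.0
        x11 = X[i - 1, j - 1] if i > 0 and j > 0 else 0.0
        X[i, j] = 0.5 * x10 + 0.5 * x01 - 0.25 * x11 + W[i, j]
print(X[N // 2 :, N // 2 :].var())  # sample variance away from the boundary
```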
9.27 Suppose that $W_t$, $t \in \mathbb{Z}^2$, satisfy (9.63) and $E(|W_t|^p) < \infty$ for some $p > 1$. Use the TMD extension of Burkholder's inequality [i.e., (9.70) and (9.71)] to establish the following SLLN: …
9.26 Let $n_i = ([e^{i_1}], [e^{i_2}])$ for $i = (i_1, i_2)$, where $[x]$ represents the largest integer $\le x$. Show that $\sum_{i \ge 1} (\log|n_i|)^{-a} < \infty$ for sufficiently large $a$, where $|n| = n_1 n_2$ for any $n = (n_1, n_2)$.
9.25 Let $W_t$, $t \in \mathbb{Z}^2$, be a WN(0, $\sigma^2$) spatial series and let $u \ne v$. Show that (9.68) is bounded a.s. if and only if $\hat{\gamma}(u, v) - \gamma(u, v) = O\left(\sqrt{\log\log|n| / |n|}\right)$ a.s., where $\gamma(u, v)$ and $\hat{\gamma}(u, v)$ …
9.24 Let $X_t$, $t \in \mathbb{Z}^2$, be a strictly stationary spatial series. Define the invariant σ-fields $X^{-1}(\bar{\tau}_U)$, $X^{-1}(\bar{\tau}_V)$, and $X^{-1}(\bar{\tau})$ as in Section 9.6 (above Theorem 9.8) with … replaced …
9.23 Show that (9.63) is a weaker TMD condition than (9.62).
9.22 Show that if $\epsilon_t$, $t \in \mathbb{Z}^2$, is strictly stationary, then (9.57) holds if and only if (9.47) holds for all $t$.
9.21 (i) Show that (9.47) with $P(t) = P_1(t)$ implies (9.55) and (9.56). (ii) Give an example of a spatial series $\epsilon_t$ satisfying (9.55) but not (9.56). (iii) Show that (9.57) implies (9.58) for all $m \ge$ …
9.20 This exercise is related to Example 9.7. (i) It was shown that $\epsilon_t$, $t \in \mathbb{Z}^2$, is a TMD with respect to $P(t) = P_2(t)$, but not necessarily a TMD with respect to $P(t) = P_1(t)$. Is $\epsilon_t$, $t \in \mathbb{Z}^2$, a …
9.19 Let $W_t$ be as in Example 9.6. Define $\epsilon_t = W_{(t_1+1, t_2-1)}$. Show that $\epsilon_t$, $t \in \mathbb{Z}^2$, is a TMD with respect to $P(t) = P_j(t)$, $j = 1, 2, 3, 4$.
9.18 Verify that the spatial series $\epsilon_t$ defined in Example 9.6 is a TMD with respect to $P(t) = P_j(t)$, $j = 2, 3, 4$ (the case $j = 1$ was already verified).
9.17 Draw diagrams of the different pasts, $P_j(t)$, $j = 1, 2, 3, 4$, defined in Section 9.6, and show that $P_1(t) \supset P_2(t) \supset P_3(t) \supset P_4(t)$.
9.16 (i) Show that for any random variable $X$, $E\left[X^2 \{\log(|X|)\}^{d-1} / \log\log(|X|)\right] < \infty$, where $d > 1$, implies $E(X^2) < \infty$. (ii) Give an example of a random variable $X$ such that $E\left\{X^2 / \log\log(|X|)\right\} < \infty$ but …
9.15 Regarding the diagonal method of rearranging a spatial series as a time series (see the second paragraph of Section 9.6), write the order of terms in the summation in (9.39) for the following …
9.14 Suppose that $X_t$, $t \in \mathbb{N}^2$, is an i.i.d. spatial series. Show that (9.39) holds when $n_1 = n_2 \to \infty$, provided that $E\{|X_{(1,1)}|\} < \infty$.
9.13 Verify the reversed ARMA model (9.37) as well as (9.38).
9.12 Verify the Yule–Walker equation (9.33) as well as (9.34).
9.11 Show that the autocovariance function of an ARMA(p, q) process can be expressed as (9.31). Furthermore, show that there is a constant $c > 0$ such that $|\gamma(h)| \le c\rho^{-h}$, $h \ge 0$, where $\rho > 1$ is the …
9.10 Show that the Wold coefficients of an ARMA process $X_t$ in (9.30) satisfy the following: (i) $\phi_0 = 1$; (ii) $\phi_j \rho^j \to 0$ as $j \to \infty$, where $\rho > 1$ is the number in (9.29).
9.9 Show that the Kullback–Leibler information defined by (9.21) is $\ge 0$, with equality holding if and only if $f = g$ a.e.; that is, $f(x) = g(x)$ for all $x \notin A$, where $A$ has Lebesgue measure zero.
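A numerical illustration of the claim (a sketch assuming scipy; the two Gaussian densities below are arbitrary choices): the Kullback–Leibler information $\int f \log(f/g)$ is nonnegative, and vanishes when $f = g$ a.e.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# KL(f||g) = int f(x) log{f(x)/g(x)} dx, computed by quadrature.
# The range [-30, 30] keeps the normal tails away from 0/0 issues;
# the mass omitted there is negligible for these densities.
def kl(f, g):
    integrand = lambda x: f.pdf(x) * np.log(f.pdf(x) / g.pdf(x))
    val, _ = quad(integrand, -30.0, 30.0)
    return val

f = norm(0.0, 1.0)
print(kl(f, norm(1.0, 2.0)))  # strictly positive, since f != g
print(kl(f, norm(0.0, 1.0)))  # essentially 0, since f = g
```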
9.8 Show that if the innovations $\epsilon_t$ form a Gaussian WN(0, $\sigma^2$) process with $\sigma^2 > 0$, then (9.18) holds.
9.7 Suppose that $X_t$ and $Y_t$ are both second-order stationary and the two time series are independent with the same mean and autocovariance function. Define a "coded" time series as $Z_t = X_t$ if $t$ is …
9.6 Suppose that $X_t$, $t \in \mathbb{Z}$, is second-order stationary. Show that if the $n$th-order covariance matrix of $X_t$, $\Gamma_n = [\gamma(i - j)]_{1 \le i,j \le n}$, is singular, then there are constants $a_j$, $0 \le j \le$ …
9.5 The time series $X_t$, $t \in \mathbb{Z}$, satisfies (i) $E(X_t^2) < \infty$; (ii) $E(X_t) = \mu$, a constant; and (iii) $E(X_s X_t) = \psi(t - s)$ for some function $\psi$, for $s, t \in \mathbb{Z}$. Show that $X_t$, $t \in \mathbb{Z}$, is second-order stationary.
9.4 (Brownian motion and WN). Recall that a stochastic process $B(t)$, $t \ge 0$, is called a Brownian motion if it satisfies (i) $B(0) = 0$; (ii) for any $0 \le s < t$, $B(t) - B(s) \sim N(0, t - s)$; and (iii) the …
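A minimal simulation sketch matching properties (i) and (ii) of the definition; the cumulative-sum construction below also builds in independent increments, which is presumably the content of the truncated property (iii):

```python
import numpy as np

# Brownian motion on [0, 1] via cumulative sums of independent N(0, dt)
# increments, so that B(0) = 0 and B(t) - B(s) ~ N(0, t - s).
rng = np.random.default_rng(3)
n_steps, n_paths = 2000, 2000
dt = 1.0 / n_steps
paths = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)).cumsum(axis=1)

# Empirical check of property (ii) at s = 0.25, t = 0.75:
diffs = paths[:, int(0.75 * n_steps) - 1] - paths[:, int(0.25 * n_steps) - 1]
print(diffs.mean(), diffs.var())  # should be near 0 and t - s = 0.5
```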
9.3 (Poisson process and WN). A stochastic process $P(t)$, $t \ge 0$, is called a Poisson process if it satisfies the following: (i) $P(0) = 0$; (ii) for any $0 \le s < t$ and nonnegative integer $k$, $P\{P(t)$ …
9.2 Let $X_t$, $t \in \mathbb{Z}^2$, be a spatial series such that $E(X_t^2) < \infty$ for any $t$. Show that the following two statements are equivalent: (i) $E(X_t)$ is a constant and $E(X_{s+h} X_{t+h}) = E(X_s X_t)$ for all $s, t$, …
9.1 Verify the basic properties (i)–(iii) of an autocovariance function [below (9.4) in Section 9.1].
8.38 Verify that the sequence ξn, n ∈ Z, in the input/output system considered at the end of Section 8.9 satisfies the Markovian property (8.62).
8.37 This exercise is related to the proof of Theorem 8.15. (i) Prove Lemma 8.9. (ii) Verify (8.61).
8.36 Show that the array of random variables defined in Example 8.22 satisfies (8.59) but not (8.58).
8.35 Show that if ξni, 1 ≤ i ≤ kn, are independent N(0, 1) random variables, then (8.58) holds if and only if (8.59) holds.
8.34 Show that the model in Example 8.20 can be expressed as (8.55) (this includes determination of the number $s$ and the matrices $X, Z_1, \dots, Z_s$), with all the subsequent assumptions satisfied.
8.33 Show that the function $\xi_n$ defined by (8.50) is simply the linear interpolation between the points $(0, 0)$, $\left(U_1^2/U_n^2,\, S_1/U_n\right)$, …, $\left(1,\, S_n/U_n\right)$.
8.32 Show that if $\xi_n$, $n \ge 1$, is a sequence of random variables and $F$ is a continuous cdf, then $\sup_{-\infty < x < \infty}$ …
8.31 [MA(1) process]. A time series $X_t$, $t \in T = \{\dots, -1, 0, 1, \dots\}$, is said to be a moving-average process of order 1, denoted by MA(1), if it satisfies $X_t = \epsilon_t + \theta\epsilon_{t-1}$ for all $t$, …
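A simulation sketch of an MA(1) series, assuming (as is standard, and presumably stated in the truncated text) that $\epsilon_t$ is a WN$(0, \sigma^2)$ sequence: the sample autocovariances should be near $(1 + \theta^2)\sigma^2$ at lag 0, $\theta\sigma^2$ at lag 1, and 0 at higher lags.

```python
import numpy as np

# X_t = e_t + theta * e_{t-1}, with e_t ~ WN(0, sigma^2) (here Gaussian).
rng = np.random.default_rng(4)
n, theta, sigma = 100_000, 0.6, 1.0
e = rng.normal(0.0, sigma, size=n + 1)
X = e[1:] + theta * e[:-1]

def sample_autocov(x, h):
    xc = x - x.mean()
    return np.mean(xc[h:] * xc[: len(x) - h])

for h in range(4):
    print(h, sample_autocov(X, h))
# Expected: (1 + theta^2) sigma^2 = 1.36, theta sigma^2 = 0.6, then ~0, ~0.
```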
8.30 Let $Z_0, Z_1, \dots$ be independent N(0, 1) random variables. Find a suitable sequence of normalizing constants, $a_n$, such that $\frac{1}{a_n}\sum_{i=1}^{n} Z_{i-1} Z_i \xrightarrow{d} N(0, 1)$, and justify your answer. For the …
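A Monte Carlo sanity check (a sketch, not the book's solution): the summands $Z_{i-1}Z_i$ are martingale differences with unit variance, so $a_n = \sqrt{n}$ is the natural candidate.

```python
import numpy as np
from scipy.stats import norm

# Does n^{-1/2} * sum_{i=1}^n Z_{i-1} Z_i look standard normal?
rng = np.random.default_rng(5)
n, reps = 2000, 5000
Z = rng.standard_normal(size=(reps, n + 1))
S = (Z[:, :-1] * Z[:, 1:]).sum(axis=1) / np.sqrt(n)
print(S.mean(), S.var())                  # ~0 and ~1
print(np.mean(S <= 1.0), norm.cdf(1.0))   # empirical vs N(0,1) cdf at 1
```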
8.29 This exercise is related to Example 8.15. (i) Verify (8.37). (ii) Show that $n^{-1}\sum_{i=1}^{n} X_i \xrightarrow{P} h(\mu + \alpha)$. [Hint: Use a result derived in the example on $E(X_i \mid \mathcal{F}_{i-1})$ and Example 8.14.]
8.28 Derive the classical CLT from the martingale CLT; that is, show by Theorem 8.7 that if $X_1, X_2, \dots$ are i.i.d. with $E(X_i) = 0$ and $E(X_i^2) = \sigma^2 \in (0, \infty)$, then $n^{-1/2}\sum_{i=1}^{n} X_i \xrightarrow{d} N(0, \sigma^2)$.
8.27 Suppose that $\xi_1, \xi_2, \dots$ are independent such that $\xi_i \sim \text{Bernoulli}(p_i)$, where $p_i \in (0, 1)$, $i \ge 1$. Show that as $n \to \infty$, $\frac{1}{n}\sum_{i=1}^{n} \xi_1 \cdots \xi_{i-1}(\xi_i - p_i) \xrightarrow{\text{a.s.}} 0$.
8.26 This exercise is related to Example 8.14. (i) Show that condition (8.25) implies (8.27) and (8.28) for every $i \ge 1$. (ii) Show that $E(|X|) < \infty$ implies (8.29). (iii) Show that (8.27) and $E(|X|$ …
8.25 Give two examples to show that (8.22) and (8.24) do not imply each other. In other words, construct two sequences of martingale differences so that the first sequence satisfies (8.22) but not (8.24), …
8.24 In this exercise you are asked to provide a proof for part (ii) of Theorem 8.5. (i) Let $Y_i = X_i / a_i$, $i \ge 1$. Show that $E(Y_i^2 \mid \mathcal{F}_{i-1}) \le b_i$ if $E(|X_i|^p \mid \mathcal{F}_{i-1}) \le a_i^p b_i^{p/2}$ …
8.23 In this exercise you have an opportunity to practice the stopping-time technique that we used in Example 8.13 by giving a proof for part (i) of Theorem 8.5. (i) Show that for any $p \in (0, 1)$ and …
8.22 Show, by Example 8.7, that the τ defined in Example 8.13 is a stopping time with respect to the σ-fields Fn, n ≥ 1.
8.21 This exercise is associated with Example 8.12. (i) Show that the sequence $S_n$, $\mathcal{F}_n$, $n \ge 1$, is a martingale with $E(S_n) = 0$, $n \ge 1$. (ii) Show that $3 \times 2^{i-2} > n$ if and only if $i > l_n$. (iii) Show …
8.20 The proof of Theorem 8.3 is fairly straightforward. Try it.
8.19 Suppose that $X_n$, $n \ge 1$, are m-dependent as in Exercise 8.11 and $E(X_n) = \mu \in (-\infty, \infty)$, $n \ge 1$. Let $\tau$ be a stopping time with respect to $\mathcal{F}_n = \sigma(X_1, \dots, X_n)$ such that $E(\tau) < \infty$. …
8.18 Suppose that $S_n$, $\mathcal{F}_n$, $n \ge 1$, is a submartingale. Show that conditions (i) and (ii) are equivalent: (i) condition (8.17) and $E(|S_1|) < \infty$; (ii) condition (8.19).
8.17 Complete the arguments for submartingale and supermartingale in Example 8.8.
8.16 Show that if $\tau_1$ and $\tau_2$ are both stopping times with respect to $\mathcal{F}_n$, $n \ge 1$, then $\{\tau_1 \le \tau_2\} \in \mathcal{F}_{\tau_2}$.
8.15 Let $X_i$, $i \ge 1$, be i.i.d. with cdf $F$. Define $\tau_k$ as in Exercise 8.14. Also, let $\omega_F = \sup\{x : F(x) < 1\}$. Show that (i)–(iii) are equivalent: (i) $\tau_k < \infty$ a.s. for every $k \ge 1$; (ii) $\tau_k <$ …
8.14 (Record-breaking time). Let $X_n$, $n \ge 1$, be a sequence of random variables. Define $\tau_1 = 1$ and, for $k \ge 1$, $\tau_{k+1} = \inf\{n > \tau_k : X_n > X_{\tau_k}\}$ if $\tau_k < \infty$ and $\{n \ge 1 : X_n > X_{\tau_k}\} \ne \emptyset$, and $\tau_{k+1} = \infty$ otherwise.
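A simulation sketch of the record-breaking times for i.i.d. observations; the standard normal distribution is an arbitrary choice (with a continuous distribution, ties occur with probability zero):

```python
import numpy as np

# Record-breaking times: tau_1 = 1; tau_{k+1} is the first n > tau_k
# with X_n > X_{tau_k}, so the record values strictly increase.
rng = np.random.default_rng(6)
X = rng.standard_normal(100_000)
records = [0]                         # 0-based indices; tau_1 = 1
for n in range(1, len(X)):
    if X[n] > X[records[-1]]:
        records.append(n)
print([i + 1 for i in records[:10]])  # the first few tau_k (1-based)
```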
8.13 (U-statistics). A sequence of random variables $X_n$, $n \ge 1$, is said to be exchangeable if for any $n > 1$ and any permutation $i_1, \dots, i_n$ of $1, \dots, n$, $(X_{i_1}, \dots, X_{i_n})$ has the same distribution as $(X_1, \dots, X_n)$. …
8.12 Suppose that $X_1, \dots, X_n$ are i.i.d. with finite expectation. Define $S_k = \sum_{i=1}^{k} X_i$, $M_k = (n - k + 1)^{-1} S_{n-k+1}$, and $\mathcal{F}_k = \sigma(S_n, \dots, S_{n-k+1})$, $1 \le k \le n$. Show that $M_k$, $\mathcal{F}_k$, $1 \le k \le n$, is a martingale.
8.11 A sequence of random variables $X_n$, $n \ge 1$, is said to be m-dependent if for any $n \ge 1$, $\sigma(X_1, \dots, X_n)$ and $\sigma(X_{n+m+1}, \dots)$ are independent. Suppose, in addition, that $E(X_n) = 0$, $n \ge$ …
8.10 Prove properties (i)–(iv) of Lemma 8.7.
8.9 Prove properties (i) and (ii) of Lemma 8.6.
8.8 Verify properties (i)–(iii) of Lemma 8.5.
8.7 Verify properties (i)–(iv) of Lemma 8.4.
8.6 Show that $Y_i$, $\mathcal{F}_i$, $1 \le i \le n$, in Example 8.3 is a sequence of martingale differences.
8.5 Prove Lemma 8.3.
8.4 Show that Sn, Fn, n ≥ 1, in Example 8.2 is a martingale.
8.3 Prove Lemma 8.1 (note that the “only if” part is obvious).
8.2 Show that if $S_n$, $\mathcal{F}_n$, $n \ge 1$, is a martingale according to the extended definition (8.2), then $S_n$, $n \ge 1$, is a martingale according to (8.1). Give an example to show that the converse is not true.
8.1 This exercise is in connection with the opening problem on casino gambling (Section 8.1). (i) Show that whenever the gambler wins, he recovers all his previous losses plus an additional $5. (ii) …
7.24 Verify the second identity in (7.43).
7.23 Verify properties (i)–(iv) for the ROC curve and ODC defined in Section 7.8.
7.22 Show that in Example 7.10 we have $N\{\epsilon, \mathcal{F}, L_r(P)\} < \infty$ for all $\epsilon > 0$.
7.21 Give a specific example of a stationary ϕ-mixing (but not i.i.d.) sequence that satisfies the conditions of Theorem 7.17.
7.20 Show that Billingsley's theorem on weak convergence of the empirical process of a stationary ϕ-mixing sequence (Theorem 7.17) implies the Doob–Donsker theorem (Theorem 7.4), so the former …
7.19 For the weighted empirical process defined by (7.34), verify the covariance function (7.35).
7.18 This exercise is regarding (7.33), which is a key condition in Lemma 7.1. Hoeffding (1956) originally required $g(k) + g(k + 2) > 2g(k + 1)$, $0 \le k \le n - 2$, (7.45) instead of (7.33) (also see …
7.17 This exercise explores some properties of the function $I_h$ defined by (7.29). (i) Take $h = 1$. Show that $I_1$ is nondecreasing on $(0, 1)$, $I_1(\lambda) = 2\lambda^2 + O(\lambda^3)$ as $\lambda \to 0$, and $I_1(\lambda) \to \infty$ as $\lambda \to$ …
7.16 Derive (7.28) using the general result of Section 6.6.2.
7.15 Show that for any $0 < a \le 1/2$ and $\lambda > 0$, we have
$$P\left\{\sup_{0 \le h \le a}\, \sup_{0 \le t \le 1-h} |U_n(t + h) - U_n(t)| \ge \lambda\sqrt{a}\right\} \le \frac{160}{a} \exp\left\{-\frac{\lambda^2}{32}\,\psi\!\left(\frac{\lambda}{\sqrt{an}}\right)\right\},$$
where $\psi$ is the function defined by (7.22).
7.14 Verify the following properties of the function $\psi$ defined by (7.22): (a) $\psi(u)$ is nonincreasing for $u \ge -1$, with $\psi(0) = 1$; (b) $u\psi(u)$ is nondecreasing for $u \ge -1$; (c) $\psi(u) \sim (2\log u)/u$ as $u \to \infty$.