Questions and Answers of Principles of Managerial Statistics
11.2. Verify (11.13)–(11.15).
11.1. Verify (11.6)–(11.8).
10.41. Verify that for the diffusion process in Example 10.16, the density function (10.57) reduces to e^{−2|x−μ|}
10.40. Show that the process W(a), a ≥ 0, defined below (10.54) is a Brownian motion. [Hint: Use Lemmas 10.1 and 10.2.]
10.39. This exercise is related to the heat equation (Example 10.15).
(i) Verify that the pdf of N(x, t), f(y, t, x) = (2πt)^{−1/2} exp{−(y − x)²/2t}, −∞ < y < ∞
(ii) Verify that the function u(t, x) defined by (10.49) satisfies the heat equation.
(iii) Show, by taking expectations under the integral signs, that (10.50) implies (10.51); then obtain the heat equation by taking the partial derivatives with respect to t on both sides of (10.51).
10.38. Verify that τ_a defined by (10.42) is a stopping time (whose definition is given above Lemma 10.2).
10.37. Let B(t), t ≥ 0, be Brownian motion and a < 0
10.36. This exercise is associated with Example 10.11.
(i) Verify (10.36) and (10.37).
(ii) Show, by using the Lindeberg–Feller theorem (Theorem 6.11; use the extended version following that
10.35. Show that the Brownian bridge U(t), 0 ≤ t ≤ 1 (defined below Theorem 10.15), is a Gaussian process with mean 0 and covariances cov{U(s), U(t)} = s(1 − t), s ≤ t.
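Although the exercise asks for a proof, the covariance formula is easy to sanity-check numerically. The sketch below is an illustration only (grid size, path count, and the points s = 0.25, t = 0.75 are arbitrary choices): it simulates the bridge via the standard representation U(t) = B(t) − tB(1) and compares the empirical covariance with s(1 − t).

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20000, 100
dt = 1.0 / n_steps

# Brownian motion on (0, 1]: cumulative sums of independent N(0, dt) increments.
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
t_grid = np.arange(1, n_steps + 1) * dt

# Brownian bridge via U(t) = B(t) - t * B(1), which pins U(0) = U(1) = 0.
U = B - np.outer(B[:, -1], t_grid)

s_idx, t_idx = 24, 74                         # s = 0.25, t = 0.75 on the grid
emp_cov = np.mean(U[:, s_idx] * U[:, t_idx])  # E U = 0, so this estimates the cov
print(emp_cov)                                # close to 0.25 * (1 - 0.75) = 0.0625
```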
10.34. Prove the SLLN for Brownian motion (Theorem 10.14). [Hint: First show that the sequence B(n)/n, n = 1, 2, ..., converges to zero almost surely as n → ∞; then show that B(t) does not oscillate
10.33. Let B(t), t ≥ 0, be a Brownian motion. Show that B²(t) − t, t ≥ 0, is a continuous martingale in that for any s
10.32. Let B(t), t ≥ 0, be a standard Brownian motion. Show that each of the following is a standard Brownian motion:
(1) (Scaling relation) a⁻¹B(a²t), t ≥ 0, where a ≠ 0 is fixed.
(2) (Time
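The scaling relation lends itself to a quick numerical check (an illustrative sketch, not a proof; a = 2 and the grid are arbitrary choices): if B is standard Brownian motion, then X(t) = a⁻¹B(a²t) should again satisfy Var X(t) = t.

```python
import numpy as np

rng = np.random.default_rng(1)
a, t_val = 2.0, 0.5
n_paths, n_steps = 20000, 400
dt = (a**2 * 1.0) / n_steps               # simulate B on (0, a^2], far enough for a^2 * t_val

B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

# X(t) = a^{-1} B(a^2 t); for a standard Brownian motion, Var X(t) = t.
idx = int(round(a**2 * t_val / dt)) - 1   # grid index of time a^2 * t_val
X_t = B[:, idx] / a
print(X_t.var())                          # should be close to t_val = 0.5
```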
10.31. In this exercise you are encouraged to give a proof of the reflection principle of Brownian motion (Theorem 10.13).
(i) Show that if X, Y, and Z are random vectors such that (a) X and Y are
(10.5) and the result of (i) to show that (X, Y) and (X, Z) have the same distribution. The reflection principle then follows.
10.30. This exercise shows how to justify assumption (ii) given assumption (i) of Brownian motion (see Section 10.5). Suppose that assumption (i) holds such that E{B(t) − B(s)} = 0, E{B(t) −
10.28. This exercise is related to the proof of Theorem 10.10.
(i) Verify (10.29).
(ii) Show that u_t → −x as t → ∞.
(iii) Derive (10.31).
10.26. Show that the renewal function has the following expression: m(t) = Σ_{n=1}^∞ F_n(t), where F_n(·) is the cdf of S_n.
10.25. Let U be a random variable that has the Uniform(0, 1) distribution. Define ξ_n = n·1(U ≤ n⁻¹), n ≥ 1.
(i) Show that ξ_n → 0 a.s. as n → ∞.
(ii) Show that E(ξ_n) = 1 for every n, and therefore does not converge to E(0) = 0 as n → ∞.
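The dichotomy in this exercise (almost sure convergence to 0 while E(ξ_n) = 1 for every n) is easy to see by simulation; the sketch below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
U = rng.uniform(size=100_000)             # one Uniform(0, 1) draw per sample path

# xi_n = n * 1(U <= 1/n).  For a fixed realization U > 0, xi_n = 0 as soon
# as n > 1/U, so xi_n -> 0 a.s.; yet E(xi_n) = n * P(U <= 1/n) = 1 for all n.
for n in (10, 1_000, 100_000):
    xi_n = n * (U <= 1.0 / n)
    print(n, xi_n.mean(), (xi_n == 0).mean())
```

For small n the empirical mean sits near 1; for large n almost every path has ξ_n = 0, and the mean is carried by a handful of huge outliers, which is exactly why the a.s. limit and the expectation disagree.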
10.24. Give a proof of Theorem 10.7. [Hint: Note that S_{N(t)} ≤ t
10.21. Derive (10.25) and (10.26). Also obtain the corresponding results for a Poisson process using Theorem 10.5.
10.18. For the third definition of a Poisson process, derive the pdf of S_n, the waiting time until the nth event. To what family of distributions does the pdf belong?
10.17. Consider a birth and death chain with two reflecting barriers (i.e., the state space is {0, 1, ..., l}); the transition probabilities are given as in Example 10.4 for 1 ≤ i ≤ l − 1; q_0 =
(iii) Determine the stationary distribution for the chain.
10.16. This exercise is related to the birth and death chain of Example 10.4.
(i) Show that the chain is irreducible if p_i > 0, i ≥ 0, and q_i > 0, i ≥ 1.
(ii) Show that the chain is aperiodic if r_i > 0 for some i.
(iii) Show that the chain has period 2 if r_i = 0 for
(iv) Show that the simple random walk (see Example 10.2) with 0
10.15. Consider a Markov chain with states 0, 1, 2, ... such that p(i, i+1) = p_i and p(i, i−1) = 1 − p_i, where p_0 = 1. Find the necessary and sufficient condition on the p_i's for the chain to
10.14. Show that positive recurrence implies recurrence. Also show that positive (null) recurrence is a class property.
10.6. This exercise is related to Example 10.1 (continued in Section 10.2).
(i) Show that the one- and two-step transition probabilities of the Markov chain are given by (10.9) and (10.10),
(iii) Show that (10.12) holds for any possible values j_1, ..., j_s of T_1, ..., T_s, respectively.
(iv) Show that i_s − 10 ≤ X(j_s) ≤ i_s − 1 on A, where A is defined in (10.12).
(v) Derive (10.14) using (10.13) and independence of X and Y. [Note that the first inequality in (10.14) is obvious.]
10.5. Show that the process X_n, n ≥ 1, in Example 10.2 is a Markov chain with transition probability p(i, j) = a_{j−i}, i, j ∈ S = {0, ±1, ...}.
10.4. Derive the Chapman–Kolmogorov identity (10.7).
10.3. Show that the finite-dimensional distributions of a Markov chain X_n, n = 0, 1, 2, ..., are determined by its transition probability p(·, ·) and initial distribution p_0(·).
10.1. This exercise is related to Example 10.1.
(i) Verify that the locations of the sequence of digits formed either by the professor or by the student, as shown in Table 10.2, satisfy (10.1).
(ii)
9.28. Suppose that X_t is a spatial AR series satisfying X_{(t1,t2)} − 0.5X_{(t1−1,t2)} − 0.5X_{(t1,t2−1)} + 0.25X_{(t1−1,t2−1)} = W_{(t1,t2)}, t ∈ Z², where W_t is a spatial WN(0, σ²) series.
(i) What is the order p of the spatial AR model? What are the coefficients?
(ii) Verify that the minimum phase property (9.73) is satisfied.
(iii) Write out the Y-W equation (9.72) for the current model.
9.27. Suppose that W_t, t ∈ Z², satisfy (9.63) and E(|W_t|^p) < ∞ for some p > 1. Use the TMD extension of Burkholder's inequality [i.e., (9.70) and (9.71)] to establish the following SLLN:
(ii) Suppose that E(|W_0|^q) < ∞ for some q > 4. Use the above ergodic theorem and the facts in (i) to argue that lim sup_{|n|→∞} |n|⁻¹ Σ_{t̄ ≤ n} |W_t|^{4p} < ∞ a.s. for some p > 1.
(iii) Show that X_t = E{W_t⁴ | F_1(t−)}, t ∈ Z², is strictly stationary. [Hint: Suppose that E{W_0⁴ | F_1(0−)} = g(W_s, s <_1 0) a.s. for some function g, where s <_1 0 if and only if s_1 < 0 or s_1 = 0
(iii) Is there a general rule that you can draw from Example 9.7 and this exercise?
9.15. Regarding the diagonal method of rearranging a spatial series as a time series (see the second paragraph of Section 9.6), write the order of terms in the summation in (9.39) for the following:
(i) n_1 = n_2 = k, k = 3;
(ii) n_1 = n_2 = k, k = 4;
(iii) n_1 = 2k, n_2 = k, k = 3;
(iv) n_1 = 2k, n_2 = k, k = 4.
9.4 (Brownian motion and WN). Recall that a stochastic process B(t), t ≥ 0, is a Brownian motion if it satisfies (i) B(0) = 0, (ii) for any 0 ≤ s
9.3 (Poisson process and WN). A stochastic process P(t), t ≥ 0, is called a Poisson process if it satisfies the following: (i) P(0) = 0; (ii) for any 0 ≤ s
where λ is a positive constant; and (iii) the process has independent increments; that is, for any n > 1 and 0 ≤ t_0 < t_1 < ··· < t_n, the random variables P(t_j) − P(t_{j−1}), j = 1, ..., n,
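The connection between these axioms and i.i.d. exponential interarrival times can be illustrated numerically. The sketch below (illustrative only; λ = 2 and t = 3 are arbitrary choices) builds paths from Exponential(λ) interarrivals and checks that P(t) has mean and variance λt, as a Poisson(λt) count should.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, t, n_paths = 2.0, 3.0, 50_000

# Arrival times are cumulative sums of i.i.d. Exponential(lam) interarrivals;
# P(t) is the number of arrivals in (0, t].  40 interarrivals per path is
# far more than a Poisson(lam * t = 6) count will ever need here.
arrivals = np.cumsum(rng.exponential(1.0 / lam, size=(n_paths, 40)), axis=1)
counts = (arrivals <= t).sum(axis=1)

print(counts.mean(), counts.var())        # both should be close to lam * t = 6
```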
8.30. Let Z_0, Z_1, ... be independent N(0, 1) random variables. Find a suitable sequence of normalizing constants, a_n, such that a_n⁻¹ Σ_{i=1}^n Z_{i−1}Z_i →d N(0, 1), and justify your
often useful in establishing (8.31). Also, note the following facts (and you do need to verify them):
(i) For any M > 0, we have max_{1≤i≤n} |Z_{i−1}Z_i| ≤ 2M² + Σ_{i=0}^n Z_i² 1(|Z_i| > M).
(ii) max_{1≤i≤n} (Z_{i−1}Z_i)² ≤ Σ_{i=0}^n Z_i⁴.
8.26. This exercise is related to Example 8.14.
(i) Show that condition (8.25) implies (8.27) and (8.28) for every i ≥ 1.
(ii) Show that E(|X|) < ∞ implies (8.29).
(iii) Show that (8.27) and E(|X| log⁺|X|) < ∞ imply Σ_{i=1}^∞ i⁻¹ E{|X_i| 1(|X_i| > i)} < ∞.
(iv) Use the result of (iii) and Kronecker's lemma to show that n⁻¹ Σ_{i=1}^n Z_i → 0 a.s. and n⁻¹ Σ_{i=1}^n W_i → 0 a.s.; hence, (8.26) can be strengthened to a.s. convergence under the stronger
8.24. In this exercise you are asked to provide a proof for part (ii) of Theorem 8.5.
(i) Let Y_i = X_i/a_i, i ≥ 1. Show that E(Y_i² | F_{i−1}) ≤ b_i if E(|X_i|^p | F_{i−1}) ≤ a_i^p b_i^{p/2}
[Hint: First show that E(Y_i² | F_{i−1}) ≤ a_i⁻² {E(|X_i|^p | F_{i−1})}^{2/p}. In the case that E(|X_i|^p | F_{i−1}) > a_i^p b_i^{p/2}, write {E(|X_i|^p | F_{i−1})}^{2/p} as
(ii) Use a special case of Theorem 8.4 with p = 2 to complete the proof of part (ii) of Theorem 8.5.
8.23. In this exercise you have an opportunity to practice the stopping time technique that we used in Example 8.13 by giving a proof for part (i) of Theorem 8.5.
(i) Show that for any p ∈ (0, 1)
(ii) Use a similar stopping time technique as in Example 8.13 to show that E(Σ_{i=1}^{τ∧n} |X_i|/a_i) ≤ B.
(iii) Use Fatou's lemma to show that lim_{n→∞} Σ_{i=1}^{τ∧n} |X_i|/a_i < ∞ a.s., and hence Σ_{i=1}^∞ |X_i|/a_i < ∞ a.s. on {τ = ∞} = {Σ_{i=1}^∞ E(|X_i|^p | F_{i−1})/a_i^p ≤ B} for any B > 0.
(iv) Conclude that
Note that here it is not required that the X_i's are martingale differences.
8.18. Suppose that S_n, F_n, n ≥ 1, is a submartingale. Show that conditions (i) and (ii) are equivalent:
(i) condition (8.17) and E(|S_1|) < ∞;
(ii) condition (8.19).
8.13 (U-statistics). A sequence of random variables X_n, n ≥ 1, is said to be exchangeable if for any n > 1 and any permutation i_1, ..., i_n of 1, ..., n, (X_{i_1}, ..., X_{i_n}) has the same distribution as
1 ≤ j_1
Show that U_{m,n}, F_n, n ≤ N − m, is a martingale. [Hint: First show that E(U_{m,n} | G_{n+1}) = U_{m,n+1} a.s.]
8.14 (Record-breaking time). Let X_n, n ≥ 1, be a sequence of random variables. Define τ_1 = 1 and τ_{k+1}
8.11. A sequence of random variables X_n, n ≥ 1, is said to be m-dependent if for any n ≥ 1, σ(X_1, ..., X_n) and σ(X_{n+m+1}, ...) are independent. Suppose, in addition, that E(X_n) = 0, n ≥ 1.
S_n = E(Σ_{i=1}^{n+m} X_i | F_n), n ≥ 1. Show that S_n, F_n, n ≥ 1, is a martingale. Note that Example 8.2 is a special case of this exercise with m = 0.
8.5. Prove Lemma 8.3.
8.2. Show that if S_n, F_n, n ≥ 1, is a martingale according to the extended definition (8.2), then S_n, n ≥ 1, is a martingale according to (8.1). Give an example to show that the converse is not necessarily true.
8.1. This exercise is in connection with the opening problem on casino gambling (Section 8.1).
(i) Show that whenever the gambler wins, he recovers all his previous losses plus an additional $5.
(ii) Suppose that the maximum bet on the casino table is $500 and your initial bet is $5. How many consecutive times can you bet with the martingale strategy?
(iii) Use a computer to simulate 100 sequences of plays. Each play consists of a bet and a flip of a fair coin. You win if the coin lands heads, and you lose otherwise. Start with $5, then follow
(iv) Now, suppose the coin is biased so that the probability of landing heads is 0.4 instead of 0.5. What happens this time when you play the games in (iii)?
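Parts (ii)–(iv) can be sketched as below. This is an illustration only, assuming even-money payoffs and a reset-to-$5 rule when the cap blocks the next double; the 100-play session length and the seeds are arbitrary choices.

```python
import random

def play_session(p_win, start_bet=5.0, max_bet=500.0, rounds=100, seed=0):
    """Net winnings after `rounds` plays of the doubling (martingale) strategy."""
    rng = random.Random(seed)
    net, bet = 0.0, start_bet
    for _ in range(rounds):
        if rng.random() < p_win:          # win: recover losses plus start_bet
            net += bet
            bet = start_bet
        else:                             # loss: double, unless the cap blocks it
            net -= bet
            bet *= 2
            if bet > max_bet:
                bet = start_bet           # the streak's losses are locked in
    return net

# (ii) Doubling from $5 under a $500 cap: 5, 10, 20, 40, 80, 160, 320.
bet, streak = 5.0, 0
while bet <= 500.0:
    streak += 1
    bet *= 2
print(streak)                             # 7 consecutive bets

# (iii)/(iv) Average net over 100 simulated sessions, fair vs. biased coin.
fair = sum(play_session(0.5, seed=s) for s in range(100)) / 100
biased = sum(play_session(0.4, seed=s) for s in range(100)) / 100
print(fair, biased)
```

With the fair coin each bet has zero expectation, so the session averages hover around zero; with p = 0.4 every bet loses 20% of its stake in expectation, and the biased averages come out clearly negative.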
7.21. Give a specific example of a stationary ϕ-mixing (but not i.i.d.) sequence that satisfies the conditions of Theorem 7.17.
7.18. This exercise is regarding (7.33), which is a key condition in Lemma 7.1. Hoeffding (1956) originally required g(k) + g(k + 2) > 2g(k + 1), 0 ≤ k ≤ n − 2, (7.45) instead of (7.33) (also see Shorack and Wellner 1986, p. 805). Show by a simple argument that this
7.17. This exercise explores some properties of the function I_h defined by (7.29).
(i) Take h = 1. Show that I_1 is nondecreasing on (0, 1), I_1(λ) = 2λ² + O(λ³) as λ → 0, and I_1(λ) → ∞ as λ → 1.
(ii) Take h(t) = − log{t(1 − t)}. Show that I_h(λ) ∼ (eλ)²/8 as λ → 0 and I_h(λ) → ∞ as λ → 1.
(iii) Continue part (ii). Let t_λ be the value of t at which the infimum in (7.29) is attained. Find the limit of t_λ as λ → 0.
7.14. Verify the following properties for the function ψ defined by (7.22):
(a) ψ(u) is nonincreasing for u ≥ −1 with ψ(0) = 1;
(b) uψ(u) is nondecreasing for u ≥ −1;
(c) ψ(u) ∼ (2 log
(g) uψ(u) equals 0 and −2, respectively, for u = 0 and −1 and has derivative 1 at u = 0;
(h) for |u| < 1, we have the Taylor expansion ψ(u) = 1 − u/3 + u²/6 − u³/10 + ··· + (−1)^k 2u^k/{(k + 1)(k + 2)} + ···
7.13. This exercise is regarding Example 7.6 in Section 7.4.
(i) Show that the functional g(x) = ∫₀¹ x²(t) dt, x ∈ D, is continuous with respect to ‖·‖.
(ii) Derive (7.19).
7.11. Consider a one-sided Kolmogorov–Smirnov test for the null hypothesis (7.9), where F0 is a discrete distribution with the following jumps:

x:      1     2     3     4     5     6
F0(x):  0.033 0.600 0.833 0.933 0.961

1 given below (7.9), so the statistic D_n^− of (7.12) is considered.
(i) Show that for any λ > 0, lim_{n→∞} P(√n D_n^− > λ) = 1 − P(Z_1 ≤ λ, ..., Z_5 ≤ λ), (7.44) where (Z_1, ..., Z_5) has a
(ii) The observed value of √n D_n^− in Wood and Altavela (1978) was 1.095. For each sample size n, where n = 30, 100, and 200, generate 10,000 random vectors (Z_1, ..., Z_5) as above and evaluate the right side of (7.44) with λ = 1.095 by the Monte Carlo method.
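The statement of the limiting distribution of (Z_1, ..., Z_5) is cut off above. Purely to illustrate the Monte Carlo step in (ii), the sketch below assumes, hypothetically, that (Z_1, ..., Z_5) is multivariate normal with mean 0 and the Brownian-bridge covariance cov(Z_i, Z_j) = min(t_i, t_j) − t_i t_j at the jump values t_i = F0(x_i); when working the exercise, substitute the covariance actually specified in the text.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.array([0.033, 0.600, 0.833, 0.933, 0.961])   # F0 at the interior jumps
lam, n_reps = 1.095, 10_000

# Hypothetical limit: a Brownian bridge evaluated at t, i.e. N(0, Sigma) with
# Sigma[i, j] = min(t_i, t_j) - t_i * t_j.
Sigma = np.minimum.outer(t, t) - np.outer(t, t)
Z = rng.multivariate_normal(np.zeros(5), Sigma, size=n_reps)

# Right side of (7.44): 1 - P(Z_1 <= lam, ..., Z_5 <= lam), estimated by MC.
p_est = 1.0 - np.mean(np.all(Z <= lam, axis=1))
print(p_est)
```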
7.1. Use a computer to draw two realizations of X_1, ..., X_10 from the standard normal distribution and then plot the empirical d.f. (7.1) based on each realization of
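One way to carry this out is sketched below (illustrative only): the empirical d.f. of a sample of size 10 jumps by 1/10 at each order statistic and is right-continuous, so a step function through the sorted sample reproduces it.

```python
import numpy as np

rng = np.random.default_rng(5)

def ecdf(sample):
    """Return the empirical d.f. of `sample` as a right-continuous callable."""
    xs = np.sort(np.asarray(sample))
    n = len(xs)
    return lambda u: np.searchsorted(xs, u, side="right") / n

# Two realizations of X1, ..., X10 from the standard normal distribution.
samples = [rng.normal(size=10) for _ in range(2)]
F10 = ecdf(samples[0])
print(F10(-10.0), F10(samples[0].max()))  # 0.0 below the sample, 1.0 at the max

# To plot each realization, the usual step-function recipe is
# matplotlib.pyplot.step(np.sort(s), np.arange(1, 11) / 10, where="post").
```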
(i) Show that the left side of (6.86) is equal to (6.87) and that Σ_{i=1}^n c_{ni}² = 1.
(ii) Show that (6.88) holds for every λ ∈ R^p provided that (6.89) holds.
(iii) Show that in
Showing 1000 - 1100 of 3052