Questions and Answers of Principles of Managerial Statistics
Consider the MA(1) process of Example 16.8. Derive an expression for the autocovariance function, $C(u) = \mathrm{E}(X_{t+u}X_t^*)$, $u \in \mathbb{N}_0$.
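As a worked sketch (assuming the standard MA(1) form $X_t = \epsilon_t + \theta\epsilon_{t-1}$ with i.i.d., mean-zero, unit-variance $\epsilon_t$; Example 16.8's exact definition is not reproduced on this page), the autocovariance follows from the orthogonality of the $\epsilon_t$:

```latex
% Hedged sketch: assumes X_t = eps_t + theta * eps_{t-1}, with eps_t i.i.d.,
% E(eps_t) = 0 and E|eps_t|^2 = 1; eps_s, eps_t uncorrelated for s != t.
\begin{align*}
C(0) &= \mathrm{E}\,|\epsilon_t + \theta\epsilon_{t-1}|^2 = 1 + |\theta|^2, \\
C(1) &= \mathrm{E}\{(\epsilon_{t+1} + \theta\epsilon_t)(\epsilon_t + \theta\epsilon_{t-1})^*\}
      = \theta\,\mathrm{E}\,|\epsilon_t|^2 = \theta, \\
C(u) &= 0, \qquad u \ge 2.
\end{align*}
```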
Suppose that $X_t$, $t \in \mathbb{Z}$, are i.i.d. and real-valued, whose components are also i.i.d. with mean 0, variance 1, and finite fourth moment. Show that this is a special case of the linear process …
Prove part (ii) of Theorem 16.14. You may use the result of part (i), as well as arguments similar to those used in proving part (i).
Verify identities (16.68) and (16.69).
Derive (16.67) using arguments similar to those leading to (16.66). Also establish the following results [see below (16.67)]: (i) $|\mathrm{tr}(\Sigma_{-k}^{-1}) - \mathrm{tr}(\Sigma^{-1})| \le 1$; (ii) $|\mathrm{tr}(V_k) - p\,\mathrm{tr}(\Sigma^{-1}\dots)|$ …
Derive (16.63) and (16.64) using appropriate result(s) in §16.2.
This exercise involves some of the details in the sketches of the proof of Theorem 16.14 outlined below the theorem. (a) Show that $c_j = O_P(n)$, $j = 0, 1$. (b) Verify the expression $\mathrm{var}(\Delta \mid Z) = \dots$
Show that, in the special case of the LMM (16.50), the REML equations (12.9) are equivalent to (16.52) and (16.53).
Derive the results of Lemma 16.3.
Use Theorem 16.5 and (ii) of Corollary 16.2 to prove (16.44).
Regarding the OPR of §16.4, show that if $p$ is fixed and $\liminf_{n\to\infty}\lambda_{\min}(X'X/n) > 0$ a.s., then, as $n \to \infty$, we have $R_X(\hat{\beta} \mid \beta) \xrightarrow{\mathrm{a.s.}} 0$.
Verify the second equality in (16.39). Also verify the bias–variance decomposition (16.40) (the first equality). Finally, verify expressions (16.41) when $\hat{\beta}$ is the LS estimator.
Show that, in the problem of testing the hypothesis $H_0: \Sigma = I_p$, discussed in Section 16.3, the likelihood ratio can be expressed as (16.35), where $A = \sum_{i=1}^n (X_i - \bar{X})(X_i - \bar{X})'$ with $\bar{X} = n^{-1}\sum_{i=1}^n X_i$.
This exercise is related to the phase-transition limits of (16.33). (a) Let $\psi(x) = x\{1 + \gamma(x-1)^{-1}\}$. Show that $\psi(x)$ is strictly increasing for $x > 1 + \sqrt{\gamma}$. (b) Show that for the top eigenvalues …
This exercise has to do with whether convergence of all moments, that is, (16.29), implies convergence in distribution. (a) Find a counterexample such that a sequence of pdf's, $f_n$, $n = 1, 2, \dots$ …
Show that the covariance function in Theorem 16.8, (16.28), has an alternative expression: …, where $\psi(x, y) = (\sigma^2 - 2)q_{x,y} + 2\{\log(4 - xy + q_{x,y}) - \log(4 - xy - q_{x,y})\}$ with $q_{x,y} = \sqrt{(4 - x^2)(4 - y^2)}$.
Consider Example 16.4. Show that the likelihood-ratio statistic, $-2\log(\Lambda)$, where $\Lambda$ is the likelihood ratio (that is, the maximized likelihood function under the null hypothesis divided by the …
Referring, again, to Theorem 16.7, suppose that $p = \gamma n + c n^{1-\delta}$, where $c, \delta$ are constants and $\delta > 0$. Determine the constants $a, \rho > 0$ such that for any $\epsilon > 0$, there is a constant $b \in (0, \infty)$ …
Explain the centralization constant in Theorem 16.7, that is, $\mu_{n,p}$ of (16.19), in view of Theorem 16.5. Does it make sense, and why?
This exercise is related to the proof of Corollary 16.2. You may use the results of Corollary 16.1 and Theorem 16.5, as well as §2.7.12. (a) Prove (i) of Corollary 16.2, including deriving the …
This exercise is associated with Theorem 16.3 regarding the LSD of the Fisher matrix. As in the previous exercise, you are asked to carry out a simulation study and compare numerically the ESD of …
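A minimal simulation sketch for the ESD of a Fisher matrix (the sample sizes, dimension, and exact definition used in Theorem 16.3 are not reproduced on this page, so the setup below, $F = S_1 S_2^{-1}$ built from two independent standard normal samples, is an assumption):

```python
# Hedged sketch: ESD of an (assumed) Fisher matrix F = S1 * S2^{-1}.
import numpy as np

def fisher_eigenvalues(n1, n2, p, rng):
    X = rng.standard_normal((p, n1))          # first sample, N(0, I_p)
    Y = rng.standard_normal((p, n2))          # second, independent sample
    S1 = X @ X.T / n1                         # the two sample covariance matrices
    S2 = Y @ Y.T / n2
    # eigenvalues of S2^{-1} S1, which equal those of S1 S2^{-1}
    return np.linalg.eigvals(np.linalg.solve(S2, S1)).real

rng = np.random.default_rng(1)
eig = fisher_eigenvalues(n1=200, n2=400, p=100, rng=rng)
hist, edges = np.histogram(eig, bins=30, density=True)  # ESD, to overlay on the LSD
print(edges[:3], hist[:3])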
This exercise has to do with the Marčenko–Pastur law of Theorem 16.2. You are asked to run a simulation study to numerically verify the result of the theorem. Consider n = 10 and n = 100, and p = …
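A minimal sketch of such a check (the $p$ values are cut off above, so the ratio $p/n = 1/2$ below is an assumption); it compares a histogram of the eigenvalues of $S_n$ with the Marčenko–Pastur density:

```python
# Hedged sketch: ESD of the sample covariance matrix vs. the MP density.
import numpy as np

def mp_density(x, gamma):
    a, b = (1 - np.sqrt(gamma))**2, (1 + np.sqrt(gamma))**2   # support endpoints
    return np.where((x > a) & (x < b),
                    np.sqrt(np.maximum((b - x) * (x - a), 0)) / (2 * np.pi * gamma * x),
                    0.0)

rng = np.random.default_rng(0)
for n in (10, 100):
    p = n // 2                                 # assumed ratio gamma = p/n = 0.5
    X = rng.standard_normal((p, n))
    S = X @ X.T / n                            # sample covariance matrix, known mean 0
    eig = np.linalg.eigvalsh(S)
    hist, edges = np.histogram(eig, bins=20, density=True)
    mid = (edges[:-1] + edges[1:]) / 2
    print(n, np.abs(hist - mp_density(mid, p / n)).max())  # ESD vs. MP density
```

The discrepancy should shrink visibly from n = 10 to n = 100, illustrating the almost sure convergence of the ESD.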
Consider the Wigner matrix $W_p$ in Exercise 16.3. Let $\lambda_1, \dots, \lambda_p$ be the eigenvalues of $W_p$. Find a positive number, $a$, such that $p^{-a}\sum_{j=1}^p \lambda_j$ converges almost surely to a limit, and determine …
Consider the sample covariance matrix $S_n$ defined by (16.4), where $X_1, \dots, X_n$ are independent $N(\mu, I_p)$ and $\mu$ is a $p$-dimensional mean vector. Show that, when $p$ is fixed, we have $S_n \xrightarrow{P} I_p$.
Referring to Example 16.3, verify that the second equation in (12.9) can be written as (16.8). Also verify that (16.7) and (16.8) are equivalent to (16.9).
Referring to Example 16.1, show that $T_n \xrightarrow{\mathrm{a.s.}} 0$ and $(n/p)^{1/2}T_n \xrightarrow{d} N(0, 2)$.
This exercise is related to Example 15.10. (i) Verify expression (15.42).
This exercise is related to the maximum correlation introduced in Section 15.5 [see (15.34)]. (i) Verify the alternative expression (15.35). (ii) Derive the identity (15.36).
Recall the definition of F below (15.24). Show that F ≤ 1.
Continue with the previous exercise. (i) Verify expressions (15.26) and (15.27). (ii) Verify the identity (15.29).
This exercise is regarding the measure $d_f$ defined by the square root of (15.25). (i) Show that $d_f$ is not a distance [the definition of a distance is given by requirements 1–4 below (6.63)]. (ii) …
Recall the open question asked at the end of Section 15.4 regarding the asymptotic distribution of the MLE in GLMM with crossed random effects. Consider the very simple example given there with only one …
Specify the right side of (15.23) for the mixed logistic model of Example 15.9. Also specify the importance weights $w_{kl}$ by replacing $f\{y \mid \psi^{(k)}\}$ by 1, which does not affect the result of the M-step.
Regarding the M-H algorithm described below (15.20), show that the acceptance probability of the M-H algorithm [i.e., (15.12)] can be expressed as (15.21).
Verify that, under the Gaussian mixed model (15.15), the log-likelihood function can be expressed as (15.16), where $c$ does not depend on the data or parameters. Also verify the expressions of the …
This exercise is related to some of the details of the rejection sampling chain of Example 15.8 as a special case of the M-H algorithm. (i) Show that the rejection sampling scheme described above …
Show that the M-H algorithm, described below (15.12), generates a Markov chain $X_t$ whose transition kernel is given by (15.10) with $\alpha$ defined by (15.12) if $f(x)q(x, y) > 0$, and $\alpha = 1$ otherwise.
Show that (15.11) implies that f is the stationary distribution with respect to K.
This exercise provides a very simple special case of the Gibbs sampler that was considered by Casella and George (1992). Let the state-space be $S = \{0, 1\}^2$. A probability distribution $\pi$ defined over …
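A minimal sketch of such a two-component Gibbs sampler (the joint pmf $\pi$ below is an arbitrary illustrative choice, since the one in the exercise is cut off above):

```python
# Hedged sketch: Gibbs sampling on S = {0,1}^2 with an assumed joint pmf pi.
import numpy as np

pi = np.array([[0.3, 0.2],     # pi[x1, x2]; illustrative values summing to 1
               [0.1, 0.4]])

rng = np.random.default_rng(0)
x1, x2 = 0, 0
counts = np.zeros((2, 2))
for _ in range(100_000):
    x1 = int(rng.random() < pi[1, x2] / pi[:, x2].sum())  # draw X1 | X2 = x2
    x2 = int(rng.random() < pi[x1, 1] / pi[x1, :].sum())  # draw X2 | X1 = x1
    counts[x1, x2] += 1

print(counts / counts.sum())   # long-run frequencies should approximate pi
```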
Verify the expressions for the conditional density of $X_1$ given $X_2 = x_2$ and $X_3 = x_3$, as well as the joint density of $X_1, X_2$, in Example 15.5.
Consider the bivariate-normal Gibbs sampler of Example 15.4. (i) Show that the marginal chain $X_t$ has transition kernel … (ii) Show that the stationary distribution of the chain $X_t$ is $N(0, 1)$. (iii) Show …
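A minimal sketch of the sampler (the correlation $\rho = 0.5$ is an assumption; the value used in Example 15.4 is not shown on this page). For a standard bivariate normal target, $X \mid Y = y \sim N(\rho y, 1 - \rho^2)$, and symmetrically for $Y \mid X$:

```python
# Hedged sketch: bivariate-normal Gibbs sampler with assumed rho = 0.5.
import numpy as np

rho = 0.5
rng = np.random.default_rng(0)
x, y = 0.0, 0.0
xs = []
for _ in range(50_000):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))   # draw X | Y = y
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))   # draw Y | X = x
    xs.append(x)

xs = np.array(xs[1000:])          # drop burn-in
print(xs.mean(), xs.var())        # approx 0 and 1: the N(0, 1) stationary marginal
```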
Regarding Examples 15.2 and 15.3, show the following: (i) $f_X(x)$, or $\pi_X(B) = \int_B f_X(x)\,dx$, is a stationary distribution for the Markov chain $X_t$. (ii) $f_Y(y)$, or $\pi_Y(B) = \int_B f_Y(y)\,dy$, is a stationary …
This exercise is regarding the Markovian properties of the Gibbs sampler for the special case described above (15.5). (i) Show that $Y_t$ is a Markov chain and derive its transition kernel. (ii) Show that …
Regarding rejection sampling, introduced in Section 15.1, show that the pdf of the drawn $\theta$, conditional on $u \le p(\theta)/bq(\theta)$, is $p(\theta)$. (Hint: use Bayes' theorem; see Appendix A.3.)
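A minimal sketch in the exercise's notation $u \le p(\theta)/bq(\theta)$ (the target $p$ taken as the Beta(2, 4) density and the uniform proposal $q \equiv 1$ on [0, 1] are illustrative assumptions):

```python
# Hedged sketch: rejection sampling with target p = Beta(2,4), proposal q = Uniform(0,1).
import numpy as np

def p(t):                          # Beta(2, 4) pdf: 20 t (1 - t)^3 on [0, 1]
    return 20 * t * (1 - t)**3

b = 2.2                            # bound so that p(theta) <= b * q(theta); max p ~ 2.11
rng = np.random.default_rng(0)
draws = []
while len(draws) < 10_000:
    theta = rng.random()           # draw theta ~ q
    u = rng.random()               # draw u ~ Uniform(0, 1)
    if u <= p(theta) / (b * 1.0):  # accept iff u <= p(theta) / {b q(theta)}
        draws.append(theta)

print(np.mean(draws))              # approx E(theta) = 1/3 under Beta(2, 4)
```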
Now, let us consider a discrete situation. Suppose that the observation $y$ follows a Poisson distribution with mean $\theta$. Furthermore, the prior for $\theta$ is proportional to $\exp(\nu\log\theta - \eta\theta)$; that …
This exercise is in every book, chapter, or section about Bayesian inference. Suppose that the distribution of $y$ depends on a single parameter, $\theta$. The conditional distribution of the observation $y$ …
Using the simple Monte Carlo method based on the SLLN [i.e., (15.2)], numerically evaluate $\rho_w$, $\rho_s$, and the asymptotic significance level in Table 11.2 for $\alpha = 0.05$ for the case $F = t_3$. Try n = 1000 …
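A minimal sketch of the SLLN-based Monte Carlo mechanics (the specific functionals $\rho_w$ and $\rho_s$ of Table 11.2 are not reproduced here, so the tail-probability functional below is only a stand-in):

```python
# Hedged sketch: simple Monte Carlo via the SLLN, (1/n) sum h(X_i) -> E h(X) a.s.
import numpy as np

rng = np.random.default_rng(0)
for n in (1000, 10_000):
    x = rng.standard_t(df=3, size=n)   # i.i.d. draws from F = t_3
    # stand-in functional h(x) = 1{|x| > 1.96}; replace h with the integrand
    # defining rho_w, rho_s, or the significance level from Table 11.2
    print(n, np.mean(np.abs(x) > 1.96))
```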
This exercise is related to Example 14.11. (i) Simplify the expressions for $g_1(\theta)$ and $g_2(\theta)$. (ii) It is conjectured that $g_3$ can be expressed as $s + o(m_*^{-1})$, where $s$ depends only on the second and …
Consider a special case of the nested error regression model (14.85) with $x_{ij}'\beta = \mu$ and $n_i = k$, $1 \le i \le m$, where $\mu$ is an unknown mean and $k \ge 2$. Suppose that the random effects $v_i$ are i.i.d. …
Show that (14.80) and $d^2/n \to 0$ imply (14.81).
Show that, in Example 14.10, the coverage probability of $\tilde{I}_{2,i}$ is $1 - \alpha + o(1)$; that is, as $n \to \infty$, $P(\theta_i \in \tilde{I}_{2,i}) = 1 - \alpha + o(1)$.
This exercise is regarding the special case of block-bootstrapping the sample mean, discussed in Section 14.5. (i) Show that the influence function IF defined in (A2) [above (14.72)] is given by $\mathrm{IF}(x, \dots)$ …
Interpret expression (14.72) of the asymptotic variance $\sigma^2$.
Verify the two equivalent expressions of (14.69), that is, (14.70) and (14.71).
This exercise has three parts. (i) Interpret the expression of the asymptotic variance $\tau^2$ in (14.66) given below the equation. (ii) Show that (14.67) is equivalent to … for every $x$. (iii) Interpret …
Is the plug-in principle used in the sieve bootstrap [see Section 14.5, below (14.64)] the same as Efron’s plug-in principle [see Section 14.5, below (14.63)]? Why?
Show that the coefficients $\phi_j$, $j = 0, 1, \dots$, in (14.64) are functions of $F$, the joint distribution of the $X$'s and $\epsilon$'s. (You may impose some regularity conditions, if necessary.)
Regarding the plug-in principle of bootstrap summarized below (14.63), what are $X$, $F$, and $R(X, F)$ for bootstrapping under the dynamic model (14.60)? What are $X^*$, $\hat{F}$, and $R(X^*, \hat{F})$ in this case?
In this exercise you are encouraged to study the large-sample behavior of the bootstrap through some simulation studies. Two cases will be considered, as follows. In each case, consider n = 50, 100, …
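A minimal sketch of such a simulation (the exercise's two specific cases are cut off above; bootstrapping the sample mean of i.i.d. Exponential(1) data is an illustrative assumption):

```python
# Hedged sketch: bootstrap SE of the sample mean vs. the plug-in SE s/sqrt(n).
import numpy as np

rng = np.random.default_rng(0)
for n in (50, 100):
    x = rng.exponential(1.0, size=n)              # one observed sample
    boot = np.empty(2000)
    for b in range(2000):
        xb = rng.choice(x, size=n, replace=True)  # resample with replacement
        boot[b] = xb.mean()
    print(n, boot.std(ddof=1), x.std(ddof=1) / np.sqrt(n))
```

Repeating this over many simulated samples and larger n lets you examine how the bootstrap distribution tracks the true sampling distribution as n grows.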
Continue with the previous exercise. (i) Give an example of a sequence $a_n$, $n \ge 0$, of positive integers that is strictly increasing and satisfies (14.58) for every $k \ge 0$. (ii) Show that for every $k$ …
This exercise is related to some of the details in Example 14.9. (i) Show that $\xi_n = n\{\theta - X_{(n)}\}/\theta$ converges weakly to Exponential(1). (ii) Show that for any $k \ge 1$, … (iii) Show that (14.45) and …
Let $X_1, \dots, X_n$ be i.i.d. with cdf $F$ and pdf $f$. For any $0 < p < 1$, let $\nu_p$ be such that $F(\nu_p) = p$. Suppose that $f(\nu_p) > 0$. Show that for any sequence $m = m_n$ such that $m/n = p + o(n^{-1/2})$, we …
This exercise involves some of the details in Example 14.8. (i) Show that the functional $\theta$ is a special case of (14.48). (ii) Verify that $\theta(F_n) = n^{-2}\sum_{i=1}^n\sum_{j=1}^n 1_{(X_i + X_j > 0)}$, which is a V-statistic …
Show that, with the functional $h$ defined by (14.48), the derivative $\psi$ is given by (14.49). [Hint: Use (14.47). You do not need to justify it rigorously.]
Regarding Remark 1 below (14.32), show the following: (i) That the M-estimators $\hat{\psi}_{-i}$, $0 \le i \le m$, are c.u. $\mu_\omega$ at rate $m^{-d}$ implies that they are c.u. at rate $m^{-d}$. (ii) Conversely, if …
Prove Theorem 14.2. [Hint: Consider a neighborhood of $\psi$ and show that, with high probability, the values of the function $l$ at the boundary of the neighborhood are greater than the value at the center.]
Regarding Example 14.7, show that the REML estimator of $\psi$ satisfies (14.29) (according to the definition under non-Gaussian linear mixed models; see Section 12.2), where the $f_j$'s are the same as …
Regarding Example 14.6, show that the MLE of $\psi$ satisfies (14.29) (under regularity conditions that you may need to specify) with $a(\psi) = 0$ and $f_j(\psi, y_j)$, $1 \le j \le p + q$, specified in the …
This exercise involves some details in Example 14.5. (i) Show that $\mathrm{MSPE}(\tilde{\zeta}) = 1 - B$. (ii) Show that $\mathrm{MSPE}(\hat{\zeta}) = 1 - B + 2B/m + o(m^{-1})$. (iii) Show that $\mathrm{E}(\hat{\zeta}^*_{-1} - \hat{\zeta})^2 = A(1 - B) + \dots$
Show that, in Example 14.4 [continued following (14.22)], the MSPE of $\tilde{\zeta}$ is equal to $\sigma_v^2\sigma_e^2/(\sigma_e^2 + n_i\sigma_v^2)$, which is the same as $\mathrm{var}(\zeta \mid y_i)$.
This exercise involves some further details regarding the outline of the proof of Theorem 14.1. (i) Show that $w_s \le 1$, $s \in S_r$. (ii) Use the technique of the unspecified $c$ (see Section 3.5) to show that …
Regarding the outline of the proof of Theorem 14.1 (near the end of Section 14.2), show the following: …
Show that when $d = 1$, (14.12) reduces to (14.16), which is the weighted delete-1 jackknife estimator of $\mathrm{Var}(\hat{\beta})$.
Verify the representation (14.13) and show that $\hat{\beta}_{ij}$ is the OLS estimator of $\beta$ based on the following regression model: $y_i = \alpha + \beta x_i + e_i$, $y_j = \alpha + \beta x_j + e_j$.
Consider the example of the sample mean discussed below (14.1). Show that, in this case, the right side of (14.8) is equal to $s^2/n$, where $s^2 = (n - 1)^{-1}\sum_{i=1}^n (X_i - \bar{X})^2$ is the sample variance.
Continue with the previous exercise. Suppose that $X_i$ has the Uniform[0, 1] distribution. Show that $X_{(i)} \sim \mathrm{Beta}(i, n - i + 1)$, $1 \le i \le n$, and therefore obtain the mean and variance of $R$ for n = …
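A minimal numerical check of $X_{(i)} \sim \mathrm{Beta}(i, n - i + 1)$ (the value of $n$ in the exercise is cut off above, so $n = 11$ and the median index $i = 6$ are illustrative assumptions):

```python
# Hedged sketch: compare simulated moments of an order statistic with Beta moments.
import numpy as np

rng = np.random.default_rng(0)
n, i = 11, 6
sims = np.sort(rng.random((20_000, n)), axis=1)[:, i - 1]  # i-th order statistic
mean_beta = i / (n + 1)                                    # Beta(i, n-i+1) mean
var_beta = i * (n - i + 1) / ((n + 1)**2 * (n + 2))        # Beta(i, n-i+1) variance
print(sims.mean(), mean_beta)
print(sims.var(), var_beta)
```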
Let $X_1, \dots, X_n$ be an i.i.d. sample from a distribution with cdf $F$ and pdf $f$, and let $X_{(1)} \le \cdots \le X_{(n)}$ denote the order statistics. … Using these results, derive the cdf and pdf of $R$ in Example 14.2, assuming $n = 2m - 1$ for simplicity.
Show that the estimator given by (13.104) is design-unbiased for $\mu_i^2$; that is, $E_d(\hat{\mu}_i^2) = \mu_i^2$. [Hint: Use the index set $I_i$; see a derivation below (13.103).]
Derive the expression of the MSPE (13.100) in the case that $A$ is unknown. [Hint: First, note that $\tilde{\zeta}_M = y - R(y - X\beta)$. Also note that $\mathrm{E}(e'Ry) = \mathrm{E}\{e'R(\mu + v + e)\} = \mathrm{E}(e'Re) = \mathrm{tr}(RD)$.]
Consider Example 13.4. (i) Recall that the best model in terms of the BP is the one that maximizes $C_1(M)$, defined below (13.95). Show that … Thus, the best model under the BP is $M_1$, because it has the same …
Continue with the previous exercise. Show that if there is a true model $M \in \mathcal{M}$, then a true model with minimal $p$ is the optimal model under both the GIC and BP.
This exercise involves some details in Section 13.5. (i) Verify that the likelihood function under $M$ is given by (13.89). (ii) Verify (13.93). (iii) Show that the BP of $\zeta$ under $M$, in the sense of …
This exercise is related to the arguments in Section 13.4 that show $d^2 \to \infty$. (i) Show that (13.84) holds for some constant $c > 0$ and determine this constant. (ii) Show that $\|WBW\|_2 \le (\max_{1 \le i \le m} \dots)$ …
Establish inequality (13.83).
Establish the following inequalities. (i) $\|V^{-1}X(X'V^{-1}X)^{-1}X'V^{-1}\|_2^2 \le \|V^{-1}\|_2^2(p + 1)$. (ii) $\|Z'V^{-1}X(X'V^{-1}X)^{-1}X'V^{-1}Z\|_2^2 \le \|Z'V^{-1}Z\|_2^2(p + 1)$.
This exercise is related to the quadratic spline with two knots in Example 13.1. (i) Plot the quadratic spline. (ii) Show that the function is smooth in that it has a continuous derivative on [0, 3], …
Show that the minimizer of (13.71) is the same as the best linear unbiased estimator (BLUE) for $\beta$ and the best linear unbiased predictor (BLUP) for $\gamma$ in the linear mixed model $y = X\beta + Z\gamma + \epsilon$, …
Show that, under the hierarchical Bayes model near the end of Section 13.3, the conditional distribution of $\theta_i$ given $A$ and $y$ is normal with mean equal to the right side of (13.31) with $\hat{A}$ replaced …
Verify (13.60), that is, the expression of (13.59) when $\hat{\theta}_i$ is replaced by $y_i$.
This exercise involves some calculus derivations. (i) Verify expressions (13.55) and (13.56). (ii) Verify expression (13.57). (iii) Obtain an expression for $\partial\hat{A}_{\mathrm{FH}}/\partial y_i$. You may use the well-known …
Show that, in the balanced case (i.e., $D_i = D$, $1 \le i \le m$), the P-R, REML, and F-H estimators of $A$ in the Fay–Herriot model are identical (provided that the estimator is nonnegative), whereas …
Show that the right side of (13.40) $\le$ the right side of (13.41) $\le$ the right side of (13.39) for any $A \ge 0$.
Show that the estimators $\hat{A}_{\mathrm{PR}}$, $\hat{A}_{\mathrm{ML}}$, $\hat{A}_{\mathrm{RE}}$, and $\hat{A}_{\mathrm{FH}}$ in Section 13.3 possess the following properties: (i) They are even functions of the data; that is, the estimators are unchanged when $y$ is …
Here is a more challenging exercise than the previous one: show that the Fay–Herriot estimator $\hat{A}_{\mathrm{FH}}$, defined as the solution to (13.33), is $\sqrt{m}$-consistent.
Show that the estimator $\hat{A}_{\mathrm{PR}}$ defined by (13.32) is $\sqrt{m}$-consistent.
Show that (13.33) is unbiased in the sense that the expectation of the left side is equal to the right side if $A$ is the true variance of the random effects. Also verify (13.34).
Show, by formal derivation, that the estimator (13.30) satisfies (13.22). [Hint: You may use the facts that $\mathrm{E}\{c_i(\hat{\theta}) - c_i(\theta)\} = o(1)$ and $\mathrm{E}\{B_i(\hat{\theta}) - B_i(\theta)\} = o(1)$.]
Verify expression (13.11), where $\tilde{\alpha}_i = \mathrm{E}(\alpha_i \mid y)$ has expression (13.4).
Verify (13.8).
Show that the last term on the right side of (13.7) has the order $O_P(n_i^{-1/2})$. You may recall the argument used to show a similar property of the MLE (see Section 4.7).
Showing 1 - 100 of 3052 questions.