Questions and Answers of
Principles Of Managerial Statistics
Verify the limiting behavior (iv) below (13.4) and also (13.5). [Hint: The following formulas might be useful. For 1 ≤ k ≤ n − 1, we have ∫₀^∞ x^(k−1)/(1 + x)^n dx = (k−1)!(n−k−1)!/(n−1)! …
Verify the limiting behaviors (i)–(iii) below (13.4).
This exercise is associated with the mixed logistic model of (13.1). (i) Show that E(αi | y) = E(αi | yi), where yi = (yij)_{1≤j≤ni}. (ii) Verify (13.3) for α̃i = E(αi | yi).
This exercise is related to Example 12.13, where two subsets of independent data are considered. The first subset is similar to that used in the proof of Theorem 12.3; the second subset corresponds
This exercise is related to Example 2.9. (i) Derive a detailed expression of zi(yi, ϑ). (ii) Verify the expression for the covariance matrix of zi(yi, ϑ); that is, Varϑ{zi(yi, ϑ)} = diag{xi xi′/(A
Show that the right side of (12.78) is O(m). In fact, in this case, it can be shown that the order is O(1) when M is true. [Hint: The most challenging part is to evaluate Σ_{i,j} a²_{ij} = tr(A²). Note that
This exercise involves some details regarding Example 12.16. (i) Show that P_{Xf} = m⁻¹Jm ⊗ I2. (ii) Show that P_X = (2m)⁻¹Jm ⊗ J2 for X corresponding to model I and P_X = m⁻¹Jm ⊗ (δ2δ2′) for X
This exercise involves some of the details in the derivations following (12.70). (i) Show that (P_{Xf} − P_X)X = 0. (ii) Show that P_{Xf} − P_X is idempotent; that is, (P_{Xf} − P_X)² = P_{Xf} − P_X.
Show that the RSS measure defined in Example 12.15 is a measure of lackof-fit according to the definition above Example 12.14.
Show that the negative log-likelihood measure defined in Example 12.14 is a measure of lack-of-fit according to the definition above Example 12.14.
Consider a special case of Example 12.14 with ni = k, 1 ≤ i ≤ m, where k ≥ 2. Show that in this case, the estimating equation of Jiang (1998a), which is (12.63) with B = diag(1, 1m), is
Show that for any matrices B, U, and V such that V > 0 (positive definite), U is of full rank, and BU is square and nonsingular, we have … (i.e., the difference of the two sides is nonnegative definite).
Show that under a GLMM, E(yi²) depends on φ, the dispersion parameter in (12.4), but E(yi yi′) does not depend on φ if i ≠ i′.
This exercise is related to Example 12.11. (i) Show that (12.53) has a unique solution for u when everything else is fixed. (ii) Show that the PQL estimator of β satisfies (12.54). (iii) Derive
Verify (12.49) and obtain an expression for r. Show that r has expectation 0.
Show that the likelihood function in Example 12.10 for estimating ψ can be expressed as (12.46).
Show that the number of coefficients a_st in the bivariate polynomial (12.32) is N_M = 1 + M(M + 3)/2.
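A quick counting sketch (it assumes (12.32) collects all monomials x^s y^t of total degree at most M, the usual convention for a bivariate polynomial of degree M):

```latex
% For each total degree d = 0, 1, ..., M there are d + 1 monomials x^s y^t with s + t = d.
N_M = \sum_{d=0}^{M} (d + 1) = \frac{(M+1)(M+2)}{2} = 1 + \frac{M(M+3)}{2}.
```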
This exercise is regarding the projection method that begins with the identity (12.28). (i) Show that E(Σ_{i=1}^{m1} Σ_{j=1}^{m2} δ1,ijk)² = O(m1m2). [Hint: Note that, given u and v, the δ1,ijk's are
The additional assumption in part (ii) of Theorem 12.1 that the random effects and errors are nondegenerate is necessary for the asymptotic normality of the REML estimators. To see a simple example,
Show that the components of a_n in (12.15) [defined below (12.12)] are quadratic forms of the random effects and errors.
This exercise is concerned with some of the arguments used in the proof of Theorem 12.1. (i) Show that (12.14) holds under (12.10)–(12.12), where θn is defined by (12.13) and rn is uniformly
Consider the following linear mixed models: yijk = βi + αj + γij + εijk, i = 1, . . . , a, j = 1, . . . , b, k = 1, . . . , c, where the βi's are fixed effects, the αj's are random effects, the
Show that the REML estimators of the variance components (see Section 12.2) do not depend on the choice of A. This means that if B is another n × (n − p) matrix of full rank such that B′X = 0, the
Show that, in Example 12.5, the MLE of σ² is given by (12.7), and the REML estimator of σ² is given by (12.8).
Suppose that, given a vector of random effects, α, the observations y1, . . . , yn are (conditionally) independent such that yi ∼ N(xi′β + zi′α, τ²), where xi and zi are known vectors, β is an
Give an example of a special case of the longitudinal model that is not a special case of the mixed ANOVA model.
(Two-way random effects model). For simplicity, let us consider the case of one observation per cell. In this case, the observations yij, i = 1, . . . , m, j = 1, . . . , k, satisfy yij = μ + ui + vj
Show that, in Example 12.1, the correlation between any two observations from the same individual is σ²/(σ² + τ²), whereas observations from different individuals are uncorrelated.
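For intuition, a sketch assuming the usual one-way random effects form of Example 12.1, yij = μ + αi + εij with αi ∼ (0, σ²) and εij ∼ (0, τ²) all independent:

```latex
% Two observations on individual i share only the random effect \alpha_i:
\operatorname{cov}(y_{ij}, y_{ik}) = \operatorname{var}(\alpha_i) = \sigma^2 \ (j \neq k),
\qquad \operatorname{var}(y_{ij}) = \sigma^2 + \tau^2,
% hence \operatorname{corr}(y_{ij}, y_{ik}) = \sigma^2/(\sigma^2 + \tau^2).
```

Observations on different individuals share no random terms, so their covariance is zero.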
This exercise is related to Example 11.11 at the end of the chapter. (i) Verify the calculations of σ̂², ĥ0, γ̂3, γ̂4, and ĥ in the example. (ii) Simulate a larger data set, say, with n =
Regarding the parameter θ2 defined below (11.91), show that θ2 = 3/(8√π σ⁵) if f is the pdf of N(μ, σ²).
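Assuming θ2 is the roughness functional ∫{f″(x)}² dx that appears in bandwidth selection (the standard definition in this context), the normal case is a direct computation:

```latex
f''(x) = \frac{(x-\mu)^2 - \sigma^2}{\sigma^4}\, f(x), \qquad
f(x)^2 = \frac{1}{2\sigma\sqrt{\pi}}\, g(x),
\quad g = \text{pdf of } N(\mu,\, \sigma^2/2),
```

so with Z ∼ N(0, σ²/2), using E(Z²) = σ²/2 and E(Z⁴) = 3σ⁴/4, we get E(Z² − σ²)² = 3σ⁴/4 and hence θ2 = σ⁻⁸(2σ√π)⁻¹(3σ⁴/4) = 3/(8√π σ⁵).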
(i) Verify (11.88). (ii) Show that the right side of (11.89) is minimized when h is given by (11.90).
Give a proof of Theorem 11.6. As mentioned, the proof is based on the Taylor expansion. The details can be found in Lehmann’s book but you are encouraged to explore without looking at the book (or
This exercise is related to the expression of the histogram (see Section 11.6). (i) Show that (11.82) holds provided that F is twice continuously differentiable. (ii) Show that the limit (11.81) is
Consider the U-statistic associated with the Wilcoxon two-sample test (see the discussion at the end of Section 11.5). (i) Verify (11.80). (ii) Show that under the null hypothesis F = G, we have σ10 =
This exercise involves some details regarding the proof of Theorem 11.4 at the end of Section 11.5. (i) Show that both var(ζ*N) and cov(ζN, ζ*N) converge to the right side of (11.78) with a = b
Consider once again the problem of testing for the center of symmetry. More specifically, refer to the continuing discussion near the end of Section 11.5. (i) Verify (11.70) for 1 ≤ j ≤ k ≤ 3
Verify the numerical inequality (11.75) for any x, y, a ≥ 0.
Show that (11.68) holds as n → ∞.
Verify the following. (i) The martingale property (11.61). (ii) The expression (11.57), considered as a sequence of random variables, Un, Fn = σ(X1, . . . , Xn), n ≥ m, is a martingale. (iii) The
This exercise is concerned with moment properties of Snc, 1 ≤ c ≤ m, that are involved in the Hoeffding representation (11.57). (i) Show that E(Snc) = 0, 1 ≤ c ≤ m. (ii) Show that …, except that c
Verify the property of complete degeneracy (11.56).
Verify (11.53) and also show that θ(F) = var(X1) for the sample variance.
Verify the identity (11.50).
Show that the functional h defined below (11.45) is continuous on (D, ‖ · ‖).
Verify (11.42) and thus, in particular, (11.43) and (11.44) under the null hypothesis (11.39).
Continue with the previous exercise. (i) Verify the identity (11.38). (ii) Given that (11.36) holds with TN = WXY/mn, μ(θ) = Pθ(X < Y) = ∫{1 − F(x − θ)}f(x) dx, and τ(0) = 1/√{12ρ(1 −
(i) Show that (11.37) holds under the limiting process of (i) of the previous exercise, provided that θN is the true θ for TN. You may use a similar argument as in Example 11.4 and the following
Consider the pooled sample variance, S²p, of Example 11.5. (i) Show that S²p →P σ² as m, n → ∞ such that m/N → ρ ∈ (0, 1), where N = m + n and σ² is the variance of F. (ii) Show that
In the case of testing for the center of symmetry, suppose that Xni, 1 ≤ i ≤ n, are independent observations with the cdf F(x − θn). Then for the t-test, we have Tn = X̄n = n⁻¹ Σ_{i=1}^n Xni
Verify that the ARE eW,t of (11.31) is given by (11.34) in the case of Example 11.4.
Verify that the ARE eS,t of (11.30) is given by (11.33) in the case of Example 11.3.
Evaluate the AREs (11.30)–(11.32) when F is the following distribution: (i) Double Exponential with pdf f(z) = (1/2σ)e^{−|z|/σ}, −∞ < z < ∞, where σ > 0. (ii) Logistic with pdf f(z) =
Show that for the problem of testing for the center of symmetry discussed in Sections 11.2 and 11.3, the efficacies of the t, sign, and Wilcoxon signed-rank tests are given by 1/σ, 2f(0), and
This exercise has several parts. (i) Suppose that X has a continuous distribution F. Show that F(X) has the Uniform[0, 1] distribution. [Hint: Use (7.4) and the facts that F(x) ≥ u if and only if x
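Part (i) is the probability integral transform; a one-line sketch using the right-continuous inverse F⁻¹(u) = inf{x : F(x) ≥ u} from the hint:

```latex
P\{F(X) \ge u\} = P\{X \ge F^{-1}(u)\} = 1 - F\{F^{-1}(u)\} = 1 - u, \qquad 0 < u < 1,
```

where the last two equalities use the continuity of F (no atoms, and F(F⁻¹(u)) = u), so F(X) ∼ Uniform[0, 1].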
Show that (11.22) holds as n → ∞ if θ is the true center of symmetry.
This exercise is to show that both sides of inequality (11.19) are sharp in that there are distributions F that are continuous and symmetric about zero for which the left- or right-side equalities
Show that the asymptotic correlation coefficient between S2 and S3 in (11.18), which correspond to the test statistics of the signed-rank and sign tests, is equal to √3/2.
Verify (11.6).
Verify (11.13)–(11.15).
Verify (11.6)–(11.8).
Verify that for the diffusion process in Example 10.16, the density function (10.57) reduces to e^{−2|x−μ|}.
Show that the process W(a), a ≥ 0, defined below (10.54) is a Brownian motion (Hint: Use Lemmas 10.1 and 10.2).
This exercise is related to the heat equation (Example 10.15). (i) Verify that the pdf of N(x, t) satisfies the heat equation (10.48). (ii) Verify that the function u(t, x) defined by (10.49) satisfies
Verify that τa defined by (10.42) is a stopping time (whose definition is given above Lemma 10.2).
Let B(t), t ≥ 0, be Brownian motion and a < 0
This exercise is associated with Example 10.11. (i) Verify (10.36) and (10.37). (ii) Show, by using the Lindeberg–Feller theorem (Theorem 6.11; use the extended version following that theorem), that
Show that the Brownian bridge U(t), 0 ≤ t ≤ 1 (defined below Theorem 10.15), is a Gaussian process with mean 0 and covariances cov{U(s),U(t)} = s(1 − t), s ≤ t .
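A sketch, assuming the standard construction U(t) = B(t) − tB(1) (the usual definition below Theorem 10.15): Gaussianity and zero mean follow from linearity, and for s ≤ t,

```latex
\operatorname{cov}\{U(s), U(t)\}
= \min(s,t) - t\min(s,1) - s\min(1,t) + st\operatorname{var}\{B(1)\}
= s - st - st + st = s(1 - t).
```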
Prove the SLLN for Brownian motion (Theorem 10.14). [Hint: First, show the sequence B(n)/n, n = 1, 2, . . . , converges to zero almost surely as n →∞; then show that B(t) does not oscillate too
Let B(t), t ≥ 0, be a Brownian motion. Show that B2(t) − t, t ≥ 0, is a continuous martingale in that for any s < t, E{B2(t) − t |B(u), u ≤ s} = B2(s) − s.
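The key computation, as a sketch: write B(t) = B(s) + {B(t) − B(s)} and use that the increment is independent of B(u), u ≤ s, with mean 0 and variance t − s:

```latex
E\{B^2(t) \mid B(u), u \le s\}
= B^2(s) + 2B(s)\,E\{B(t) - B(s)\} + E\{B(t) - B(s)\}^2
= B^2(s) + (t - s),
```

so E{B²(t) − t | B(u), u ≤ s} = B²(s) − s.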
Let B(t), t ≥ 0, be a standard Brownian motion. Show that each of the following is a standard Brownian motion: (1) (Scaling relation) a⁻¹B(a²t), t ≥ 0, where a ≠ 0 is fixed. (2) (Time inversion)
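Each part reduces to checking a mean-zero Gaussian process with continuous paths and covariance min(s, t); for example, for the scaling relation:

```latex
\operatorname{cov}\{a^{-1}B(a^2 s),\, a^{-1}B(a^2 t)\}
= a^{-2}\min(a^2 s,\, a^2 t) = \min(s, t).
```

For time inversion, the covariance check is the same; the only delicate point is path continuity at t = 0, which follows from the SLLN for Brownian motion (Theorem 10.14).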
In this exercise you are encouraged to give a proof of the reflection principle of Brownian motion (Theorem 10.13). (i) Show that if X, Y, and Z are random vectors such that (a) X and Y are
This exercise shows how to justify assumption (ii) given assumption (i) of Brownian motion (see Section 10.5). Suppose that assumption (i) holds such that E{B(t) − B(s)} = 0, E{B(t) − B(s)}² <
Derive (10.33) by Fubini’s theorem (see Appendix A.2).
This exercise is related to the proof of Theorem 10.10. (i) Verify (10.29). (ii) Show that ut → −x as t → ∞. (iii) Derive (10.31).
Let N(t) be a renewal process. Show that N(t) + 1 is a stopping time with respect to the σ-fields Fn = σ(X1, . . . , Xn), n ≥ 1 [see Section 8.2, above (8.5), for the definition of a stopping
Show that the renewal function has the following expression: m(t) = Σ_{n=1}^∞ Fn(t), where Fn(·) is the cdf of Sn.
Let U be a random variable that has the Uniform(0, 1) distribution. Define ξn = n·1(U ≤ n⁻¹), n ≥ 1. (i) Show that ξn →a.s. 0 as n → ∞. (ii) Show that E(ξn) = 1 for every n, and therefore
Give a proof of Theorem 10.7. [Hint: Note that SN(t) ≤ t < SN(t)+1; then use the result of Theorem 10.6.]
Compare the distribution of a Poisson process N(t) with rate λ = 1 with the approximating normal distribution. According to Theorem 10.4, we have {N(t) − t}/√t →d N(0, 1) as t → ∞.
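A minimal numerical check (a sketch using scipy; the values of t and the grid are illustrative, not from the text):

```python
# Compare the exact Poisson(t) law of N(t) (rate 1) with the N(t, t) approximation.
from scipy import stats

for t in (5, 25, 100):
    pois = stats.poisson(mu=t)
    norm = stats.norm(loc=t, scale=t ** 0.5)
    # Largest CDF discrepancy near the mean, with a continuity correction.
    xs = range(max(0, int(t - 5 * t ** 0.5)), int(t + 5 * t ** 0.5) + 1)
    gap = max(abs(pois.cdf(x) - norm.cdf(x + 0.5)) for x in xs)
    print(f"t = {t:3d}: max |Poisson CDF - normal CDF| = {gap:.4f}")
```

The discrepancy shrinks at roughly the 1/√t rate suggested by the CLT.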
Two balanced dice are rolled 36 times. Each time the probability of “double six” (i.e., six on each die) is 1/36. Consider this as a situation of the Poisson approximation to binomial. The
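Assuming the truncated text goes on to compare binomial and Poisson probabilities: here n = 36 and p = 1/36, so λ = np = 1. A quick check (sketch):

```python
# Binomial(36, 1/36) vs. Poisson(1) probabilities for the number of "double sixes".
from math import comb, exp, factorial

n, p, lam = 36, 1 / 36, 1.0
for k in range(4):
    binom = comb(n, k) * p ** k * (1 - p) ** (n - k)
    poiss = exp(-lam) * lam ** k / factorial(k)
    print(f"k = {k}: binomial {binom:.4f}  Poisson {poiss:.4f}")
# For k = 0 this gives 0.3627 vs. 0.3679; the two distributions are already close.
```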
Derive (10.25) and (10.26). Also obtain the corresponding results for a Poisson process using Theorem 10.5.
Prove Theorem 10.5. [Hint: First derive an expression for P{ti ≤ Si ≤ ti + hi, 1 ≤ i ≤ n | N(t) = n}; then let hi → 0, 1 ≤ i ≤ n.]
Show that the right side of (10.23) converges to e^{−λt}(λt)^x/x! for x = 0, 1, . . . .
For the third definition of a Poisson process, derive the pdf of Sn, the waiting time until the nth event. To what family of distributions does the pdf belong?
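For orientation, assuming the third definition specifies i.i.d. Exponential(λ) interarrival times (the standard setup), Sn is a sum of n of them, so:

```latex
f_{S_n}(t) = \frac{\lambda e^{-\lambda t} (\lambda t)^{n-1}}{(n-1)!}, \qquad t > 0,
```

i.e., the gamma family (shape n, rate λ), also called the Erlang distribution.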
Consider a birth and death chain with two reflecting barriers (i.e., the state space is {0, 1, . . . , l}); the transition probabilities are given as in Example 10.4 for 1 ≤ i ≤ l − 1; q0 = r0
This exercise is related to the birth and death chain of Example 10.4. (i) Show that the chain is irreducible if pi > 0, i ≥ 0, and qi > 0, i ≥ 1. (ii) Show that the chain is aperiodic if ri > 0
Consider a Markov chain with states 0, 1, 2, . . . such that p(i, i + 1) = pi and p(i, i − 1) = 1 − pi, where p0 = 1. Find the necessary and sufficient condition on the pi's for the chain to
Show that positive recurrence implies recurrence. Also show that positive (null) recurrence is a class property.
Show that if state j is transient, then (10.18) holds for all i. [Hint: By the note following (10.18), the left side of (10.18) is equal to the expected number of visits to j when the chain starts in
Derive the approximation (10.16) using Stirling’s approximation (see Example 3.4).
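Stirling's formula in the form n! ∼ √(2πn)(n/e)ⁿ is the main tool; assuming (10.16) is the usual return-probability estimate for the simple random walk, the computation runs:

```latex
p^{(2n)}(i, i) = \binom{2n}{n} p^n q^n
\sim \frac{\sqrt{4\pi n}\,(2n/e)^{2n}}{2\pi n\,(n/e)^{2n}}\,(pq)^n
= \frac{(4pq)^n}{\sqrt{\pi n}}.
```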
Show that in Example 10.2, the Markov chain is aperiodic if and only if a0 ≠ 0. Also show that in the special case of simple random walk with 0 < p < 1, we have d(i) = 2 for all i ∈ S.
In Example 10.2, if a−1 = a1 = 0 but a−2 and a2 are nonzero, what states communicate? What if a1 ≠ 0 but a−1 = 0?
Show that if i is recurrent and i ↔ j , then j is recurrent.
Show that i ↔ j implies d(i) = d(j).
Show that any two classes of states (see Section 10.2) are either disjoint or identical.
This exercise is related to Example 10.1 (continued in Section 10.2). (i) Show that the one- and two-step transition probabilities of the Markov chain are given by (10.9) and (10.10), respectively.
Show that the process Xn, n ≥ 1, in Example 10.2 is a Markov chain with transition probability p(i, j) = a_{j−i}, i, j ∈ S = {0, ±1, . . . }.
Derive the Chapman–Kolmogorov identity (10.7).
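The identity and the one-step conditioning that produces it (a sketch):

```latex
p^{(m+n)}(i, j) = \sum_{k \in S} p^{(m)}(i, k)\, p^{(n)}(k, j),
```

which follows by conditioning on the state k at time m and applying the Markov property.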
Show that the finite-dimensional distributions of a Markov chain Xn, n = 0, 1, 2, . . . , are determined by its transition probability p(·, ·) and initial distribution p0(·).
Show that (10.3) implies the Markov property (10.4).
Showing questions 100–200 of 3052.