Questions and Answers of Theory Of Probability
1.36 Let $X_1,\dots,X_n$ be iid with distribution $P_\theta$, and suppose $\delta_n$ is UMVU for estimating $g(\theta)$ on the basis of $X_1,\dots,X_n$. If there exists $n_0$ and an unbiased estimator $\delta_0(X_1,\dots,X_{n_0})$ which has finite …
1.35 Determine the limit behavior of the estimator (2.3.22) as $n \to \infty$. [Hint: Consider first the distribution of $\log \delta(T)$.]
1.34 Let $X_1,\dots,X_n$ be iid as $N(\xi, 1)$. Determine the limit behavior of the distribution of the UMVU estimator of $p = P[|X_i| \le u]$.
1.33 Let $X$ have the binomial distribution $b(p, n)$, and let $g(p) = pq$. The UMVU estimator of $g(p)$ is $\delta = X(n-X)/[n(n-1)]$. Determine the limit distribution of $\sqrt{n}(\delta - pq)$ and $n(\delta - pq)$ when …
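As an empirical companion to Problem 1.33, the sketch below simulates the rescaled error $\sqrt{n}(\delta - pq)$ for the UMVU estimator $\delta = X(n-X)/[n(n-1)]$ and compares its spread with the delta-method variance $(1-2p)^2pq$. The values of $n$, $p$, and the replication count are arbitrary illustrative choices, not part of the problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 500, 0.3, 100_000          # illustrative values only
q = 1 - p

X = rng.binomial(n, p, size=reps)
delta = X * (n - X) / (n * (n - 1))     # UMVU estimator of pq
Z = np.sqrt(n) * (delta - p * q)        # rescaled estimation error

print("mean of Z            :", Z.mean())                  # close to 0
print("variance of Z        :", Z.var())
print("delta-method variance:", (1 - 2 * p) ** 2 * p * q)   # limit variance when p != 1/2
```

Near $p = 1/2$ the factor $(1-2p)^2$ vanishes, which is presumably why the problem also asks about the $n(\delta - pq)$ scaling.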
1.32 In Example 8.13, let $\delta_{4n} = \max(0, \bar X^2 - \sigma^2/n)$, which is an improvement over $\delta_{1n}$. (a) Show that $\sqrt{n}(\delta_{4n} - \theta^2)$ has the same limit distribution as $\sqrt{n}(\delta_{1n} - \theta^2)$ when $\theta \ne 0$. (b) …
1.31 In Example 8.13 with $\theta = 0$, show that $\delta_{2n}$ is not exactly distributed as $\sigma^2(\chi^2_1 - 1)/n$.
1.30 Fill in the details of the proof of Theorem 1.9. (See also Problem 1.8.8.)
1.29 Use the $t$-distribution to find the value of $P_n(0, \sigma)$ in the preceding problem for the UMVU estimator of $\xi^2$ when $\sigma$ is unknown, for representative values of $n$.
1.28 On the basis of a sample from $N(\xi, \sigma^2)$, let $P_n(\xi, \sigma)$ be the probability that the UMVU estimator $\bar X^2 - \sigma^2/n$ of $\xi^2$ ($\sigma$ known) is negative. (a) Show that $P_n(\xi, \sigma)$ is a decreasing function …
1.27 (a) Under the assumptions of Theorem 1.5, if all fourth moments of the $X_{i\nu}$ are finite, show that $E(\bar X_i - \xi_i)(\bar X_j - \xi_j) = \sigma_{ij}/n$ and that all third and fourth moments $E(\bar X_i - \dots$
1.26 For the estimators of Example 1.13: (a) Calculate their exact variances. (b) Use the result of part (a) to verify (1.27).
1.25 For the situation of Example 1.12, show that the UMVU estimator $\delta_{1n}$ is the bias-corrected MLE, where the MLE is $\delta_{3n}$.
1.24 Find the variance of the estimator (2.3.17) up to terms of the order $1/n^3$.
1.23 Calculate the variance (1.18) to terms of order $1/n^2$ and compare it with the expected squared error of the MLE carried to the same order.
1.22 For the estimands of Problem 1.4, calculate the expected squared error of the MLE to terms of order $1/n^2$, and compare it with the variance calculated in Problem 1.21.
1.21 Carry the calculation of Problem 1.4 to terms of order $1/n^2$.
1.19 If the $X$'s are as in Theorem 1.1 and if the first five derivatives of $h$ exist and the fifth derivative is bounded, show that $E[h(\bar X)] = h(\xi) + \frac{1}{2}h''(\xi)\,\frac{\sigma^2}{n} + \frac{1}{24n^2}[4h'''(\xi)\mu_3 + 3h^{(iv)}(\xi)\sigma^4] + \dots$
1.18 Apply the results of Problem 1.17 to obtain approximate answers to Problems 1.15 and 1.16, and compare the answers with the exact solutions.
1.17 Let $X_1,\dots,X_n$ be iid according to $U(0, \theta)$, let $T = \max(X_1,\dots,X_n)$, and let $h$ be a function satisfying the conditions of Theorem 1.1. Show that $E[h(T)] = h(\theta) - \frac{\theta}{n}\,h'(\theta) + \frac{1}{n^2}[\theta h'(\theta) + \dots$
1.16 Under the assumptions of Problem 1.15, find the MLE of $\theta^k$ and compare its expected squared error with the variance of the UMVU estimator.
1.15 Let $X_1,\dots,X_n$ be iid according to $U(0, \theta)$. Determine the variance of the UMVU estimator of $\theta^k$, where $k$ is an integer, $k > -n$.
1.14 Let $X_1,\dots,X_n$ be iid from the exponential distribution with density $(1/\theta)e^{-x/\theta}$, $x > 0$, $\theta > 0$. (a) Use Theorem 1.1 to find approximations to $E(\sqrt{\bar X})$ and $\operatorname{var}(\sqrt{\bar X})$. (b) Verify the …
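For Problem 1.14(a), the Theorem 1.1 (delta-method) approximations work out to $E(\sqrt{\bar X}) \approx \sqrt{\theta}\,(1 - 1/(8n))$ and $\operatorname{var}(\sqrt{\bar X}) \approx \theta/(4n)$ when the $X_i$ are exponential with mean $\theta$; the sketch below only checks these against a simulation, with $\theta$, $n$, and the replication count picked arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 50, 100_000       # illustrative values only

xbar = rng.exponential(theta, size=(reps, n)).mean(axis=1)
root = np.sqrt(xbar)

print("MC     E(sqrt(Xbar))  :", root.mean())
print("approx E(sqrt(Xbar))  :", np.sqrt(theta) * (1 - 1 / (8 * n)))
print("MC     var(sqrt(Xbar)):", root.var())
print("approx var(sqrt(Xbar)):", theta / (4 * n))
```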
1.13 To see that Theorem 1.1 is not necessarily valid without boundedness of the fourth (or some higher) derivative, suppose that the $X$'s are distributed as $N(\xi, \sigma^2)$ and let $h(x) = e^{x^4}$. Then, all …
1.12 Obtain a variant of Theorem 1.1 which requires existence and boundedness of only $h'''$ instead of $h^{(iv)}$, but where $R_n$ is only $O(n^{-3/2})$. [Hint: Carry the expansion (1.6) only to the second …
1.11 Under the assumptions of Problem 1.1, show that $E|\bar X - \xi|^{2k-1} = O(n^{-k+1/2})$. [Hint: Use the fact that $E|\bar X - \xi|^{2k-1} \le [E(\bar X - \xi)^{4k-2}]^{1/2}$ together with the result of Problem …
1.10 Solve part (b) of the preceding problem for the estimator (2.3.22).
1.9 Let $X_1,\dots,X_n$ be iid as Poisson $P(\theta)$. (a) Determine the UMVU estimator of $P(X_i = 0) = e^{-\theta}$. (b) Calculate the variance of the estimator of (a) up to terms of order $1/n$. [Hint: Write the …
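For Problem 1.9(a), the standard answer is $\delta = ((n-1)/n)^T$ with $T = \sum X_i$, since $T$ is complete sufficient and $E[((n-1)/n)^T] = e^{-\theta}$. The sketch below, with arbitrary $\theta$ and $n$, simply checks this unbiasedness by simulation and contrasts it with the MLE $e^{-\bar X}$.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 1.5, 20, 200_000       # illustrative values only

T = rng.poisson(n * theta, size=reps)   # sufficient statistic, T ~ Poisson(n * theta)
umvu = ((n - 1) / n) ** T               # UMVU estimator of P(X_i = 0)
mle = np.exp(-T / n)                    # MLE exp(-Xbar)

print("target e^{-theta}:", np.exp(-theta))
print("mean of UMVU     :", umvu.mean())   # essentially unbiased
print("mean of MLE      :", mle.mean())    # visibly biased upward for small n
```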
1.8 Solve the preceding problem if $p^m$ is replaced by the estimand of Problem 2.3.3.
1.7 For estimating $p^m$ in Example 3.3.1, determine, up to order $1/n$, (a) the variance of the UMVU estimator (2.3.2); (b) the bias of the MLE.
1.6 Solve the preceding problem for the case that ξ is unknown.
1.5 Let $X_1,\dots,X_n$ be iid as $N(\xi, \sigma^2)$, $\xi$ known. For even $r$, determine the variance of the UMVU estimator (2.2.4) of $\sigma^r$ up to terms of order $r$.
1.4 Let $X_1,\dots,X_n$ be iid as $N(\xi, \sigma^2)$, $\sigma^2$ known, and let $g(\xi) = \xi^r$, $r = 2, 3, 4$. Determine, up to terms of order $1/n$, (a) the variance of the UMVU estimator of $g(\xi)$; (b) the bias of the MLE of $g(\xi)$.
1.3 Prove Theorem 1.5.
1.2 For fixed n, describe the relative error in Example 1.3 as a function of p.
1.1 Let $X_1,\dots,X_n$ be iid with $E(X_i) = \xi$. (a) If the $X_i$ have a finite fourth moment, establish (1.3). (b) For $k$ a positive integer, show that $E(\bar X - \xi)^{2k-1}$ and $E(\bar X - \xi)^{2k}$, if they exist, …
7.32 Efron (1990), in a discussion of Brown's (1990a) ancillarity paradox, proposed an alternate version. Suppose $X \sim N_r(\mu, I)$, $r > 2$, and with probability $1/r$, independent of $X$, the value of …
7.31 Brown's ancillarity paradox. Let $X \sim N_r(\mu, I)$, $r > 2$, and consider the estimation of $w'\mu = \sum_{i=1}^r w_i\mu_i$, where $w$ is a known vector with $w_i^2 > 0$, using loss function $L(\mu, d) = (w'\mu - \dots$
7.30 For $X \sim N_r(\theta, I)$, consider estimation of $\varphi'\theta$, where $\varphi_{r\times 1}$ is known, using the estimator $a'X$ with loss function $L(\varphi'\theta, \delta) = (\varphi'\theta - \delta)^2$. (a) Show that if $a$ lies outside the sphere …
7.29 Suppose we observe $X_1, X_2, \dots$ sequentially, where $X_i \sim f_i(x|\theta_i)$. An estimator of $\theta^j = (\theta_1, \theta_2, \dots, \theta_j)$ is called nonanticipative (Gutmann 1982b) if it only depends on $(X_1, X_2, \dots, X_j)$.
7.28 For $i = 1, 2, \dots, k$, let $X_i \sim f_i(x|\theta_i)$ and suppose that $\delta^*_i(x_i)$ is a unique Bayes estimator of $\theta_i$ under the loss $L_i(\theta_i, \delta)$, where $L_i$ satisfies $L_i(a, a) = 0$ and $L_i(a, a') > 0$ for $a \ne a'$.
7.27 Fill in the gaps in the proof that estimators $\delta_\pi$ of the form (7.27) are a complete class. (a) Show that $\delta_\pi$ is admissible when $r = -1$, $s = n + 1$, and $r + 1 = s$. (b) For any other estimator …
7.26 For the situation of Example 7.23: (a) Show that $X/n$ and $\frac{n}{n+1}\,\frac{X}{n}\bigl(1 - \frac{X}{n}\bigr)$ are admissible for estimating $p$ and $p(1-p)$, respectively.
7.25 Theorem 7.17 also applies to the Poisson($\lambda$) case, where Johnstone (1984) obtained the following characterization of admissible estimators for the loss $L(\lambda, \delta) = \sum_{i=1}^r (\lambda_i - \delta_i)^2/\lambda_i$. A …
7.24 (a) Verify the Laplace approximation of (7.23). (b) Show that, for $h(|x|) = k/|x|^{2\alpha}$, (7.25) can be written as (7.26) and that $\alpha = 1$ is needed for an estimator to be both admissible and minimax.
7.23 Establish conditions for the admissibility of Strawderman's estimator (Example 5.6): (a) using Theorem 7.19; (b) using the results of Brown (1971), given in Example 7.21. (c) Give conditions under …
7.22 Verify that the conditions of Theorem 7.19 are satisfied for $g(\theta) = 1/|\theta|^k$ if (a) $k > r - 2$ and (b) $k = r - 2$.
7.21 In Example 7.20, if $g(\theta) = 1/|\theta|^k$ is a proper prior, then $\delta_g$ is admissible. For what values of $k$ is this the case?
7.20 For the situation of Example 7.20: (a) Using integration by parts, show that …
7.19 Brown and Hwang (1982) actually prove Theorem 7.19 for the case $f(x|\theta) = e^{\theta' x - \psi(\theta)}$, where we are interested in estimating $\tau(\theta) = E_\theta(X) = \nabla\psi(\theta)$ under the loss $L(\theta, \delta) = |\tau \dots$
7.18 This problem will outline the argument needed to prove Theorem 7.19. (a) Show that $\nabla m_g(x) = m_{\nabla g}(x)$, that is, $\nabla \int g(\theta)\, e^{-|x-\theta|^2}\, d\theta = \int [\nabla g(\theta)]\, e^{-|x-\theta|^2}\, d\theta$. (b) Using part (a), …
7.17 The identity (7.14) can be established in another way. For the situation of Example 7.18, show that $r(\pi, \delta_g) = r - 2\int [\nabla \log m_\pi(x)] \cdot [\nabla \log m_g(x)]\, m_\pi(x)\, dx + \int |\nabla \log m_g(x)|^2\, m_\pi(x)\, dx$, which implies $r(\pi, \delta_\pi) = r - \int |\nabla \log m_\pi(x)|^2\, m_\pi(x)\, dx$, and hence deduce (7.14).
7.16 (i) Show that, in general, if $\delta_\pi$ is the Bayes estimator under squared error loss, then $r(\pi, \delta_g) - r(\pi, \delta_\pi) = E|\delta_\pi(X) - \delta_g(X)|^2$, thus establishing (7.13). (ii) Prove (7.15). (iii) …
7.15 Use Blyth's method to establish admissibility in the following situations. (a) If $X \sim \mathrm{Gamma}(\alpha, \beta)$, $\alpha$ known, then $x/\alpha$ is an admissible estimator of $\beta$ using the loss function $L(\beta, \delta) = (\beta \dots$
7.14 Let $X \sim \mathrm{Poisson}(\lambda)$. Use Blyth's method to show that $\delta_0 = X$ is an admissible estimator of $\lambda$ under the loss function $L(\lambda, \delta) = (\lambda - \delta)^2$ with the following steps: (a) Show that the …
7.13 Fill in some of the gaps in Example 7.14: (i) Verify the expressions for the posterior expected losses of $\delta_0$ and $\delta_\pi$ in (7.7). (ii) Show that the normalized beta priors will not satisfy …
7.12 Prove the following (equivalent) version of Blyth's Method (Theorem 7.13). Theorem 8.7: Suppose that the parameter space $\Theta \subset \Re^r$ is open, and estimators with continuous risks are a complete …
7.11 For $X \sim f(x|\theta)$ and loss function $L(\theta, \delta) = \sum_{i=1}^r \theta_i^m (\theta_i - \delta_i)^2$, show that condition (iii) of Theorem 7.11 holds.
7.10 Referring to Theorem 7.11, this problem shows that the assumption of continuity of $f(x|\theta)$ in $\theta$ cannot be relaxed. Consider the density $f(x|\theta)$ that is $N(\theta, 1)$ if $\theta \le 0$ and $N(\theta + 1, 1)$ …
7.9 A family of functions $\mathcal F$ is equicontinuous at the point $x_0$ if, given $\varepsilon > 0$, there exists $\delta$ such that $|f(x) - f(x_0)| < \varepsilon$ for all $|x - x_0| < \delta$ and all $f \in \mathcal F$. (The same $\delta$ works for all $f$ …
7.8 Referring to Theorem 8.5, show that condition (iii) is satisfied by (a) the exponential family, (b) continuous densities in which $\theta$ is a one-dimensional location or scale parameter.
7.7 Prove the following theorem, which gives sufficient conditions for estimators to have continuous risk functions. Theorem 8.5 (Ferguson 1967, Theorem 3.7.1): Consider the estimation of $\theta$ with loss …
7.6 Show that, in the following estimation problems, all risk functions are continuous. (a) Estimate $\theta$ with $L(\theta, \delta(x)) = [\theta - \delta(x)]^2$, $X \sim N(\theta, 1)$. (b) Estimate $\theta$ with $L(\theta, \delta(x)) = |\theta - \dots$
7.5 A decision problem is monotone (as defined by Karlin and Rubin 1956; see also Brown, Cohen and Strawderman 1976 and Berger 1985, Section 8.4) if the loss function $L(\theta, \delta)$ is, for each $\theta$, …
7.4 For the situation of Example 7.8, show that if $\delta_0$ is any estimator of $\theta$, then the class of all estimators with $\delta(x) < \delta_0(x)$ for some $x$ is complete.
7.3 Fill in the details of the proof of Lemma 7.5.
7.2 Efron and Morris (1973a) give the following derivation of the positive-part Stein estimator as a truncated Bayes estimator. For $X \sim N_r(\theta, \sigma^2 I)$, $r \ge 3$, and $\theta \sim N(0, \tau^2 I)$, where $\sigma^2$ is …
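For reference while working Problem 7.2, here is a minimal, self-contained sketch of the positive-part Stein estimator itself (shrinking toward 0, with $\sigma^2$ known); the function name and the example input are my own, not from the text.

```python
import numpy as np

def positive_part_stein(x: np.ndarray, sigma2: float = 1.0) -> np.ndarray:
    """Positive-part James-Stein shrinkage of x toward the origin (needs r >= 3)."""
    r = x.size
    if r < 3:
        raise ValueError("positive-part Stein shrinkage requires dimension r >= 3")
    shrink = 1.0 - (r - 2) * sigma2 / float(np.sum(x ** 2))
    return max(shrink, 0.0) * x

# an observation close to the shrinkage target is pulled all the way to 0
print(positive_part_stein(np.array([0.3, -0.2, 0.4, 0.1])))
```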
7.1 Establish the claim made in Example 7.2. Let $X_1$ and $X_2$ be independent random variables, $X_i \sim N(\theta_i, 1)$, and let $L((\theta_1, \theta_2), \delta) = (\theta_1 - \delta)^2$. Show that $\delta = \operatorname{sign}(X_2)$ is an admissible …
6.20 In Example 6.12, we saw improved estimators for the success probability of negative binomial distributions. Similar results hold for estimating the means of the negative binomial distributions, …
6.19 For the situation of Example 6.12: (a) Show that the estimator $\delta_0(x) + g(x)$, for $g(x)$ of (6.45), dominates $\delta_0$ in risk under the loss $L_{-1}(\theta, \delta)$ of (6.38) by establishing that $D(x) \le 0$. (b) …
6.18 For the situation of Example 6.11: (a) Establish that $x + g(x)$, where $g(x)$ is given by (6.42), satisfies $D(x) \le 0$ for the loss $L_0(\theta, \delta)$ of (6.38), and hence dominates $x$ in risk. (b) Derive …
6.17 (a) Prove Lemma 6.9. [Hint: Change variables from $x$ to $x - e_i$, and note that $h_i$ must be defined so that $\delta_0(0) = 0$.] (b) Prove that for $X \sim p_i(x|\theta)$, where $p_i(x|\theta)$ is given by (6.36), $\delta_0(x)$ …
6.16 Let $X_i \sim \mathrm{binomial}(p, n_i)$, $i = 1,\dots,r$, where the $n_i$ are unknown and $p$ is known. The estimation target is $n = (n_1,\dots,n_r)$ with loss function $L(n, \delta) = \sum_{i=1}^r \frac{1}{n_i}(n_i - \delta_i)^2$.
6.15 For $X_i \sim \mathrm{Poisson}(\lambda_i)$, $i = 1,\dots,r$, independent, and loss function $L(\lambda, \delta) = \sum_i (\lambda_i - \delta_i)^2/\lambda_i$: (a) For what values of $a$, $\alpha$, and $\beta$ are the estimators of (4.6.29) minimax? Are they also …
6.14 For the situation of Example 6.7: (a) Verify that the estimator (6.25) is minimax if $0 \le c \le 2$. (Theorem 5.5 will apply.)
6.13 Prove Lemma 6.2.
6.12 For the situation of Example 6.5: (a) Show that $E_\sigma\bigl(\frac{1}{\sigma^2}\bigr) = E_0\bigl(\frac{r-2}{|X|^2}\bigr)$. (b) If $1/\sigma^2 \sim \chi^2_\nu/\nu$, then $f(|x - \theta|)$ of (6.19) is the multivariate $t$-distribution, with $\nu$ degrees of freedom …
6.11 Prove the following extension of Theorem 5.5 to the case of unknown variance, due to Strawderman (1973). Theorem 8.4: Let $X \sim N_r(\theta, \sigma^2 I)$ and let $S^2/\sigma^2 \sim \chi^2_\nu$, independent of $X$. The …
6.10 The positive-part Lindley estimator of Problem 6.9 has an interesting interpretation in the one-way analysis of variance, in particular with respect to the usual test performed, that of $H_0\colon \theta_1$ …
6.8 For the situation of Example 6.4, the analogous modification of the Lindley estimator (6.1) is $\delta_L = \bar x\mathbf{1} + \Bigl(1 - \frac{r-3}{\sum(x_i - \bar x)^2/\hat\sigma^2}\Bigr)(x - \bar x\mathbf{1})$, where $\hat\sigma^2 = S^2/(\nu + 2)$ and $S^2/\sigma^2$ …
6.7 In Example 6.4: (a) Verify the risk function (6.13). (b) Verify that for unknown $\sigma^2$, the risk function of the estimator (6.14) is given by (6.15). (c) Show that the minimum risk of the estimator …
6.6 The Green and Strawderman (1991) estimator $\delta_c(x, y)$ can be derived as an empirical Bayes estimator. (a) For $X|\theta \sim N_r(\theta, \sigma^2 I)$, $Y|\theta, \xi \sim N_r(\theta + \xi, \tau^2 I)$, $\xi \sim N(0, \gamma^2 I)$, and …
6.5 For the situation of Example 6.3: (a) Show that $\delta_c(x, y)$ is minimax if $0 \le c \le 2$. (b) Show that if $\xi = 0$, $R(\theta, \delta_1) = 1 - \frac{\sigma^2}{\sigma^2+\tau^2}\,\frac{r-2}{r}$, $R(\theta, \delta_{\mathrm{comb}}) = 1 - \frac{\sigma^2}{\sigma^2+\tau^2}$, and, …
6.4 Consider the problem of estimating the mean based on $X \sim N_r(\theta, I)$, where it is thought that $\theta_i = \sum_{j=1}^s \beta_j t_{ji}$, where the $t_{ji}$ are known, $(\beta_1,\dots,\beta_s)$ are unknown, and $r - s > 2$. (a) …
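A plausible reading of the estimator this setup points to (not stated explicitly in the excerpt, so treat it as an assumption) is: project $x$ onto the column space spanned by the known constants, then apply Stein-type shrinkage to the residual using the factor $r - s - 2$. A sketch with made-up dimensions and data:

```python
import numpy as np

rng = np.random.default_rng(3)
r, s = 10, 2
T = rng.normal(size=(r, s))             # stand-in for the known constants t_{ji}
x = rng.normal(size=r)                  # stand-in observation X ~ N_r(theta, I)

P = T @ np.linalg.solve(T.T @ T, T.T)   # projection onto the column space of T
fit = P @ x                             # least-squares fit of the working model
resid = x - fit
shrink = max(1.0 - (r - s - 2) / float(np.sum(resid ** 2)), 0.0)  # positive-part variant
delta = fit + shrink * resid            # shrink the residual toward the subspace
print(delta)
```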
6.3 In Example 6.2: (a) Show that $kx$ is the MLE if $\theta \in L_k$. (b) Show that $\delta_k(x)$ of (6.8) is minimax under squared error loss. (c) Verify that $\theta_i$ of the form (6.4) satisfy $T(T'T)^{-1}T'\theta = \theta$ …
6.2 In Example 6.1, show that: (a) The estimator $\delta_L$ is minimax if $r \ge 4$ and $c \le 2$. (b) The risk of $\delta_L$ is infinite if $r \le 3$. (c) The minimum risk is equal to $3/r$, and is attained at $\theta_1 = \theta_2 = \dots$
6.1 Referring to Example 6.1, this problem will establish the validity of the expression (6.2) for the risk of the estimator $\delta_L$ of (6.1), using an argument similar to that in the proof of Theorem …
5.25 In the spirit of Stein's “large $r$ and $|\theta|$” argument, Casella and Hwang (1982) investigated the limiting risk ratio of $\delta^{JS}(x) = \bigl(1 - \frac{r-2}{|x|^2}\bigr)x$ to that of $x$. If $X \sim N_r(\theta, I)$ …
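Problem 5.25 treats the limiting risk ratio analytically; as a numerical companion, the sketch below estimates $R(\theta, \delta^{JS})/R(\theta, X)$ by Monte Carlo for $X \sim N_r(\theta, I)$ at a single point $\theta$ (the dimension, $|\theta|$, and replication count are arbitrary choices of mine).

```python
import numpy as np

rng = np.random.default_rng(4)
r, reps = 20, 50_000                     # illustrative values only
theta = np.full(r, 2.0 / np.sqrt(r))     # a point with |theta|^2 = 4

X = theta + rng.normal(size=(reps, r))
norm2 = np.sum(X ** 2, axis=1)
delta = (1 - (r - 2) / norm2)[:, None] * X   # James-Stein estimator shrinking toward 0

risk_js = np.mean(np.sum((delta - theta) ** 2, axis=1))
risk_x = float(r)                        # exact risk of the unbiased estimator X
print("estimated risk ratio:", risk_js / risk_x)
```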
5.24 For the most part, the risk function of a Stein estimator increases as $|\theta|$ moves away from zero (if zero is the shrinkage target). To guarantee that the risk function is monotone increasing in …
5.23 Let $\chi^2_r(\lambda)$ be a $\chi^2$ random variable with $r$ degrees of freedom and noncentrality parameter $\lambda$. (a) Show that $E\bigl[\frac{1}{\chi^2_r(\lambda)}\bigr] = E\bigl[E\bigl[\frac{1}{\chi^2_{r+2K}} \big| K\bigr]\bigr] = E\bigl[\frac{1}{r-2+2K}\bigr]$, where $K \sim \mathrm{Poisson}(\lambda/2)$. (b) …
5.22 The early proofs of minimaxity of Stein estimators (James and Stein 1961, Baranchik 1970) relied on the representation of a noncentral $\chi^2$-distribution as a Poisson sum of central $\chi^2$ (TSH2, …
5.21 Let $X_i$, $Y_j$ be independent $N(\xi_i, 1)$ and $N(\eta_j, 1)$, respectively ($i = 1,\dots,r$; $j = 1,\dots,s$). (a) Find an estimator of $(\xi_1,\dots,\xi_r; \eta_1,\dots,\eta_s)$ that would be good near $\xi_1 = \dots = \xi_r = \xi$, $\eta_1$ …
5.20 For $X|\theta \sim N_r(\theta, I)$, George (1986a, 1986b) looked at multiple shrinkage estimators, those that can shrink to a number of different targets. Suppose that $\theta \sim \pi(\theta) = \sum_{j=1}^k \omega_j \pi_j(\theta)$ …
5.19 The property of superharmonicity, and its relationship to minimaxity, is not restricted to Bayes estimators. For $X \sim N_r(\theta, I)$, a pseudo-Bayes estimator (so named, and investigated by Bock, …
5.18 Verify (5.27). [Hint: Show that, as a function of $|x|^2$, the only possible interior extremum is a minimum, so the maximum must occur either at $|x|^2 = 0$ or $|x|^2 = \infty$.]
5.17 Let $X \sim N_r(\theta, I)$. Show that the Bayes estimator of $\theta$, against squared error loss, is given by $\delta(x) = x + \nabla \log m(x)$, where $m(x)$ is the marginal density function and $\nabla f = \{\partial f/\partial x_i\}$.
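A case where the identity of Problem 5.17 can be checked by hand is the conjugate normal prior: if $\theta \sim N_r(0, \tau^2 I)$, the marginal $m$ is $N_r(0, (1+\tau^2)I)$ and $x + \nabla \log m(x) = \frac{\tau^2}{1+\tau^2}x$, the posterior mean. The snippet below verifies this numerically with a finite-difference gradient ($\tau^2$ and the test point are arbitrary).

```python
import numpy as np

tau2 = 2.0
x = np.array([0.5, -1.2, 3.0])

def log_m(z):
    # log of the marginal N_r(0, (1 + tau^2) I) density, up to an additive constant
    return -np.sum(z ** 2) / (2 * (1 + tau2))

eps = 1e-6
grad = np.array([(log_m(x + eps * e) - log_m(x - eps * e)) / (2 * eps)
                 for e in np.eye(x.size)])

print("x + grad log m(x):", x + grad)
print("posterior mean   :", tau2 / (1 + tau2) * x)   # the two agree
```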
5.16 A natural extension of the estimator (5.10) is to one that shrinks toward an arbitrary known point $\mu = (\mu_1,\dots,\mu_r)$: $\delta_\mu(x) = \mu + \Bigl(1 - c(S)\,\frac{r-2}{|x-\mu|^2}\Bigr)(x - \mu)$, where $|x - \mu|^2 = \dots$
5.15 There are various ways to seemingly generalize Theorems 5.5 and 5.9. However, if both the estimator and loss function are allowed to depend on the covariance and loss matrix, then linear …
5.14 Brown (1975) considered the performance of an estimator against a class of loss functions $\mathcal{L}(C) = \bigl\{L : L(\theta, \delta) = \sum_{i=1}^r c_i(\theta_i - \delta_i)^2;\ (c_1,\dots,c_r) \in C\bigr\}$ for a specified set $C$, and proved the …
5.13 Prove the following “generalization” of Theorem 5.9. Theorem 8.2: Let $X \sim N(\theta, \Sigma)$. An estimator of the form (5.13) is minimax against the loss $L(\theta, \delta) = (\theta - \delta)'Q(\theta - \delta)$, …
5.12 Complete the proof of Theorem 5.9. (a) Show that the risk of $\delta(x)$ is $R(\theta, \delta) = E_\theta\bigl[(\theta - X)'Q(\theta - X)\bigr] - 2E_\theta\Bigl[\frac{c(|X|^2)}{|X|^2}\, X'Q(\theta - X)\Bigr] + E_\theta\Bigl[\frac{c^2(|X|^2)}{|X|^4}\, X'QX\Bigr]$, where $E_\theta(\theta - X)'Q(\theta - \dots$