Theory of Probability: Questions and Answers
5.11 The estimation problem of (5.18), X ∼ N(θ, Σ), L(θ, δ) = (θ − δ)′Q(θ − δ), where both Σ and Q are positive definite matrices, can always be reduced, without loss of generality, to the
5.10 In Theorem 5.7, show that condition (i) allows the most shrinkage when Σ = σ²I, for some value of σ². That is, show that for all r × r positive definite Σ, maxΣ tr(Σ)/λmax(Σ) = tr
5.9 In Theorem 5.7, verify Eθ[(c(|X|²)/|X|²) X′(θ − X)] = Eθ[(c(|X|²)/|X|²) tr(Σ) − 2(c(|X|²)/|X|⁴) X′ΣX + 2(c′(|X|²)/|X|²) X′ΣX]. [Hint: There are several ways to do this: (a) Write Eθ[(c(|X|²)/|X|²) X′(θ −
5.8 (a) Let X ∼ N(θ, Σ) and consider the estimation of θ under the loss L(θ, δ) = (θ − δ)′(θ − δ). Show that R(θ, X) = tr Σ, the minimax risk. Hence, X is a minimax estimator. (b) Let
5.7 Faith (1978) considered the hierarchical model X|θ ∼ N(θ, I), θ|t ∼ N(0, (1/t)I), t ∼ Gamma(a, b), that is, π(t) = t^(a−1) e^(−t/b) / (Γ(a) b^a). (a) Show that the marginal prior for θ,
5.6 Consider a generalization of the Strawderman (1971) hierarchical model of Problem 5.5: X|θ ∼ N(θ, I), θ|λ ∼ N(0, λ⁻¹(1 − λ)I), λ ∼ π(λ). (a) Show that the Bayes estimator
5.5 For the hierarchical model (5.11) of Strawderman (1971): (a) Show that the Bayes estimator against squared error loss is given by E(θ|x) = [1 − E(λ|x)]x, where E(λ|x) = ∫₀¹
5.4 (a) Prove Theorem 5.5. (b) Apply Theorem 5.5 to establish conditions for minimaxity of Strawderman's (1971) proper Bayes estimator given by (5.10) and (5.12). [Hint: (a) Use the representation of
5.3 Stigler (1990) presents an interesting explanation of the Stein phenomenon using a regression perspective, and also gives an identity that can be used to prove the minimaxity of the James-Stein
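A minimal simulation sketch of the Stein phenomenon behind these problems, assuming the standard James-Stein form δ(X) = (1 − (r − 2)/|X|²)X for X ∼ Nr(θ, I) under squared error loss; the dimension, replication count, seed, and θ below are illustrative choices.

# Simulation sketch: empirical risk of X versus the James-Stein estimator for r >= 3.
import numpy as np

rng = np.random.default_rng(0)
r, n_rep = 10, 100_000
theta = np.full(r, 2.0)                       # an arbitrary fixed mean vector (illustrative)

X = rng.normal(loc=theta, scale=1.0, size=(n_rep, r))
sq_norm = np.sum(X**2, axis=1, keepdims=True)
delta_js = (1.0 - (r - 2) / sq_norm) * X      # shrink X toward 0

risk_X = np.mean(np.sum((X - theta)**2, axis=1))          # close to r
risk_js = np.mean(np.sum((delta_js - theta)**2, axis=1))  # smaller than r for every theta when r >= 3
print(f"risk of X: {risk_X:.3f}   risk of James-Stein: {risk_js:.3f}")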
5.2 In the context of Theorem 5.1, show that Eθ[1/|X|²] ≤ E₀[1/|X|²] < ∞. [Hint: The chi-squared distribution has monotone likelihood ratio in the noncentrality parameter.]
5.1 Show that the estimator δc defined by (5.2) with 0 < c = 1 − W < 1 is dominated by any δd with |d − 1| < W.
4.14 A natural extension of risk domination under a particular loss is to risk domination under a class of losses. Hwang (1985) defines universal domination of δ by δ′ if the inequality Eθ L(|θ
4.13 Assuming (4.25), show that Ê = 1 − [(r − 2)²/(r|X − µ|²)] is the unique unbiased estimator of the risk (5.4.25), and that Ê is inadmissible. [The estimator Ê is also unbiased for
4.12 Let L be a family of loss functions and suppose there exists L₀ ∈ L and a minimax estimator δ₀ with respect to L₀ such that, in the notation of (4.29), sup_{L,θ} R_L(θ, δ₀) = sup_θ R_{L₀}
4.11 In Example 4.8, show that X is admissible under the assumptions (ii)(a). [Hint: i. If v(t) > 0 is such that ∫ [1/v(t)] e^(−t²/(2τ²)) dt < ∞, show that there exists a constant k(τ) for which λτ(θ
4.10 In Example 4.7, show that X is admissible for (a) ρ3 and (b) ρ4. [Hint: (a) It is enough to show that X1 is admissible for estimating θ1 with loss (d1 − θ1)². This can be shown by letting
4.9 Show that the function ρ2 of Example 4.7 is convex.
4.8 In Example 4.7, show that R is nonsingular for ρ1 and ρ2 and singular for ρ3 and ρ4.
4.7 If S² is distributed as χ²_r, use (2.2.5) to show that E(S⁻²) = 1/(r − 2).
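A quick Monte Carlo check of this identity (the degrees of freedom, seed, and simulation size below are illustrative choices):

# For S^2 ~ chi-squared with r degrees of freedom and r > 2, E(1/S^2) should be close to 1/(r - 2).
import numpy as np

rng = np.random.default_rng(1)
r = 8
s2 = rng.chisquare(df=r, size=1_000_000)
print(np.mean(1.0 / s2), 1.0 / (r - 2))       # both approximately 0.1667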
4.6 Let X1, X2,...,Xr be independent with Xi ∼ N(θi, 1). The following heuristic argument, due to Stein (1956b), suggests that it should be possible, at least for large r and hence large |θ|, to
4.5 Establishing the admissibility of the normal mean in two dimensions is quite difficult, made so by the fact that the conjugate priors fail in the limiting Bayes method. Let X ∼ N2(θ, I) and
… 2.8, results in inequality (4.18), which does not establish admissibility. (b) Stein (in James and Stein 1961) proposed the sequence of priors that works to prove X is admissible by the limiting
4.4 Let Xi be independent with binomial distribution b(pi, ni), i = 1,...,r. For estimating p = (p1,...,pr) with average squared error loss (4.17), find the minimax estimator of p, and determine
4.3 Verify the Bayes estimator (4.15).
4.2 Show that a function µ satisfies (4.12) if and only if it depends only on ΣXᵢ².
4.1 In Example 4.2, show that an estimator δ is equivariant if and only if it satisfies (4.11) and (4.12).
3.15 Let the distribution of X depend on parameters θ and ϑ, let the risk function of an estimator δ = δ(x) of θ be R(θ, ϑ; δ), and let r(θ, δ) = ∫ R(θ, ϑ; δ) dP(ϑ) for some distribution P.
3.14 Prove the relations (3.22) and (3.23).
3.13 Show that the two estimators δ∗ and δ∗∗, defined by (3.20) and (3.21), respectively, are equivariant.
3.12 Show that the risk R(θ,δ) of (3.18) is finite.[Hint: R(θ,δ) < k>M|k+θ|1/(k + 1) ≤ c
3.11 (a) Show that the probabilities (3.17) add up to 1. (b) With pk given by (3.17), show that the risk (3.16) is infinite. [Hint: (a) 1/[k(k + 1)] = 1/k − 1/(k + 1).]
3.10 Discuss Example 3.8 for the case that the random walk instead of being in the plane is (a) on the line and (b) in three-space.
3.9 In Example 3.8, let h(θ) be the length of the path θ after cancellation. Show that h does not satisfy conditions (3.2.11).
3.8 Prove (3.11). [Hint: In the term on the left side, lim inf can be replaced by lim. Let the left side of (3.11) be A and the right side B, and let AN = inf h(a, b), where the inf is taken over a ≤
3.7 Prove formula (3.9).
3.6 (a) If X1,...,Xn are iid with density f (x−θ), show that the MRE estimator against squared error loss [the Pitman estimator of (3.1.28)] is the Bayes estimator against right-invariant Haar
3.5 Let Y be distributed as G(y − η). If T = [Y ] and X = Y − T , find the distribution of X and show that it depends on η only through η − [η].
3.4 In Example 3.3, show that neither of the loss functions [(d − θ)∗∗]² and |(d − θ)∗∗| is convex.
3.3 In Example 3.3, show that a loss function remains invariant under G if and only if it is a function of (d − θ)∗.
3.2 Verify the density (3.1).
3.1 Show that Theorem 3.2.7 remains valid for almost equivariant estimators.
2.25 Show that the natural parameter space of the family (2.16) is (−∞,∞) for the normal (variance known), binomial, and Poisson distribution but not in the gamma or negative binomial case.
2.24 Let X be distributed as N(θ, 1) and let θ have the improper prior density π(θ) = e^θ, −∞ < θ < ∞
2.23 Let X be distributed with density (1/2)β(θ) e^(θx) e^(−|x|), |θ| < 1. (a) Show that β(θ) = 1 − θ². (b) Show that aX + b is admissible for estimating Eθ(X) with squared error loss if and only
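For part (a), a sketch of the check by direct integration: ∫ e^(θx − |x|) dx = ∫_(−∞)^0 e^((1+θ)x) dx + ∫_0^∞ e^((θ−1)x) dx = 1/(1 + θ) + 1/(1 − θ) = 2/(1 − θ²) for |θ| < 1, so the density integrates to 1 exactly when (1/2)β(θ) · 2/(1 − θ²) = 1, that is, β(θ) = 1 − θ².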
2.22 Let X and Y be independently distributed according to Poisson distributions with E(X) = ξ and E(Y) = η, respectively. Show that aX + bY + c is admissible for estimating ξ with squared error loss
2.21 Show that the conditions (2.41) and (2.42) of Example 2.22 are not only sufficient but also necessary for admissibility of (2.40).
2.20 Let X have the Poisson(λ) distribution, and consider the estimation of λ under the loss (d − λ)²/λ with the restriction 0 ≤ λ ≤ m, where m is known. (a) Using an argument similar to
2.19 Determine which estimators aX + b are admissible for estimating E(X) in the following situations, for squared error loss: (a) X has a Poisson distribution. (b) X has a negative binomial
2.18 Use Theorem 2.14 to provide an alternative proof for the admissibility of the estimator aX̄ + b satisfying (2.6), in Example 2.5.
2.17 Prove admissibility of the estimators corresponding to the interior of the triangle (2.39), by applying Theorem 2.4 and using the results of Example 4.1.5.
2.16 In Example 2.17, show that the estimator aX/n + b is inadmissible for all (a, b) outside the triangle (2.39). [Hint: Problems 2.4–2.6.]
2.15 Show the equivalence of the following relationships: (a) (2.26) and (2.27), (b) (2.34) and (2.35) when c = √(n − 1)/(n + 1), and (c) (2.38) and (2.39).
2.14 For the situation of Example 2.15, let Z = X̄/S. (a) Show that the risk, under squared error loss, of δ = ϕ(z)s² is minimized by taking ϕ(z) = ϕ∗µ,σ(z) = E(S²/σ²|z)/E[(S²/σ²)²|z]. (b)
2.13 Let X1,...,Xn be iid according to a N(0, σ²) density, and let S² = ΣXᵢ². We are interested in estimating σ² under squared error loss using linear estimators cS² + d, where c and d are
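A simulation sketch comparing two multiples of S² with d = 0: the unbiased choice c = 1/n against c = 1/(n + 2), which minimizes the mean squared error σ⁴[c²(n² + 2n) − 2cn + 1] over c; the values of n, σ², the seed, and the replication count are illustrative choices.

# Compare MSE of c*S^2 for c = 1/n and c = 1/(n + 2), with S^2 = sum(X_i^2), X_i iid N(0, sigma^2).
import numpy as np

rng = np.random.default_rng(2)
n, sigma2, n_rep = 10, 2.0, 200_000
X = rng.normal(scale=np.sqrt(sigma2), size=(n_rep, n))
S2 = np.sum(X**2, axis=1)

for c in (1.0 / n, 1.0 / (n + 2)):
    mse = np.mean((c * S2 - sigma2)**2)
    print(f"c = {c:.4f}   MSE = {mse:.4f}")   # c = 1/(n+2) gives the smaller MSE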
2.12 In Example 2.13, prove that the estimator aY + b is inadmissible when a > 1/(r + 1). [Hint: Problems 2.4–2.6.]
2.11 Suppose X has distribution Fξ and Y has distribution Gη, where ξ and η vary independently. If it is known that η = η₀, then any estimator δ(X, Y) can be improved upon by δ∗(x) = EY
2.10 For the situation of Example 2.10, show that: (a) max over θ ∈ [−m, m] of R(θ, aX̄ + b) = max{R(−m, aX̄ + b), R(m, aX̄ + b)}. (b) The estimator a∗X̄, with a∗ = m²/(1/n + m²), is the linear
2.9 For the situation of Example 2.9, show that: (a) without loss of generality, the restriction θ ∈ [a, b] can be reduced to θ ∈ [−m, m], m > 0. (b) If Λ is the prior distribution that puts
2.8 A density function f(x|θ) is variation reducing of order n + 1 (VRₙ₊₁) if, for any function g(x) with k (k ≤ n) sign changes (ignoring zeros), the expectation Eθ g(X) = ∫ g(x)f(x|θ) dx has at
2.7 Brown (1986a) points out a connection between the information inequality and the unbiased estimator of the risk of Stein-type estimators. (a) Show that (2.7) implies R(θ, δ) ≥ [1 + b′(θ)]²/n +
2.6 Show that if varθ(X)/[Eθ(X)]² > λ > 0, an estimator [1/(1 + λ) + ε]X + b is inadmissible (with squared error loss) under each of the following conditions: (a) if Eθ(X) > 0 for all θ, b > 0 and ε >
2.5 Show that an estimator [1/(1 + λ) + ε]X of Eθ(X) is inadmissible (with squared error loss) under each of the following conditions: (a) if varθ(X)/[Eθ(X)]² > λ > 0 and ε > 0, (b) if varθ
2.4 Show that an estimator aX + b (0 ≤ a ≤ 1) of Eθ(X) is inadmissible (with squared error loss) under each of the following conditions: (a) if Eθ(X) ≥ 0 for all θ, and b < 0;
2.3 Prove part (d) in the second proof of Example 2.8, that there exists a sequence of values θi → −∞ with b(θi) → 0 .
2.2 Determine the Bayes risk of the estimator (2.4) when θ has the prior distribution N(µ, τ 2).
2.1 Lemma 2.1 has been extended by Berger (1990a) to include the case where the estimand need not be restricted to a finite interval, but, instead, attains a maximum or minimum at a finite parameter
1.33 (Efron and Morris 1971) (a) Show that the estimator δ of (1.50) is the estimator that minimizes |δ − cx̄| subject to the constraint |δ − x̄| ≤ M. In this sense, it is the estimator
1.32 (a) If R(p, δ) is given by (1.49), show that sup R(p, δ) · 4(1 + √n)² → 1 as n → ∞. (b) Determine the smallest value of n for which the Bayes estimator of Example 1.18 satisfies (1.48) for
1.31 Show that var(Ȳ) given by (3.7.6) takes on its maximum value subject to (1.41) when all the a's are 0 or 1.
1.30 Show that for fixed X and n, (1.43) → (1.11) as N → ∞.
1.29 Show that the estimator defined by (1.43) (a) has constant risk, (b) is Bayes with respect to the prior distribution specified by (1.44) and (1.45).
1.28 For the random variable X whose distribution is (1.42), show that x must satisfy the inequalities stated below (1.42).
1.27 In the linear model (3.4.4), show that Σaᵢξ̂ᵢ (in the notation of Theorem 3.4.4) is minimax for estimating θ = Σaᵢξᵢ with squared error loss, under the restriction σ² ≤ M. [Hint: Treat the
1.26 Let X1,...,Xm and Y1,...,Yn be independently distributed as N(ξ, σ²) and N(η, τ²), respectively, and consider the problem of estimating W = η − ξ with squared error loss. (a) If σ and τ
1.25 Let Xi (i = 1,...,n) be iid with unknown distribution F. Show that δ = [(No. of Xi ≤ 0)/√n] · [1/(1 + √n)] + 1/[2(1 + √n)] is minimax for estimating F(0) = P(Xi ≤ 0) with squared error loss.
1.24 Let Xi (i = 1,...,n) and Yj (j = 1,...,n) be independent with distributions F and G, respectively. If F(1) − F(0) = G(1) − G(0) = 1 but F and G are otherwise unknown, find a minimax
1.23 In Example 1.16(b), show that for any k > 0, the estimator δ = [√n/(1 + √n)] · (1/n) Σᵢ Xᵢᵏ + 1/[2(1 + √n)] is a Bayes estimator for the prior distribution over F₀ for which (1.36) was shown to be
1.22 (a) Verify (1.37). (b) Show that equality holds in (1.39) if and only if P(Xi = 0) + P(Xi = 1) = 1.
1.21 In Example 1.14, show that X̄ is minimax for the loss function (d − θ)²/σ² without any restrictions on σ.
1.20 (a) In Example 1.9, determine the region in the (p1, p2) unit square in which (1.22) is better than the UMVU estimator of p2 − p1 for m = n = 2, 8, 18, and 32. (b) Extend Problems 1.11 and 1.12
1.19 Show that the risk function of (1.22) depends on p1 and p2 only through p1 + p2 and takes on its maximum when p1 + p2 = 1.
1.18 In Example 1.9, show that no linear estimator has constant risk.
1.17 Let r be given by (1.3). If r = ∞ for some Λ, show that any estimator δ has unbounded risk.
1.16 Show that the problem of Example 1.8 remains invariant under the transformations X′ = n − X, p′ = 1 − p, d′ = 1 − d. This illustrates that randomized equivariant estimators may have to be
1.15 Let X = 1 or 0 with probabilities p and q, respectively, and consider the estimation of p with loss = 1 when |d − p| ≥ 1/4, and 0 otherwise. The most general randomized estimator is δ = U
1.14 Evaluate (1.16) and show that its maximum is 1 − α.
1.13 (a) Find two points 0 < p0 < p1 < 1 such that the estimator (1.11) for n = 1 is Bayes with respect to a distribution for which P(p = p0) + P(p = p1) = 1. (b) For n = 1, show that (1.11) is a
1.12 In Example 1.7, graph the risk functions of X/n and the minimax estimator (1.11) for n = 1, 4, 9, 16, and indicate the relative positions of the two graphs for large values of n.
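A sketch of this comparison, assuming (1.11) is the usual minimax estimator δ(X) = (X + √n/2)/(n + √n) for X ∼ b(p, n) under squared error loss (the equation itself is not reproduced in this preview): the risk of X/n is p(1 − p)/n, while δ has constant risk 1/[4(1 + √n)²], and the region where X/n does worse shrinks toward p = 1/2 as n grows.

# Risk of X/n versus the (assumed) minimax estimator (X + sqrt(n)/2)/(n + sqrt(n)).
import numpy as np

p = np.linspace(0.0, 1.0, 201)
for n in (1, 4, 9, 16):
    risk_mle = p * (1 - p) / n                           # risk of X/n
    risk_minimax = 1.0 / (4.0 * (1.0 + np.sqrt(n))**2)   # constant risk of the minimax estimator
    worse = p[risk_mle > risk_minimax]                    # where X/n is the worse of the two
    print(f"n={n:2d}  constant risk={risk_minimax:.4f}  "
          f"X/n worse on [{worse.min():.2f}, {worse.max():.2f}]")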
1.11 In Example 1.7, (a) determine cn and show that cn → 0 as n → ∞, (b) show that Rn(1/2)/rn → 1 as n → ∞.
1.10 Find the bias of the minimax estimator (1.11) and discuss its direction.
1.9 In Example 1.7, let δ∗(X) = X/n with probability 1 − ε and = 1/2 with probability ε. Determine the risk function of δ∗ and show that for ε = 1/(n + 1), its risk is constant and less
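A sketch of the risk computation for this problem, assuming Example 1.7 is the binomial problem X ∼ b(p, n) with squared error loss: R(p, δ∗) = (1 − ε) p(1 − p)/n + ε (1/2 − p)². With ε = 1/(n + 1) this becomes [p(1 − p) + (1/2 − p)²]/(n + 1) = 1/[4(n + 1)], which is free of p and smaller than the maximum risk 1/(4n) of X/n.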
1.8 To see why elimination of semirelevant sets is too strong a requirement, consider the estimation of θ based on observing X ∼ f(x − θ). Show that for any constant a, the Pitman estimator X
1.7 Show that if there exists a set A ∈ X and an ε > 0 for which Eθ{[T(X) − τ(θ)] I(X ∈ A)} > ε, then T(x) is inadmissible for estimating τ(θ) under squared error loss. (A set A
1.6 Suppose that X ∼ f (x|θ), and T (x) is used to estimate τ (θ). One might question the worth of T (x) if there were some set A ∈ X for which T (x) > τ (θ) for x ∈ A (or if the reverse
1.5 Establishing the fact that (9.1) holds, so S² is conditionally biased, is based on a number of steps, some of which can be involved. Define φ(a, µ, σ²) = (1/σ²) Eµ,σ²[S² | |x̄|/s
1.4 (a) For the random effects model of Example 4.2.7 (see also Example 3.5.1), show that the restricted maximum likelihood (REML) likelihood of σ²_A and σ² is given by (4.2.13), which can be
1.3 Classes of priors for Γ-minimax estimation have often been specified using moment restrictions. (a) For X ∼ b(p, n), find the Γ-minimax estimator of p under squared error loss, with Γµ = {π(p):
1.2 The principle of gamma-minimaxity [first used by Hodges and Lehmann (1952); see also Robbins (1964) and Solomon (1972a, 1972b)] is a Bayes/frequentist synthesis. An estimator δ∗ is gamma-minimax
1.1 For the situation of Example 1.2: (a) Plot the risk functions of δ1/4, δ1/2, and δ3/4 for n = 5, 10, 25. (b) For each value of n in part (a), find the range of prior values of p for which each
… (An − (n − 1)/2)/√((n + 1)/12) has an approximate standard normal distribution [195]. (Hints: The distribution of An is the same as the distribution of Dn = n − 1 − An, the number of descents. Now let Vj = (U1 +