Questions and Answers of Theory of Probability
4.11 (a) In Example 4.11, show that $\sum (X_{ijk} - \mu - \alpha_i - \beta_j - \gamma_{ij})^2 = S^2 + S^2_\mu + S^2_\alpha + S^2_\beta + S^2_\gamma$, where $S^2 = \sum (X_{ijk} - X_{ij\cdot})^2$, $S^2_\mu = IJm(X_{\cdot\cdot\cdot} - \mu)^2$, $S^2_\alpha = J$ …
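The listing truncates the decomposition after the start of $S^2_\alpha$. Under the usual two-way-layout definitions (an assumption here, not a quote from the text), $S^2_\alpha = Jm\sum_i (X_{i\cdot\cdot} - X_{\cdot\cdot\cdot} - \alpha_i)^2$, $S^2_\beta = Im\sum_j (X_{\cdot j\cdot} - X_{\cdot\cdot\cdot} - \beta_j)^2$, and $S^2_\gamma = m\sum_{i,j} (X_{ij\cdot} - X_{i\cdot\cdot} - X_{\cdot j\cdot} + X_{\cdot\cdot\cdot} - \gamma_{ij})^2$. A minimal numeric sketch of the identity, assuming the standard side conditions $\sum_i \alpha_i = \sum_j \beta_j = \sum_i \gamma_{ij} = \sum_j \gamma_{ij} = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, m = 3, 4, 5
X = rng.normal(size=(I, J, m))

# Arbitrary parameters obeying the usual side conditions.
mu = 0.7
alpha = rng.normal(size=I); alpha -= alpha.mean()
beta = rng.normal(size=J); beta -= beta.mean()
gamma = rng.normal(size=(I, J))
gamma -= gamma.mean(axis=1, keepdims=True)
gamma -= gamma.mean(axis=0, keepdims=True)  # double-centered: rows and columns sum to 0

Xij = X.mean(axis=2)        # cell means X_{ij.}
Xi = X.mean(axis=(1, 2))    # X_{i..}
Xj = X.mean(axis=(0, 2))    # X_{.j.}
Xbar = X.mean()             # X_{...}

lhs = ((X - (mu + alpha[:, None, None] + beta[None, :, None]
             + gamma[:, :, None])) ** 2).sum()
S2 = ((X - Xij[:, :, None]) ** 2).sum()
S2mu = I * J * m * (Xbar - mu) ** 2
S2a = J * m * ((Xi - Xbar - alpha) ** 2).sum()
S2b = I * m * ((Xj - Xbar - beta) ** 2).sum()
S2g = m * ((Xij - Xi[:, None] - Xj[None, :] + Xbar - gamma) ** 2).sum()

print(lhs, S2 + S2mu + S2a + S2b + S2g)  # the two sides agree to rounding error
```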
4.10 In the model defined by (4.26) and (4.27), determine the UMVU estimators of $\alpha_i$, $\beta_j$, and $\sigma^2$ under the assumption that the $\gamma_{ij}$ are known to be zero.
4.9 The coefficient vectors of the $X_{ijk}$ given by (4.32) for $\hat\mu$, $\hat\alpha_i$, and $\hat\beta_j$ are orthogonal to the coefficient vectors for the $\hat\gamma_{ij}$ given by (4.33).
4.8 In Example 4.9, find the UMVU estimator of $\mu$ when the $\alpha_i$ are known to be zero and compare it with $\hat\mu$.
4.7 (a) In Example 4.9, show that the vectors of the coefficients in the $\hat\alpha_i$ are not orthogonal to the vector of the coefficients of $\hat\mu$. (b) Show that the conclusion of (a) is reversed if $\hat\alpha_i$ and …
4.6 Let $X_{ij}$ be independent $N(\xi_{ij}, \sigma^2)$ with $\xi_{ij} = \alpha_i + \beta t_{ij}$. Find the UMVU estimators of the $\alpha_i$ and $\beta$.
4.5 In Example 4.2, find the UMVU estimators of $\alpha$, $\beta$, $\gamma$, and $\sigma^2$ when $\sum t_i = 0$ and $\sum t_i^2 = 1$.
4.4 (a) In Example 4.7, determine $\hat\alpha$, $\hat\beta$, and hence $\hat\xi_i$ by minimizing $\sum (X_i - \alpha - \beta t_i)^2$. (b) Verify the expressions (4.12) for $\alpha$ and $\beta$, and the corresponding expressions for $\hat\alpha$ and $\hat\beta$.
4.3 Use Problem 3.10 to prove (iii) of Theorem 4.3.
4.2 Write out explicit expressions for the transformations (4.10) when $\Omega$ is given by (a) $\xi_i = \alpha + \beta t_i$ and (b) $\xi_i = \alpha + \beta t_i + \gamma t_i^2$.
4.1 (a) Suppose $X_i \sim N(\xi_i, \sigma^2)$ with $\xi_i = \alpha + \beta t_i$. If the first column of the matrix $C$ leading to the canonical form (4.7) is $(1/\sqrt{n}, \ldots, 1/\sqrt{n})$, find the second column of $C$. (b) If $X_i \sim$ …
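For 4.1(a), a natural candidate for the second column is the centered and normalized regression vector $(t_i - \bar t)\big/\sqrt{\sum_j (t_j - \bar t)^2}$, offered here as a conjecture to check rather than as the book's solution. A minimal sketch verifying that it is orthonormal to the given first column:

```python
import numpy as np

t = np.array([1.0, 2.0, 3.0, 5.0])  # arbitrary regression constants t_i
n = len(t)

c1 = np.ones(n) / np.sqrt(n)        # given first column (1/sqrt(n), ..., 1/sqrt(n))
c2 = (t - t.mean()) / np.sqrt(((t - t.mean()) ** 2).sum())  # candidate second column

print(c1 @ c1, c2 @ c2, c1 @ c2)    # -> 1.0, 1.0, 0.0: an orthonormal pair
```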
3.42 Suppose we let $X_1, \ldots, X_n$ be a sample from an exponential distribution $f(x \mid \mu, \sigma) = (1/\sigma) e^{-(x - \mu)/\sigma} I(x \ge \mu)$. The exponential distribution is useful in reliability theory, and a …
3.41 Let $(X_i, Y_i)$, $i = 1, \ldots, n$, be distributed as independent bivariate normal random variables with mean $(\mu, 0)$ and covariance matrix $\begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{pmatrix}$. (a) Show that the probability model is …
3.40 Let $f(t) = \frac{1}{\pi}\,\frac{1}{1 + t^2}$ be the Cauchy density, and consider the location-scale family $\mathcal{F} = \{\frac{1}{\sigma} f(\frac{x - \mu}{\sigma}),\ -\infty$ …
3.39 Suppose in Problem 3.37 that an MRE estimator $\delta^*$ of $W = \eta - \xi$ under the transformations $X_i' = a + bX_i$ and $Y_j' = a + bY_j$, $b > 0$, exists when the ratio $\tau/\sigma = c$ is known and that $\delta^*$ is …
3.38 In the model of Problem 3.37 with $\tau = \sigma$, discuss the equivariant estimation of $W = \eta - \xi$ with loss function $(d - W)^2/\sigma^2$ and obtain explicit results for the three distributions of that …
3.37 Obtain the MRE estimator of $\theta = (\tau/\sigma)^r$ with the loss function of Problem 3.35 when the density of Problem 3.36 specializes to $\frac{1}{\sigma^m \tau^n} \prod_i f\left(\frac{x_i - \xi}{\sigma}\right) \prod_j f\left(\frac{y_j - \eta}{\tau}\right)$ and $f$ is (a) normal, …
3.36 Generalize the results of Problem 3.34 to the case that the joint density of $X$ and $Y$ is $\frac{1}{\sigma^m \tau^n} f\left(\frac{x_1 - \xi}{\sigma}, \ldots, \frac{x_m - \xi}{\sigma};\ \frac{y_1 - \eta}{\tau}, \ldots, \frac{y_n - \eta}{\tau}\right)$.
3.35 Under the assumptions of the preceding problem and with loss function $(d - \theta)^2/\theta^2$, determine the MRE estimator of $\theta$ in the following situations: (a) $m = n = 1$ and $X$ and $Y$ are independently …
3.34 Let $X_1, \ldots, X_m$ and $Y_1, \ldots, Y_n$ have joint density $\frac{1}{\sigma^m \tau^n} f\left(\frac{x_1}{\sigma}, \ldots, \frac{x_m}{\sigma};\ \frac{y_1}{\tau}, \ldots, \frac{y_n}{\tau}\right)$, and consider the problem of estimating $\theta = (\tau/\sigma)^r$ with loss function $L(\sigma, \tau; d) = \gamma(d/\theta)$. This …
3.33 For the estimation of $\Sigma$ in Note 9.3: (a) Show that the loss function in (9.2) is invariant. (b) Show that Stein's loss $L(\delta, \Sigma) = \operatorname{tr}(\delta \Sigma^{-1}) - \log|\delta \Sigma^{-1}| - p$, where $|A|$ is the …
3.32 For the situation of Note 9.3: (a) Show that equivariant estimators of $\Sigma$ are of the form $cS$, where $S$ is the cross-products matrix and $c$ is a constant. (b) Show that $E_I\{\operatorname{tr}[(cS - I)(cS - I)]\}$ …
3.31 For $X_1, \ldots, X_n$ iid as $N_p(\mu, \Sigma)$, the cross-products matrix $S$ is defined by $S = \{S_{ij}\} = \left\{ \sum_{k=1}^n (x_{ik} - \bar x_i)(x_{jk} - \bar x_j) \right\}$, where $\bar x_i = (1/n) \sum_{k=1}^n x_{ik}$. Show that, for $\Sigma = I$, (a) $E_I[\operatorname{tr} S] =$ …
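The display in 3.31(a) is cut off in this listing. For $\Sigma = I$ each diagonal entry of $S$ is a centered sum of squares with expectation $n - 1$, so the natural reading is $E_I[\operatorname{tr} S] = p(n - 1)$; that reading (an inference, not a quote) is easy to check by simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, reps = 3, 10, 20000

tr = np.empty(reps)
for r in range(reps):
    X = rng.normal(size=(p, n))              # n observations from N_p(0, I), one per column
    Xc = X - X.mean(axis=1, keepdims=True)   # center each coordinate
    tr[r] = np.trace(Xc @ Xc.T)              # trace of the cross-products matrix S

print(tr.mean(), p * (n - 1))                # Monte Carlo mean vs. p(n-1) = 27
```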
3.30 For the situation of Note 9.3, consider the equivariant estimation of $\mu$. (a) Show that an invariant loss is of the form $L(\mu, \Sigma, \delta) = L((\mu - \delta)' \Sigma^{-1} (\mu - \delta))$. (b) The equivariant …
3.29 In (9.1), show that the group $X' = AX + b$ induces the group $\mu' = A\mu + b$, $\Sigma' = A \Sigma A'$.
3.28 Lele (1993) uses invariance in the study of morphometrics, the quantitative analysis of biological forms. In the analysis of a biological object, one measures data $X$ on $k$ specific points called …
3.27 Determine the bias of the estimator $\delta^*(X)$ of Example 3.18.
3.26 Show that $\delta$ satisfies (3.35) if and only if it satisfies (3.40) and (3.41).
3.24 For the situation of Example 3.13: (a) Show that an estimator is equivariant if and only if it can be written in the form $\varphi(\bar x/s)\, s^2$. (b) Show that the risk of an equivariant estimator is a …
3.23 If $G$ is a group, a subset $G_0$ of $G$ is a subgroup of $G$ if $G_0$ is a group under the group operation of $G$. (a) Show that the scale group (3.32) is a subgroup of the location-scale group (3.24). (b) Show …
3.22 Verify the estimator $\delta^*$ of Example 3.12.
3.21 (a) If $\delta_0$ satisfies (3.7) and $c\delta_0$ satisfies (3.22), show that $c\delta_0$ cannot be unbiased in the sense of satisfying $E(c\delta_0) \equiv \tau^r$. (b) Prove the statement made in Example 3.10.
3.20 Let $X_1, \ldots, X_n$ be iid from the distribution $N(\theta, \theta^2)$. (a) Show that this probability model is closed under scale transformations.
3.19 (a) Show that the loss function $L_s$ of (3.20) is convex and invariant under scale transformations. (b) Prove Corollary 3.8. (c) Show that for the situation of Example 3.7, if the loss function is …
3.18 In the preceding problem, find $\operatorname{var}(X_1)$ and its MRE estimator for $n = 2, 3, 4$ when the loss function is (3.13) with $r = 2$.
3.17 Let $X_1, \ldots, X_n$ be iid, each with density $(2/\tau)[1 - (x/\tau)]$, $0$ …
3.16 Prove formula (3.19).
3.15 In the preceding problem, find the MRE estimator of $\operatorname{var}(X_1)$ when the loss function is (3.13) with $r = 2$.
3.14 Let $X_1, \ldots, X_n$ be iid according to the exponential distribution $E(0, \tau)$. Determine the MRE estimator of $\tau$ for the loss functions (a) (3.13) and (b) (3.15) with $r = 1$.
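As a numerical companion to 3.14(a): if loss (3.13) is the standardized squared error $(d - \tau^r)^2/\tau^{2r}$ (an assumption about the book's numbering, consistent with the neighboring problems), then for $r = 1$ the best multiple of $T = \sum X_i$ is $T/(n + 1)$, which slightly shrinks the unbiased $T/n$. A Monte Carlo sketch of the risk comparison:

```python
import numpy as np

rng = np.random.default_rng(2)
n, tau, reps = 5, 2.0, 200000

X = rng.exponential(scale=tau, size=(reps, n))
T = X.sum(axis=1)

for label, d in [("T/n (unbiased)", T / n), ("T/(n+1)", T / (n + 1))]:
    risk = ((d - tau) ** 2 / tau ** 2).mean()   # standardized squared-error risk
    print(label, risk)
# Theoretical risks: 1/n = 0.2 for T/n vs. 1/(n+1) ~ 0.167 for T/(n+1).
```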
3.13 In Example 3.7, find the MRE estimator of $\operatorname{var}(X_1)$ when the loss function is (a) (3.13) and (b) (3.15) with $r = 2$.
3.12 Show that the MRE estimators of Problem 3.11, parts (b) and (c), are risk-unbiased, but not mean-unbiased.
3.11 Let $X_1, \ldots, X_n$ be iid according to the uniform distribution $U(0, \theta)$. (a) Show that the complete sufficient statistic $X_{(n)}$ is independent of $Z$ [given by Equation (3.8)]. (b) For the loss function …
3.10 Under the assumptions of Theorem 3.3: (a) Show that the MRE estimator under the loss (3.13) is given by (3.14). (b) Show that the MRE estimator under the loss (3.15) is given by (3.11), where …
3.9 Determine the scale median of $X$ when the distribution of $X$ is (a) $U(0, \theta)$ and (b) $E(0, b)$.
3.8 Under the assumptions of Problem 3.7(a), the set of scale medians of $X$ is an interval. If $f(x) > 0$ for all $x > 0$, the scale median of $X$ is unique.
3.7 Let $X$ be a positive random variable. (a) If $EX < \infty$, then the value of $c$ that minimizes $E|X/c - 1|$ is a solution of $E[X I(X \le c)] = E[X I(X \ge c)]$, which is known as a scale median. (b) Let $Y$ have …
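Concretely, for $X \sim U(0, 1)$ the balance equation in 3.7(a) reads $c^2/2 = (1 - c^2)/2$, giving scale median $c = 1/\sqrt{2} \approx 0.707$ (a worked instance of 3.7(a) and 3.9(a), not taken from the text). A minimal simulation confirming both the minimization and the balance condition:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=500000)

# Minimize the empirical risk E|X/c - 1| over a grid of candidate c values.
cs = np.linspace(0.05, 1.0, 951)
risks = [np.abs(X / c - 1).mean() for c in cs]
c_hat = cs[int(np.argmin(risks))]

c = 1 / np.sqrt(2)
print(c_hat, c)                                       # both near 0.707
print((X * (X <= c)).mean(), (X * (X >= c)).mean())   # both near 0.25: the balance equation
```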
3.6 Let $X$ be a positive random variable. Show that: …
3.5 The function $\rho$ of Corollary 3.4 with $\gamma$ defined in Example 3.5 is strictly convex for $p \ge 1$.
3.4 A necessary and sufficient condition for $\delta$ to satisfy (3.7) is that it is of the form $\delta = \delta_0/u$ with $\delta_0$ and $u$ satisfying (3.7) and (3.9), respectively.
3.3 Show that the bias of any equivariant estimator of $\tau^r$ in (3.1) is proportional to $\tau^r$.
3.2 Show that if $\delta(X)$ is scale invariant, so is $\delta^*(X)$ defined to be $\delta(X)$ if $\delta(X) \ge 0$ and $= 0$ otherwise, and that the risk of $\delta^*$ is no larger than that of $\delta$ for any loss function (3.5) for which …
3.1 (a) A loss function $L$ satisfies (3.4) if and only if it satisfies (3.5) for some $\gamma$. (b) The sample standard deviation, the mean deviation, the range, and the MLE of $\tau$ all satisfy (3.7) with $r =$ …
2.28 Show that the transformation of Example 2.11 and the identity transformation are the only transformations leaving the family of binomial distributions invariant.
2.27 Suppose that the variables $X_{ij}$ in Problem 2.23 are independently distributed as $N(\xi_i, \sigma^2)$ with $\sigma$ known. Show that: (a) The MRE estimator of $\theta$ is then $\sum c_i \bar X_i - v^*$, where $\bar X_i = (X_{i1} +$ …
2.26 (a) Generalize Theorem 1.10 and Corollary 1.12 to the situation of Problems 2.23 and 2.25. (b) Show that the MRE estimators of (a) can be chosen to be independent of W.
2.25 If $\delta_0$ is any equivariant estimator of $\theta$ in Problem 2.23, and if $y_i = (x_{i1} - x_{in_i},\ x_{i2} - x_{in_i},\ \ldots,\ x_{i,n_i - 1} - x_{in_i})$, show that the most general equivariant estimator of $\theta$ is of the …
2.24 Generalize Theorem 1.4 to the situation of Problem 2.23.
2.23 Let $X_{ij}$, $j = 1, \ldots, n_i$, $i = 1, \ldots, s$, and $W$ be distributed according to a density of the form $\prod_{i=1}^s f_i(x_i - \xi_i)\, h(w)$, where $x_i - \xi_i = (x_{i1} - \xi_i, \ldots, x_{in_i} - \xi_i)$, and consider the problem of …
2.22 If $\delta(X)$ is MRE for estimating $\xi$ in Example 2.2(i) with loss function $\rho(d - \xi)$, state an optimum property of $e^{\delta(X)}$ as an estimator of $e^{\xi}$.
2.21 Let $\theta$ be real-valued and $h$ strictly increasing, so that (2.11) is vacuously satisfied. If $L(\theta, d)$ is the loss resulting from estimating $\theta$ by $d$, suppose that the loss resulting from estimating …
2.20 In Example 2.14, determine the totality of equivariant estimators of $W$ under the smallest group $G$ containing $G_1$ and $G_2$.
2.19 Show that: (a) In Example 2.14(i), $X$ is not risk-unbiased. (b) The group of transformations $ax + c$ of the real line ($0$ …
2.18 If $\delta(X)$ is an equivariant estimator of $h(\theta)$ under a group $G$, then so is $g^* \delta(X)$ with $g^*$ defined by (2.12) and (2.13), provided $G^*$ is commutative.
2.17 (a) In Example 2.12, determine the smallest group $G$ containing both $G_1$ and $G_2$. (b) Show that the only estimator that is invariant under $G$ is $\delta(X, Y) \equiv 0$.
2.16 (a) If $g$ is the transformation (2.20), determine $\bar g$. (b) In Example 2.12, show that (2.22) is not only sufficient for (2.14) but also necessary.
2.15 Prove Corollary 2.13.
2.14 For the situation of Example 2.12: (a) Show that the class of transformations is a group. (b) Show that estimators of the form $\varphi(\bar x/s)\, s^2$, where $\bar x = (1/n) \sum x_i$ and $s^2 = \sum (x_i - \bar x)^2$, are …
2.13 For the situation of Example 2.11: (a) Show that the class of transformations is a group. (b) Show that equivariant estimators must satisfy $\delta(n - x) = 1 - \delta(x)$. (c) Show that, using an …
2.12 In an invariant estimation problem, write $X = (T, W)$, where $T$ is sufficient for $\theta$ and $W$ is ancillary. If the group of transformations is transitive, show: (a) The best equivariant estimator $\delta^*$ …
2.11 In an invariant probability model, write $X = (T, W)$, where $T$ is sufficient for $\theta$ and $W$ is ancillary. (a) If the group operation is transitive, show that any invariant statistic must be …
2.10 To illustrate the difference between functional equivariance and formal invariance, consider the following. To estimate the amount of electric power obtainable from a stream, one could use the …
2.9 If $\theta$ is the true temperature in degrees Celsius, then $\theta' = \bar g \theta = \theta + 273$ is the true temperature in degrees Kelvin. Given an observation $X$ in degrees Celsius: (a) Show that an estimator …
2.8 Show that: (a) If (2.11) holds, the transformations $g^*$ defined by (2.12) are 1:1 from $H$ onto itself. (b) If $L(\theta, d) = L(\theta, d')$ for all $\theta$ implies $d = d'$, then $g^*$ defined by (2.14) is unique, …
2.7 Let $X$ be distributed as $N(\xi, \sigma^2)$, $-\infty$ …
2.6 (a) The transformations $g^*$ defined by (2.12) satisfy $(g_2 g_1)^* = g_2^* \cdot g_1^*$ and $(g^*)^{-1} = (g^{-1})^*$. (b) If $G$ is a group leaving (2.1) invariant and $G^* = \{g^* : g \in G\}$, then $G^*$ is a …
2.5 Show that a loss function satisfies (2.9) if and only if it is of the form (2.10).
2.4 Under the assumptions of Problem 2.3, show that (a) the transformations $\bar g$ satisfy $\overline{g_2 g_1} = \bar g_2 \cdot \bar g_1$ and $(\bar g)^{-1} = \overline{(g^{-1})}$; (b) the transformations $\bar g$ corresponding to $g \in G$ form a …
2.3 Let $\{gX,\ g \in G\}$ be a group of transformations that leave the model (2.1) invariant. If the distributions $P_\theta$, $\theta \in \Omega$, are distinct, show that the induced transformations $\bar g$ are 1:1 …
2.2 In Example 2.2(ii), show that the transformations $x' = -x$ together with the identity transformation form a group.
2.1 Show that the class $G(C)$ is a group.
1.22 Let $\delta_0$ be location equivariant and let $U$ be the class of all functions $u$ satisfying (1.20) and such that $u(X)$ is an unbiased estimator of zero. Then, $\delta_0$ is MRE if and only if $\operatorname{cov}[\delta_0, u(X)] = 0$ …
1.21 Under the assumptions of Theorem 1.10, if there exists an equivariant estimator $\delta_0$ of $\xi$ with finite expected squared error, show that (a) $E_0(|X_n| \mid Y) < \infty$ with probability 1; (b) the set $B =$ …
1.20 For any density $f$ of $X = (X_1, \ldots, X_n)$, the probability of the set $A = \{x : 0 < \int_{-\infty}^{\infty} f(x - u)\, du < \infty\}$ is 1. [Hint: With probability 1, the integral in question is equal to the marginal …
1.19 Suppose the $X$'s and $Y$'s are distributed as in Problem 1.17 but with $m \neq n$. Determine the MRE estimator of $W$ when the loss is squared error.
1.18 In Problem 1.13, suppose that $X$ and $Y$ are independent and that the loss function is squared error. If $\hat\xi$ and $\hat\eta$ are the MRE estimators of $\xi$ and $\eta$, respectively, the MRE estimator of $W$ is …
1.17 In Problem 1.13, suppose the $X$'s and $Y$'s are independently distributed as $E(\xi, 1)$ and $E(\eta, 1)$, respectively, and that $m = n$. Find conditions on $\rho$ under which the MRE estimator of $W$ is …
1.16 In Problem 1.13, suppose the $X$'s and $Y$'s are independently normally distributed with known variances $\sigma^2$ and $\tau^2$. Find conditions on $\rho$ under which the MRE estimator is $\bar Y - \bar X$.
1.15 In Problem 1.13, determine the totality of estimators satisfying the restriction when $m = n = 1$.
1.14 Under the assumptions of the preceding problem, prove the equivalents of Theorems 1.4–1.17 and Corollaries 1.11–1.14 for estimators satisfying the restriction.
1.13 Suppose $X_1, \ldots, X_m$ and $Y_1, \ldots, Y_n$ have joint density $f(x_1 - \xi, \ldots, x_m - \xi;\ y_1 - \eta, \ldots, y_n - \eta)$ and consider the problem of estimating $W = \eta - \xi$. Explain why it is desirable …
1.12 Show that an estimator $\delta(X)$ of $g(\theta)$ is risk-unbiased with respect to the loss function of Problem 1.10 if $F_\theta[g(\theta)] = B/(A + B)$, where $F_\theta$ is the cdf of $\delta(X)$ under $\theta$.
1.11 In Example 1.16, find the MRE estimator of $\xi$ when the loss function is given by Problem 1.10.
1.10 Consider the loss function $\rho(t) = -At$ if $t < 0$ and $\rho(t) = Bt$ if $t \ge 0$ $(A, B \ge 0)$. If $X$ is a random variable with density $f$ and distribution function $F$, show that $E\rho(X - v)$ is minimized for any $v$ …
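The statement of 1.10 is cut off here; read alongside Problem 1.12 above, the minimizing $v$ should satisfy $F(v) = B/(A + B)$, i.e., it is a $B/(A + B)$ quantile of $X$ (our reading of the truncated text, not a quote). A quick empirical check:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=400000)
A, B = 3.0, 1.0   # loss slopes: rho(t) = -A*t for t < 0, B*t for t >= 0

def risk(v):
    t = X - v
    return np.where(t < 0, -A * t, B * t).mean()   # empirical E rho(X - v)

vs = np.linspace(-2.0, 2.0, 801)
v_hat = vs[int(np.argmin([risk(v) for v in vs]))]

# Both should sit near the 0.25 quantile of N(0,1), about -0.674.
print(v_hat, np.quantile(X, B / (A + B)))
```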
1.9 Let $X_1, \ldots, X_n$ be distributed as in Example 1.19 and let the loss function be that of Example 1.15. Determine the totality of MRE estimators and show that the midrange is one of them.
1.8 Prove Corollary 1.14. [Hint: Show that (a) $\phi(v) = E_0 \rho(X - v) \to M$ as $v \to \pm\infty$ and (b) that $\phi$ is continuous; (b) follows from the fact (see TSH2, Appendix Section 2) that if $f_n$, $n = 1$, …
1.7 Let $X_i$ $(i = 1, 2, 3)$ be independently distributed with density $f(x_i - \xi)$ and let $\delta = X_1$ if $X_3 > 0$ and $\delta = X_2$ if $X_3 \le 0$. Show that the estimator $\delta$ of $\xi$ has constant risk for any invariant …
1.6 If T is a sufficient statistic for the family (1.9), show that the estimator (1.28) is a function of T only. [Hint: Use the factorization theorem.]
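For orientation, (1.28) should be the MRE (Pitman) estimator under squared error, $\delta(x) = \int u \prod_i f(x_i - u)\, du \big/ \int \prod_i f(x_i - u)\, du$; that identification is our assumption about the book's numbering. Written this way, the formula is easy to evaluate numerically, and for normal $f$ it reduces to $\bar x$, which gives a convenient sanity check:

```python
import numpy as np

def pitman(x, f, grid):
    """Pitman estimator by quadrature over a uniform grid of location shifts u."""
    like = np.array([np.prod(f(x - u)) for u in grid])  # joint density of the sample at shift u
    return (grid * like).sum() / like.sum()             # grid spacing cancels in the ratio

f = lambda t: np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)  # standard normal density

x = np.array([0.3, -1.2, 2.1, 0.5])
grid = np.linspace(-10.0, 10.0, 4001)
print(pitman(x, f, grid), x.mean())  # agree: for normal f the Pitman estimator is the sample mean
```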
1.5 For each of the three loss functions of Example 1.18, compare the risk of the MRE estimator to that of the UMVU estimator.
1.4 Under the assumptions of Example 1.18, show that (a) $E[X_{(1)}] = b/n$ and (b) $\operatorname{med}[X_{(1)}] = b \log 2 / n$.
1.3 If $X_1$ and $X_2$ are distributed according to (1.9) with $n = 2$ and $f$ satisfying the assumptions of Problem 1.2, and if $\rho$ is convex and even, then the MRE estimator of $\xi$ is $(X_1 + X_2)/2$.