Theory of Probability: Questions and Answers
2.16 In a sample of size $N = n + k + 1$, some of the observations are missing. Assume that $(X_i, Y_i)$, $i = 1,\dots,n$, are iid according to the bivariate normal distribution (2.16), and that $U_1,\dots,U_k$ and
2.15 If $(X_1, Y_1),\dots,(X_n, Y_n)$ are iid according to any bivariate distribution with finite second moments, show that $S_{XY}/(n-1)$ given by (2.17) is an unbiased estimator of $\operatorname{cov}(X_i, Y_i)$.
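A sketch of the standard computation behind this result, using only the finite-second-moment assumption (writing $\xi = E(X_i)$, $\eta = E(Y_i)$): since $\operatorname{cov}(\bar X, \bar Y) = \operatorname{cov}(X_1, Y_1)/n$ for iid pairs,
$$E\Bigl[\sum_{i=1}^n (X_i - \bar X)(Y_i - \bar Y)\Bigr] = E\Bigl[\sum X_i Y_i\Bigr] - n\,E[\bar X \bar Y] = n[\operatorname{cov}(X_1,Y_1) + \xi\eta] - [\operatorname{cov}(X_1,Y_1) + n\xi\eta] = (n-1)\operatorname{cov}(X_1,Y_1),$$
and dividing by $n - 1$ gives unbiasedness.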
2.14 For the model (2.15), find the UMVU estimator of $P(X_1 < Y_1)$ when (a) $\sigma = \tau$ and (b) when $\sigma$ and $\tau$ are arbitrary. [Hint: Use the conditional density (2.13) of $X_1$ given $\bar X, S_X^2$ and that of $Y_1$
2.13 Show that in the preceding problem with γ unknown,
2.12 Assuming (2.15) with $\eta = \xi$ and $\sigma^2/\tau^2 = \gamma$, show that when $\gamma$ is known: (a) $T$ defined in Example 2.3(iii) is a complete sufficient statistic; (b) $\delta_\gamma$ is UMVU for $\xi$.
2.11 Assuming (2.15) with $\sigma = \tau$, determine the UMVU estimators of $\sigma^2$ and $(\eta - \xi)/\sigma$.
2.10 Verify Equation (2.14), the density of $(X_1 - \bar X)/S$ in normal sampling. [The UMVU estimator in (2.13) is used by Kiefer (1977) as an example of his estimated confidence approach.]
2.9 In Example 2.2 with $n = 1$, the UMVU estimator of $p$ is the indicator of the event $X_1 \le u$, whether $\sigma$ is known or unknown.
2.8 Let $X_i$, $i = 1,\dots,n$, be independently distributed as $N(\alpha + \beta t_i, \sigma^2)$ where $\alpha$, $\beta$, and $\sigma^2$ are unknown, and the $t$'s are known constants that are not all equal. Find the UMVU estimators of $\alpha$
2.7 If $X$ is a single observation from $N(\xi, \sigma^2)$, show that no unbiased estimator $\delta$ of $\sigma^2$ exists when $\xi$ is unknown. [Hint: For fixed $\sigma = a$, $X$ is a complete sufficient statistic for $\xi$, and $E[\delta(X)]$
2.6 (a) Determine the variance of the estimator of Problem 2.5. (b) Find the UMVU estimator of the variance in part (a).
2.5 In Example 2.1, when both parameters are unknown, show that the UMVU estimator of $\xi^2$ is given by $\delta = \bar X^2 - \frac{S^2}{n(n-1)}$, where now $S^2 = \sum (X_i - \bar X)^2$.
2.4 Suppose, as in Example 2.1, that $X_1,\dots,X_n$ are iid as $N(\xi, \sigma^2)$, with one of the parameters known, and that the estimand is a polynomial in $\xi$ or $\sigma$. Then, the UMVU estimator is a polynomial in
2.3 In Example 2.1 with $\sigma$ known, let $\delta = \sum c_i X_i$ be any linear estimator of $\xi$. If $\delta$ is biased, show that its risk $E(\delta - \xi)^2$ is unbounded. [Hint: If $\sum c_i = 1 + k$, the risk is $\ge k^2 \xi^2$.]
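Filling in the hint: if $\sum c_i = 1 + k$ with $k \neq 0$, then $E\delta = (1+k)\xi$, so the bias is $k\xi$ and
$$E(\delta - \xi)^2 = \operatorname{var}(\delta) + (k\xi)^2 \ge k^2\xi^2 \to \infty \quad \text{as } |\xi| \to \infty,$$
so the risk of any biased linear estimator is unbounded over $\xi$.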
2.2 Solve the preceding problem when σ is unknown.
2.1 If $X_1,\dots,X_n$ are iid as $N(\xi, \sigma^2)$ with $\sigma^2$ known, find the UMVU estimator of (a) $\xi^2$, (b) $\xi^3$, and (c) $\xi^4$. [Hint: To evaluate the expectation of $\bar X^k$, write $\bar X = Y + \xi$, where $Y$ is $N(0, \sigma^2/n)$.]
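For orientation, part (a) follows directly from the hint: since $\bar X = Y + \xi$ with $Y \sim N(0, \sigma^2/n)$,
$$E[\bar X^2] = \xi^2 + \sigma^2/n, \qquad\text{so}\qquad \delta = \bar X^2 - \sigma^2/n$$
is unbiased; being a function of the complete sufficient statistic $\bar X$, it is UMVU. Parts (b) and (c) use $E[Y^3] = 0$ and $E[Y^4] = 3\sigma^4/n^2$ in the same way.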
1.22 Let $X$ take on the values 1 and 0 with probabilities $p$ and $q$, respectively, and assume that $1/4$
1.21 In $n$ Bernoulli trials, let $X_i = 1$ or $0$ as the $i$th trial is a success or failure, and let $T = \sum X_i$. Solve Problem 1.17 by Method 2, using the fact that an unbiased estimator of $p^3$ is $\delta = 1$ if $X_1$
1.20 Solve Problem 1.18(b) by Method 2, using the fact that an unbiased estimator of $e^{-\lambda}$ is $\delta = 1$ if $X_1 = 0$, and $\delta = 0$ otherwise.
1.19 Let $X_1,\dots,X_n$ be distributed as in Example 1.14. Use Method 1 to find the UMVU estimator of $\theta^k$ for any integer $k > -n$.
1.18 Let $X_1,\dots,X_n$ be iid according to the Poisson distribution $P(\lambda)$. Use Method 1 to find the UMVU estimator of (a) $\lambda^k$ for any positive integer $k$ and (b) $e^{-\lambda}$.
1.17 If $T$ has the binomial distribution $b(p, n)$ with $n > 3$, use Method 1 to find the UMVU estimator of $p^3$.
1.16 (a) If $X_1,\dots,X_n$ are iid (not necessarily normal) with $\operatorname{var}(X_i) = \sigma^2 < \infty$, show that $\delta = \sum (X_i - \bar X)^2/(n-1)$ is an unbiased estimator of $\sigma^2$. (b) If the $X_i$ take on the values 1 and 0 with
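A sketch of the usual identity behind part (a), writing $\xi = E(X_i)$:
$$\sum (X_i - \bar X)^2 = \sum (X_i - \xi)^2 - n(\bar X - \xi)^2,$$
so taking expectations gives $n\sigma^2 - n \cdot \sigma^2/n = (n-1)\sigma^2$, and dividing by $n - 1$ yields an unbiased estimator of $\sigma^2$.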
1.15 Suppose $X_1,\dots,X_n$ are iid Poisson($\lambda$). (a) Show that $\bar X$ is the UMVU estimator for $\lambda$. (b) For $S^2 = \sum_{i=1}^n (X_i - \bar X)^2/(n-1)$, we have that $E S^2 = E \bar X = \lambda$. To directly establish that $\operatorname{var} S^2 >$
1.14 Completeness of $T$ is not only sufficient but also necessary for every estimable $g(\theta)$ to have only one unbiased estimator that is a function of $T$.
1.13 If $\delta_1$ and $\delta_2$ are in $W$ and are UMVU estimators of $g_1(\theta)$ and $g_2(\theta)$, respectively, then $a_1\delta_1 + a_2\delta_2$ is also in $W$ and is UMVU for estimating $a_1 g_1(\theta) + a_2 g_2(\theta)$, for any real $a_1$ and $a_2$.
1.12 Show that if δ(X) is a UMVU estimator of g(θ), it is the unique UMVU estimator of g(θ). (Do not assume completeness, but rather use the covariance inequality and the conditions under which it
1.11 Use Theorem 1.7 to find UMVU estimators of some of the $\eta_\theta(d_i)$ in the dose-response model (1.6.16), with the restriction (1.6.17) (Messig and Strawderman 1993). Let the classes $W$ and $U$ be
1.10 If estimators are restricted to the class of linear estimators, characterization of best unbiased estimators is somewhat easier. Although the following is a consequence of Theorem 1.7, it should
1.9 In Example 1.9, (a) determine all unbiased estimators of zero; (b) show that no nonconstant estimator is UMVU.
1.8 If $\delta$ and $\delta'$ have finite variance, so does $\delta - \delta'$. [Hint: Problem 1.5.]
1.7 Suppose $X$ is distributed on $(0, 1)$ with probability density $p_\theta(x) = (1 - \theta) + \theta/(2\sqrt{x})$ for all $0 < x < 1$
1.6 An alternative proof of the Schwarz inequality is obtained by noting that $\int (f + \lambda g)^2\,dP = \int f^2\,dP + 2\lambda \int fg\,dP + \lambda^2 \int g^2\,dP \ge 0$ for all $\lambda$, so that this quadratic in $\lambda$ has at most one root.
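Spelling out the final step: a quadratic $A\lambda^2 + 2B\lambda + C \ge 0$ for all $\lambda$ (here $A = \int g^2\,dP$, $B = \int fg\,dP$, $C = \int f^2\,dP$) has at most one real root, so its discriminant satisfies $4B^2 - 4AC \le 0$, i.e.
$$\Bigl(\int fg\,dP\Bigr)^2 \le \int f^2\,dP \int g^2\,dP.$$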
1.5 (a) Any two random variables $X$ and $Y$ with finite second moments satisfy the covariance inequality $[\operatorname{cov}(X, Y)]^2 \le \operatorname{var}(X) \cdot \operatorname{var}(Y)$. (b) The inequality in part (a) is an equality if and only if
1.4 For a sample of size $n$, suppose that the estimator $T(x)$ of $\tau(\theta)$ has expectation $E[T(X)] = \tau(\theta) + \sum_{k=1}^{\infty} a_k/n^k$, where $a_k$ may depend on $\theta$ but not on $n$. (a) Show that the expectation of the
1.3 Let $X$ take on the values $-1, 0, 1, 2, 3$ with probabilities $P(X = -1) = 2pq$ and $P(X = k) = p^k q^{3-k}$ for $k = 0, 1, 2, 3$. (a) Check that this is a probability distribution. (b) Determine the LMVU
1.2 In Example 1.5, show that $a_i^*$ minimizes (1.6) for $i = 0, 1$, and simplify the expression for $a_0^*$. [Hint: $\sum \kappa p^{\kappa-1}$ and $\sum \kappa(\kappa - 1) p^{\kappa-2}$ are the first and second derivatives of $\sum p^\kappa = 1/q$.]
1.1 Verify (a) that (1.4) defines a probability distribution and (b) condition (1.5).
8.27 Prove Theorem 8.22. [Hint: Make a Taylor expansion as in the proof of Theorem 8.12 and use Problem 4.16.]
8.26 Let $(X_n, Y_n)$ have a bivariate normal distribution with means $E(X_n) = E(Y_n) = 0$, variances $E(X_n^2) = E(Y_n^2) = 1$, and with correlation coefficient $\rho_n$ tending to 1 as $n \to \infty$. (a) Show that
8.25 In generalization of the notation $o$ and $O$, let us say that $Y_n = o_p(1/k_n)$ if $k_n Y_n \to 0$ in probability and that $Y_n = O_p(1/k_n)$ if $k_n Y_n$ is bounded in probability. Show that the results of Problems
8.24 A sequence of random variables $Y_n$ is bounded in probability if, given any $\varepsilon > 0$, there exist $M$ and $n_0$ such that $P(|Y_n| > M) < \varepsilon$ for all $n > n_0$. Show that if $Y_n$ converges in law, then $Y_n$ is bounded in probability.
8.23 Suppose $k_n'/k_n \to \infty$. (a) If $R_n = O(1/k_n)$ and $R_n' = O(1/k_n')$, then $R_n + R_n' = O(1/k_n)$. (b) If $R_n = o(1/k_n)$ and $R_n' = o(1/k_n')$, then $R_n + R_n' = o(1/k_n)$.
8.22 (a) If $R_n$ and $R_n'$ are both $O(1/k_n)$, so is $R_n + R_n'$. (b) If $R_n$ and $R_n'$ are both $o(1/k_n)$, so is $R_n + R_n'$.
8.21 A sequence of numbers $R_n$ is said to be $o(1/k_n)$ as $n \to \infty$ if $k_n R_n \to 0$, and to be $O(1/k_n)$ if there exist $M$ and $n_0$ such that $|k_n R_n| < M$ for all $n > n_0$ or, equivalently, if $k_n R_n$ is bounded. (a) If
8.20 Prove Theorem 8.16. [Hint: Under the assumptions of the theorem we have the Taylor expansion $h(x_1,\dots,x_s) = h(\xi_1,\dots,\xi_s) + \sum_i (x_i - \xi_i)\bigl[\partial h/\partial \xi_i + R_i\bigr]$, where $R_i \to 0$ as $x_i \to \xi_i$.]
8.19 Serfling (1980, Section 3.1) remarks that the following variations of Theorem 8.12 can be established. Show that: (a) If $h$ is differentiable in a neighborhood of $\theta$, and $h'$ is continuous at $\theta$,
8.18 (a) The function $v(\cdot)$ is a variance stabilizing transformation if the estimator $v(T_n)$ has asymptotic variance $\tau^2(\theta)[v'(\theta)]^2 = c$, where $c$ is a constant independent of $\theta$.
8.17 Variance stabilizing transformations are transformations for which the resulting statistic has an asymptotic variance that is independent of the parameters of interest. For each of the following
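As a worked instance of the recipe (a sketch, not part of the problem statement): if $\sqrt{n}[T_n - \theta] \xrightarrow{L} N(0, \tau^2(\theta))$, the delta method gives $v(T_n)$ asymptotic variance $\tau^2(\theta)[v'(\theta)]^2$, so one solves $v'(\theta) = c/\tau(\theta)$. For the Poisson mean, $T_n = \bar X$ and $\tau^2(\theta) = \theta$, so
$$v(\theta) = \int \frac{c}{\sqrt{\theta}}\, d\theta = 2c\sqrt{\theta},$$
and $\sqrt{\bar X}$ (the case $c = 1/2$) has asymptotic variance $1/4$, free of $\theta$.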
8.16 If $T_n$ satisfies $\sqrt{n}[T_n - \theta] \xrightarrow{L} N(0, \tau^2)$, find the limiting distribution of (a) $T_n^2$, (b) $\log |T_n|$, (c) $1/T_n$, and (d) $e^{T_n}$ (suitably normalized).
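These follow from the delta method: if $h$ is differentiable at $\theta$ with $h'(\theta) \neq 0$, then $\sqrt{n}[h(T_n) - h(\theta)] \xrightarrow{L} N(0, \tau^2 [h'(\theta)]^2)$. For instance, in (c), $h(t) = 1/t$ gives $\sqrt{n}[1/T_n - 1/\theta] \xrightarrow{L} N(0, \tau^2/\theta^4)$ for $\theta \neq 0$; cases where $h'(\theta) = 0$, such as (a) at $\theta = 0$, need a second-order expansion and a different normalization.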
8.15 If $T_n > 0$ satisfies $\sqrt{n}[T_n - \theta] \xrightarrow{L} N(0, \tau^2)$, find the limiting distribution of (a) $\sqrt{T_n}$ and (b) $\log T_n$ (suitably normalized).
8.14 (a) In Example 8.7(i) and (ii), $Y_n \to 0$ in probability. Show that: (b) if $H_n$ denotes the distribution function of $Y_n$ in Example 8.7(i) and (ii), then $H_n(a) \to 0$ for all $a < 0$ and $H_n(a) \to 1$
8.13 Show that if $Y_n \to c$ in probability, then it tends in law to a random variable $Y$ which is equal to $c$ with probability 1.
8.12 Suppose that $k_n[\delta_n - g(\theta)]$ tends in law to a continuous limit distribution $H$. Prove that: (a) If $k_n'/k_n \to d \neq 0$ or $\infty$, then $k_n'[\delta_n - g(\theta)]$ also tends to a continuous limit
8.11 Suppose $X_1,\dots,X_n$ have a common mean $\xi$ and variance $\sigma^2$, and that $\operatorname{cov}(X_i, X_j) = \rho_{j-i}$. For estimating $\xi$, show that: (a) $\bar X$ is not consistent if $\rho_{j-i} = \rho \neq 0$ for all $i \neq j$; (b) $\bar X$ is
8.10 (a) In Example 8.5, find the value of $p_1$ for which $p_k$ becomes independent of $k$. (b) If $p_1$ has the value given in (a), then for any integers $i_1 < \cdots < i_r$ and $k$, the joint distribution of $X_{i_1}$
8.9 (a) In Example 8.5, find $\operatorname{cov}(X_i, X_j)$ for any $i \neq j$. (b) Verify (8.10).
8.8 (a) If $\delta_n$ is consistent for $\theta$, and $g$ is continuous, then $g(\delta_n)$ is consistent for $g(\theta)$. (b) Let $X_1,\dots,X_n$ be iid as $N(\theta, 1)$, and let $g(\theta) = 0$ if $\theta \neq 0$ and $g(0) = 1$. Find a consistent
8.7 If $\{a_n\}$ is a sequence of real numbers tending to $a$, and if $b_n = (a_1 + \cdots + a_n)/n$, then $b_n \to a$.
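A sketch of the standard argument: given $\varepsilon > 0$, choose $N$ with $|a_n - a| < \varepsilon$ for all $n > N$; then for $n > N$,
$$|b_n - a| \le \frac{1}{n}\sum_{i=1}^{N} |a_i - a| + \frac{n - N}{n}\,\varepsilon,$$
and the first term tends to 0 as $n \to \infty$, so $\limsup |b_n - a| \le \varepsilon$ for every $\varepsilon > 0$.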
8.6 Verify Equation (8.9).
8.5 Referring to Example 8.4, show that $c_n S_n^2 \xrightarrow{P} \sigma^2$ for any sequence of constants $c_n \to 1$. In particular, the MLE $\hat\sigma^2 = \frac{n-1}{n} S_n^2$ is a consistent estimator of $\sigma^2$.
8.4 (a) If $A_n$, $B_n$, and $Y_n$ tend in probability to $a$, $b$, and $y$, respectively, then $A_n + B_n Y_n$ tends in probability to $a + by$. (b) If $A_n$ takes on the constant value $a_n$ with probability 1 and $a_n \to a$, then
8.3 Suppose $\rho(x)$ is an even function, nondecreasing and non-negative for $x \ge 0$ and positive for $x > 0$. Then, $E\{\rho[\delta_n - g(\theta)]\} \to 0$ for all $\theta$ implies that $\delta_n$ is consistent for estimating $g(\theta)$.
8.2 To see that the converse of Theorem 8.2 does not hold, let $X_1,\dots,X_n$ be iid with $E(X_i) = \theta$, $\operatorname{var}(X_i) = \sigma^2 < \infty$, and let $\delta_n = \bar X$ with probability $1 - \epsilon_n$ and $\delta_n = A_n$ with probability $\epsilon_n$.
8.1 (a) Prove Chebychev's Inequality: For any random variable $X$ and non-negative function $g(\cdot)$, $P(g(X) \ge \varepsilon) \le \frac{1}{\varepsilon} E\,g(X)$ for every $\varepsilon > 0$. (In many statistical applications, it is useful to take
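The proof of part (a) is one line: since $g \ge 0$,
$$E\,g(X) \ge E\bigl[g(X)\,\mathbf{1}\{g(X) \ge \varepsilon\}\bigr] \ge \varepsilon\,P(g(X) \ge \varepsilon).$$
The familiar form follows by taking $g(x) = (x - EX)^2$ and $\varepsilon = k^2\sigma^2$, which gives $P(|X - EX| \ge k\sigma) \le 1/k^2$.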
7.27 Generalize Corollary 7.19 to the case where X and µ are vectors.
7.26 Let $\varphi$ be a strictly convex function defined over an interval $I$ (finite or infinite). If there exists a value $a_0$ in $I$ minimizing $\varphi(a)$, then $a_0$ is unique.
7.25 Let $\rho$ be a real-valued function satisfying $0 \le \rho(t) \le M < \infty$ and $\rho(t) \to M$ as $t \to \pm\infty$, and let $X$ be a random variable with a continuous probability density $f$. Then $\varphi(a) = E[\rho(X - a)]$
7.24 (a) Suppose that $f$ and $\rho$ satisfy the assumptions of Problem 7.23 and that $f$ is strictly decreasing on $[0, \infty)$. Then, if $\varphi(a_0) < \infty$ for some $a_0$, $\varphi(a)$ has a unique minimum at zero unless there
7.23 Let $f$ be a unimodal density symmetric about 0, and let $L(\theta, d) = \rho(d - \theta)$ be a loss function with $\rho$ nondecreasing on $(0, \infty)$ and symmetric about 0. (a) The function $\varphi(a) = E[\rho(X - a)]$
7.22 Prove the statements made in Example 7.20(i) and (ii).
7.21 Show that the loss functions (7.24) are continuously differentiable.
7.20 If $f$ and $g$ are real-valued functions such that $f^2$, $g^2$ are measurable with respect to the $\sigma$-finite measure $\mu$, prove the Schwarz inequality $\left(\int fg\,d\mu\right)^2 \le \int f^2\,d\mu \int g^2\,d\mu$. [Hint: Write $\int fg\,d\mu =$
7.19 Show that $\varphi(x, y) = -\sqrt{xy}$ is convex over $x > 0$, $y > 0$.
7.18 Show that if f is defined and bounded over (−∞,∞) or (0,∞), then f cannot be convex (unless it is constant).
7.17 Use the convexity of the function φ of Problem 7.13 to show that the natural parameter space of the exponential family (5.2) is convex.
7.16 (a) If $f : \mathbb{R}^p \to \mathbb{R}$ is superharmonic, then $\varphi(f(\cdot))$ is also superharmonic, where $\varphi : \mathbb{R} \to \mathbb{R}$ is a twice-differentiable increasing concave function. (b) If $h$ is superharmonic, then $h^*(x) =$
7.15 A function is lower semicontinuous at the point $y$ if $f(y) \le \liminf_{x \to y} f(x)$. The definition of superharmonic can be extended from continuous to lower semicontinuous functions. (a) Show that
7.14 Determine whether the following functions are super- or subharmonic: (a) $\sum_{i=1}^k x_i^p$, $p < 1$, $x_i > 0$.
7.13 (a) Show that $\varphi(x) = e^{\sum x_i}$ is convex by showing that its Hessian matrix is positive semidefinite. (b) Show that the result of Problem 7.4 remains valid if $\varphi$ is a convex function defined over an
7.12 Show that $f(a) = \sqrt{|x - a|} + \sqrt{|y - a|}$ is minimized by $a = x$ and $a = y$.
7.11 Show that the $k$-dimensional sphere $\sum_{i=1}^k x_i^2 \le c$ is convex.
7.10 Let $U$ be uniformly distributed on $(0, 1)$, and let $F$ be a distribution function on the real line. (a) If $F$ is continuous and strictly increasing, show that $F^{-1}(U)$ has distribution function $F$.
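Part (a) is the inverse probability transform: $P(F^{-1}(U) \le x) = P(U \le F(x)) = F(x)$. A minimal simulation sketch of this fact, assuming a standard Exponential(1) target whose inverse cdf is $F^{-1}(u) = -\log(1 - u)$ (the distribution choice is illustrative, not from the problem):

```python
import math
import random

def exp_inverse_cdf(u: float) -> float:
    """Inverse cdf of the Exponential(1) distribution: F^{-1}(u) = -log(1 - u)."""
    return -math.log(1.0 - u)

# By Problem 7.10(a), F^{-1}(U) with U ~ Uniform(0, 1) has cdf F,
# so these draws should be Exponential(1).
random.seed(0)
samples = [exp_inverse_cdf(random.random()) for _ in range(100_000)]

# Sanity checks: Exponential(1) has mean 1 and P(X <= 1) = 1 - e^{-1} ~ 0.632.
print(sum(samples) / len(samples))
print(sum(1 for s in samples if s <= 1.0) / len(samples))
```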
7.9 A slightly different form of the Rao-Blackwell theorem, which applies only to the variance of an estimator rather than any convex loss, can be established without Jensen's inequality. (a) For
7.8 Prove Jensen's inequality for the case that $X$ takes on the values $x_1,\dots,x_n$ with probabilities $\gamma_1,\dots,\gamma_n$ $(\sum \gamma_i = 1)$ directly from (7.1) by induction over $n$.
7.7 Establish the following lemma, which is useful in examining the risk functions of certain estimators. (For further discussion, see Casella 1990.) Lemma 9.2: Let $r : [0, \infty) \to [0, \infty)$ be concave.
7.6 Show that if equality holds in (7.1) for some 0
7.5 Prove or disprove by counterexample each of the following statements. If $\varphi$ is convex on $(a, b)$, then so is (i) $e^{\varphi(x)}$ and (ii) $\log \varphi(x)$ if $\varphi > 0$.
7.4 If φ is convex on (a,b) and ψ is convex and nondecreasing on the range of φ, show that the function ψ[φ(x)] is convex on (a, b).
7.3 Give an example showing that a convex function need not be continuous on a closed interval.
7.2 Show that $x^p$ is concave over $(0, \infty)$ if $0 < p \le 1$.
7.1 Verify the convexity of the functions (i)-(vi) of Example 7.3.
6.38 If $X_1,\dots,X_n$ are iid as $B(a, b)$, (a) show that $[\prod X_i, \prod (1 - X_i)]$ is minimal sufficient for $(a, b)$. (b) Determine the minimal sufficient statistic when $a = b$.
6.37 Under the assumptions of Theorem 6.5, let $A$ be any fixed set in the sample space, $P_\theta^*$ the distribution $P_\theta$ truncated on $A$, and $\mathcal{P}^* = \{P_\theta^*,\ \theta \in \Omega\}$. Then prove (a) if $T$ is sufficient for
6.35 Use Basu's theorem to prove independence of the following pairs of statistics: (a) $\bar X$ and $\sum (X_i - \bar X)^2$ where the $X$'s are iid as $N(\xi, \sigma^2)$. (b) $X_{(1)}$ and $\sum [X_i - X_{(1)}]$ in Problem 6.18.
6.36 (a)
6.34 Suppose that $X_1,\dots,X_n$ are an iid sample from a location-scale family with distribution function $F((x - a)/b)$. (a) If $b$ is known, show that the differences $(X_1 - X_i)/b$, $i = 2,\dots,n$, are ancillary.
6.33 Let $X_1,\dots,X_n$ be iid, each with density $f(x)$ (with respect to Lebesgue measure), which is unknown. Show that the order statistics are complete. [Hint: Use Problem 6.32(a) with $\mathcal{P}_0$ the class of
6.32 (a) Show that if $\mathcal{P}_0, \mathcal{P}_1$ are two families of distributions such that $\mathcal{P}_0 \subset \mathcal{P}_1$ and every null set of $\mathcal{P}_0$ is also a null set of $\mathcal{P}_1$, then a sufficient statistic $T$ that is complete for $\mathcal{P}_0$ is also
6.31 For each of the following problems, determine whether the minimal sufficient statistic is complete: (a) Problem 6.7(a)-(c); (b) Problem 6.25(a)-(c); (c) Problem 6.26(a) and(b).
6.30 Show that the minimal sufficient statistics $T = (X_{(1)}, X_{(n)})$ of Problem 6.16(b) are complete. [Hint: Use the approach of Example 6.24.]