Theory of Probability: Questions and Answers
7.14 Let $X_j$ ($j = 1,\dots,n$) be independently distributed with densities $f_j(x_j \mid \theta)$ ($\theta$ real-valued), let $I_j(\theta)$ be the information $X_j$ contains about $\theta$, and let $T_n(\theta) = \sum_{j=1}^n I_j(\theta)$ be the total information.
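As a reminder for readers working this problem, here is a one-line sketch (in our own notation, not quoted from the text) of why the informations add:

```latex
% For independent X_j the joint log-likelihood is a sum, so the scores add;
% each score has mean zero and the X_j are independent, so the cross terms
% in the variance vanish:
\[
  I_{X_1,\dots,X_n}(\theta)
  = \operatorname{var}_\theta\!\Big(\sum_{j=1}^n \tfrac{\partial}{\partial\theta}\log f_j(X_j \mid \theta)\Big)
  = \sum_{j=1}^n I_j(\theta) = T_n(\theta).
\]
```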
7.13 Generalize the preceding problem to the situation in which (a) $E(X_i) = \alpha + \beta t_i$ and $\operatorname{var}(X_i) = 1$, and (b) $E(X_i) = \alpha + \beta t_i$ and $\operatorname{var}(X_i) = \sigma^2$, where $\alpha$, $\beta$, and $\sigma^2$ are unknown parameters to be estimated.
7.12 Let $X_i$ ($i = 1,\dots,n$) be independent normal with variance 1 and mean $\beta t_i$ (with $t_i$ known). Discuss the estimation of $\beta$ along the lines of Example 7.7.
7.11 Find suitable normalizing constants for $\delta_n$ of Example 7.7 when (a) $\gamma_i = i$, (b) $\gamma_i = i^2$, and (c) $\gamma_i = 1/i$.
7.10 Show that the estimator $\delta_n$ of Example 7.7 satisfies (7.14).
7.9 In Example 7.7, show that $Y_n$ is less informative than $Y$. [Hint: Let $Z_n$ be distributed as $P(\lambda \sum_{i=n+1}^{\infty} \gamma_i)$ independently of $Y_n$. Then $Y_n + Z_n$ is a sufficient statistic for $\lambda$ on the basis of $(Y_n, Z_n)$.]
7.8 (a) If the cdf $F$ is symmetric and if $\log F(x)$ is strictly concave, so is $\log[1 - F(x)]$. (b) Show that $\log F(x)$ is strictly concave when $F$ is strongly unimodal but not when $F$ is Cauchy.
7.7 In Example 7.6, suppose that $p_i = 1 - F(\alpha + \beta t_i)$ and that both $\log F(x)$ and $\log[1 - F(x)]$ are strictly concave. Then the likelihood equations have at most one solution.
7.6 Show that the likelihood equations (7.11) have at most one solution.
7.5 In the preceding problem, find the efficiency gain (if any) (a) in part (a) resulting from the knowledge that $\rho = \rho'$, and (b) in part (b) resulting from the knowledge that $\sigma = \sigma'$ and $\tau = \tau'$.
7.4 Consider samples $(X_1, Y_1),\dots,(X_m, Y_m)$ and $(X_1', Y_1'),\dots,(X_n', Y_n')$ from two bivariate normal distributions …
7.3 In Example 7.4, determine the joint distribution of (a) $(\hat\sigma^2, \hat\tau^2)$ and (b) $(\hat\sigma^2, \hat\sigma_A^2)$.
7.2 For the situation of Example 7.3 with $m = n$: (a) Show that a necessary condition for (7.5) to converge to $N(0, 1)$ is that $\sqrt{n}(\hat\lambda - \lambda) \to 0$, where $\hat\lambda = \hat\sigma^2/\hat\tau^2$ and $\lambda = \sigma^2/\tau^2$, for …
7.1 Prove Theorem 7.1.
6.17 In Example 6.10, show that the conditions of Theorem 5.1 are satisfied.
6.15 Let $X_1,\dots,X_n$ be iid with $E(X_i) = \theta$, $\operatorname{var}(X_i) = 1$, and $E(X_i - \theta)^4 = \mu_4$, and consider the unbiased estimators $\delta_{1n} = (1/n)\sum X_i^2 - 1$ and $\delta_{2n} = \bar X_n^2 - 1/n$ of $\theta^2$. (a) Determine the ARE …
6.14 Let $X_1,\dots,X_n$ be iid as $N(0, \sigma^2)$. (a) Show that $\delta_n = k\sum|X_i|/n$ is a consistent estimator of $\sigma$ if and only if $k = \sqrt{\pi/2}$. (b) Determine the ARE of $\delta_n$ with $k = \sqrt{\pi/2}$ with respect to the MLE …
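Part (a) is easy to check numerically before proving it. A minimal Python sketch; the value $\sigma = 2$, the sample size, and the seed are illustrative choices of ours:

```python
import numpy as np

# E|X_i| = sigma * sqrt(2/pi) for X_i ~ N(0, sigma^2), so multiplying the
# mean absolute value by k = sqrt(pi/2) exactly cancels that factor.
rng = np.random.default_rng(0)
sigma, n = 2.0, 100_000
x = rng.normal(0.0, sigma, size=n)
print(np.sqrt(np.pi / 2) * np.abs(x).mean())  # close to sigma = 2
```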
6.13 For the situation of Example 6.9, consider, as another family of distributions, the contaminated normal mixture family suggested by Tukey (1960) as a model for observations that usually follow a …
6.12 Show that the efficiency (6.27) tends to 0 as $|a - \theta| \to \infty$.
6.11 Let $X_1,\dots,X_n$ be iid according to the Poisson distribution $P(\lambda)$. Find the ARE of $\delta_{2n} = [\text{number of } X_i \text{ equal to } 0]/n$ relative to $\delta_{1n} = e^{-\bar X_n}$ as estimators of $e^{-\lambda}$.
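The two estimators can be compared empirically alongside the analytic calculation. A hedged sketch: $\lambda$, $n$, and the replication count are illustrative, and the reference value $\lambda/(e^\lambda - 1)$ is our own delta-method computation, not quoted from the text:

```python
import numpy as np

# Monte Carlo comparison of d1 = exp(-Xbar) and d2 = (# of X_i = 0)/n
# as estimators of exp(-lambda).
rng = np.random.default_rng(0)
lam, n, reps = 1.0, 200, 5_000
d1, d2 = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.poisson(lam, size=n)
    d1[r] = np.exp(-x.mean())
    d2[r] = np.mean(x == 0)
# The empirical variance ratio estimates the ARE of d2 to d1, which the
# delta method puts at lam / (e^lam - 1), about 0.58 for lam = 1.
print(d1.var() / d2.var(), lam / np.expm1(lam))
```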
6.10 Verify the limiting distribution asserted in (6.21).
6.9 Consider the situation leading to (6.20), where $(X_i, Y_i)$, $i = 1,\dots,n$, are iid according to a bivariate normal distribution with $E(X_i) = E(Y_i) = 0$, $\operatorname{var}(X_i) = \operatorname{var}(Y_i) = 1$, and unknown correlation …
6.8 Verify the matrices (a) (6.17) and (b) (6.18).
6.7 In Example 6.4, show that the $S_{jk}$ given by (6.15) are independent of $(\bar X_1,\dots,\bar X_p)$ and have the same joint distribution as the statistics (6.13) with $n$ replaced by $n - 1$. [Hint: Subject each of …]
6.6 In Example 6.4, verify the MLEs $\hat\xi_i$ and $\hat\sigma_{jk}$ when the $\xi$'s are unknown.
6.5 Let $X_1,\dots,X_n$ be iid from a $\Gamma(\alpha, \beta)$ distribution with density $\frac{1}{\Gamma(\alpha)\beta^{\alpha}}\, x^{\alpha-1} e^{-x/\beta}$. (a) Calculate the information matrix for the usual $(\alpha, \beta)$ parameterization. (b) Write the …
6.4 If $\theta = (\theta_1,\dots,\theta_r,\theta_{r+1},\dots,\theta_s)$ and if $\operatorname{cov}\!\big(\frac{\partial}{\partial\theta_i}L(\theta), \frac{\partial}{\partial\theta_j}L(\theta)\big) = 0$ for any $i \le r < j$, then the asymptotic distribution of $(\hat\theta_1,\dots,\hat\theta_r)$ under the assumptions of …
6.3 Verify (6.5).
6.2 In Example 6.1, verify Equation (6.4).
6.1 In Example 6.1, show that the likelihood equations are given by (6.2) and (6.3).
5.6 Show that there exists a function $f$ of two variables for which the equations $\partial f(x, y)/\partial x = 0$ and $\partial f(x, y)/\partial y = 0$ have a unique solution, and this solution is a local but not a global maximum.
5.5 Prove Corollary 5.4.
5.4 Let $(X_0,\dots,X_s)$ have the multinomial distribution $M(p_0,\dots,p_s; n)$. (a) Show that the likelihood equations have a unique root. (b) Show directly that the MLEs $\hat p_i$ are asymptotically efficient.
5.3 Let $X_1,\dots,X_n$ be iid according to $N(\xi, \sigma^2)$. (a) Show that the likelihood equations have a unique root. (b) Show directly (i.e., without recourse to Theorem 5.1) that the MLEs $\hat\xi$ and $\hat\sigma$ are asymptotically efficient.
5.2 (a) Show that (5.26) with the remainder term neglected has the same form as (5.15) and identify the $A_{jkn}$. (b) Show that the resulting $a_{jk}$ of Lemma 5.2 are the same as those of (5.23). (c) Show that …
5.1 (a) If a vector $Y_n$ in $E_s$ converges in probability to a constant vector $a$, and if $h$ is a continuous function defined over $E_s$, show that $h(Y_n) \to h(a)$ in probability. (b) Use (a) to show that the …
4.19 There is a connection between the EM algorithm and Gibbs sampling, in that both have their basis in Markov chain theory. One way of seeing this is to show that the incomplete-data likelihood is …
4.18 The EM algorithm can also be implemented in a Bayesian hierarchical model to find a posterior mode. Recall the model (4.5.5.1): $X \mid \theta \sim f(x \mid \theta)$, $\theta \mid \lambda \sim \pi(\theta \mid \lambda)$, $\lambda \sim \gamma(\lambda)$, where interest …
4.17 Verify (4.30).
4.16 Maximum likelihood estimation in the probit model of Section 3.6 can be implemented using the EM algorithm. We observe independent Bernoulli variables $X_1,\dots,X_n$, which depend on unobservable …
4.15 For the one-way layout with random effects (Example 3.5.1), the EM algorithm is useful for computing ML estimates. (In fact, it is very useful in many mixed models; see Searle et al. 1992, …)
4.14 In the two-way layout (see Example 3.4.11), the EM algorithm can be very helpful in computing ML estimators in the unbalanced case. Suppose that we observe $Y_{ijk} \sim N(\xi_{ij}, \sigma^2)$, $i = 1,\dots,I$, $j = 1,\dots,J$, …
4.13 For the situation of Example 4.10: (a) Show that the M-step of the EM algorithm is given by $\hat\mu = \big(\sum_{i=1}^{4}\sum_{j=1}^{n_i} y_{ij} + z_1 + z_2\big)/12$ and $\hat\alpha_i = \big(\sum_{j=1}^{2} y_{ij} + z_i\big)/3 - \hat\mu$ for $i = 1, 3$, $\hat\alpha_i = \sum_{j=1}^{3} y_{ij}/3 - \hat\mu$ …
4.12 For the mixture distribution of Example 4.7, that is, $X_i \sim \theta g(x) + (1 - \theta) h(x)$, $i = 1,\dots,n$, independent, where $g(\cdot)$ and $h(\cdot)$ are known, an EM algorithm can be used to find the ML estimate of $\theta$ …
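For concreteness, here is a hedged EM sketch for this setting. The densities $g = N(0,1)$ and $h = N(3,1)$, the true weight 0.3, and all sizes are illustrative stand-ins; the E- and M-steps shown are the standard ones for a known-component mixture:

```python
import numpy as np
from scipy.stats import norm

# EM for X_i ~ theta*g + (1 - theta)*h with g, h known densities.
g, h = norm(0.0, 1.0).pdf, norm(3.0, 1.0).pdf

def em_weight(x, theta=0.5, iters=200):
    for _ in range(iters):
        # E-step: posterior probability that each x_i was drawn from g
        w = theta * g(x) / (theta * g(x) + (1 - theta) * h(x))
        # M-step: the complete-data MLE of theta is the mean membership
        theta = w.mean()
    return theta

rng = np.random.default_rng(0)
z = rng.random(1_000) < 0.3  # true theta = 0.3
x = np.where(z, rng.normal(0.0, 1.0, 1_000), rng.normal(3.0, 1.0, 1_000))
print(em_weight(x))  # roughly 0.3
```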
4.11 In the EM algorithm, calculation of the E-step, the expectation calculation, can be complicated. In such cases, it may be possible to replace the E-step by a Monte Carlo evaluation, creating the MCEM (Monte Carlo EM) algorithm …
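A matching MCEM sketch, reusing the illustrative mixture above: the exact posterior weights are replaced by an average over simulated membership indicators $Z_i \sim \text{Bernoulli}(w_i)$. All names and sizes are our own choices:

```python
import numpy as np
from scipy.stats import norm

g, h = norm(0.0, 1.0).pdf, norm(3.0, 1.0).pdf

def mcem_weight(x, theta=0.5, iters=200, m=50, seed=1):
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        w = theta * g(x) / (theta * g(x) + (1 - theta) * h(x))
        # Monte Carlo E-step: simulate m membership vectors instead of
        # using the exact expectation w
        z = rng.random((m, x.size)) < w
        # M-step on the simulated complete data
        theta = z.mean()
    return theta
```

Here the exact E-step is cheap, so the Monte Carlo version is purely illustrative; the device pays off when the conditional expectation has no closed form.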
4.10 Show that if the EM complete-data density $f(y, z \mid \theta)$ of (4.21) is in a curved exponential family, then the hypotheses of Theorem 4.12 are satisfied.
4.9 Consider the following 12 observations from a bivariate normal distribution with parameters $\mu_1 = \mu_2 = 0$, $\sigma_1^2$, $\sigma_2^2$, $\rho$:

x1:  1   1  -1  -1   2   2  -2  -2   *   *   *   *
x2:  1  -1   1  -1   *   *   *   *   2   2  -2  -2

where …
4.8 Without using Theorem 4.8, in Example 4.13 show that the EM sequence converges to the MLE.
4.7 In Theorem 4.8, show that $\sigma_{11} = \sigma_{12}$.
4.6 In Example 4.7, if $\eta = \xi$, show how to obtain a $\sqrt{n}$-consistent estimator by equating sample and population second moments.
4.5 In Example 4.7, show that l(θ) is concave.
4.4 In Example 4.5, evaluate the estimators (4.8) and (4.14) for the Cauchy case, using for $\tilde\theta_n$ the sample median.
4.3 Show that the density (4.4) with $\Omega = (0, \infty)$ satisfies all conditions of Theorem 3.10.
4.2 Show that the density (4.1) with $\Omega = (0, \infty)$ satisfies all conditions of Theorem 3.10 with the exception of (d) of Theorem 2.6.
4.1 Let $u(t) = c \int_0^t e^{-1/[x(1-x)]}\, dx$ for $0 < t < 1$ …
3.29 To establish the measurability of the sequence of roots $\hat\theta_n^*$ of Theorem 3.7, we can follow the proof of Serfling (1980, Section 4.2.2), where the measurability of a similar sequence is established.
3.28 Under the assumptions of Theorem 3.7, suppose that $\hat\theta_{1n}$ and $\hat\theta_{2n}$ are two consistent sequences of roots of the likelihood equation. Prove that $P_{\theta_0}(\hat\theta_{1n} = \hat\theta_{2n}) \to 1$ as $n \to \infty$.
3.27 Let $X_1,\dots,X_n$ be iid according to $\theta g(x) + (1 - \theta) h(x)$, where $(g, h)$ is a pair of specified probability densities with respect to $\mu$, and where $0 < \theta < 1$ …
3.26 In Example 3.12, show directly that $(1/n)\sum T(X_i)$ is an asymptotically efficient estimator of $\theta = E_\eta[T(X)]$ by considering its limit distribution.
3.25 For $X_1,\dots,X_n$ iid as $DE(\theta, 1)$, show that (a) the sample median is an MLE of $\theta$, and (b) the sample median is asymptotically normal with variance $1/n$, the information inequality bound.
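Part (b) is easy to check by simulation before proving it. A minimal sketch; $n$, the replication count, and the seed are illustrative (NumPy's `laplace` is the double exponential):

```python
import numpy as np

# For X_i ~ DE(0, 1), n * var(median) should approach 1, the
# information bound cited in the problem.
rng = np.random.default_rng(0)
n, reps = 400, 10_000
med = np.median(rng.laplace(0.0, 1.0, size=(reps, n)), axis=1)
print(n * med.var())  # close to 1
```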
3.24 Check that the assumptions of Theorem 3.10 are satisfied in Example 3.12.
3.23 Let $X_1,\dots,X_n$ be iid according to $N(\theta, a\theta^2)$, $\theta > 0$, where $a$ is a known positive constant. (a) Find an explicit expression for an ELE of $\theta$. (b) Determine whether there exists an MRE estimator …
3.22 Under the assumptions of Theorem 3.2, show that $\big[L(\theta_0 + 1/\sqrt{n}) - L(\theta_0) + \tfrac{1}{2} I(\theta_0)\big] \big/ \sqrt{I(\theta_0)}$ tends in law to $N(0, 1)$.
3.21 Let $X_1,\dots,X_n$ be iid according to a Weibull distribution with density $f_\theta(x) = \theta x^{\theta - 1} e^{-x^\theta}$, $x > 0$, $\theta > 0$, which is not a member of the exponential, location, or scale family.
3.20 If $X_1,\dots,X_n$ are iid according to the gamma distribution $\Gamma(\theta, 1)$, the likelihood equation has a unique root. [Hint: Use Example 3.12. Alternatively, write down the likelihood and use the fact …]
3.19 If $X_1,\dots,X_n$ are iid as $C(\theta, 1)$, then for any fixed $n$ there is positive probability (a) that the likelihood equation has $2n - 1$ roots and (b) that the likelihood equation has a unique root.
3.18 In Problem 3.15(b), with $f$ the Cauchy density $C(0, a)$, the likelihood equation has a unique root $\hat a$ and $\sqrt{n}(\hat a - a) \xrightarrow{L} N(0, 2a^2)$.
3.17 If $X_1,\dots,X_n$ are iid with density $f(x_i - \theta)$ or $a f(a x_i)$ and $f$ is the logistic density $L(0, 1)$, the likelihood equation has unique solutions $\hat\theta$ and $\hat a$ in both the location and the scale cases.
3.16 For each of the following densities $f(\cdot)$, determine if (a) it is strongly unimodal and (b) $x f'(x)/f(x)$ is strictly decreasing for $x > 0$. Hence, comment on whether the respective location …
3.15 (a) A density function is strongly unimodal, or equivalently log-concave, if $\log f(x)$ is a concave function. Show that such a density function has a unique mode. (b) Let $X_1,\dots,X_n$ be iid with …
3.14 Let X have the negative binomial distribution (2.3.3). Find an ELE of p.
3.13 Consider a sample $X_1,\dots,X_n$ from a Poisson distribution conditioned to be positive, so that $P(X_i = x) = \theta^x e^{-\theta}/[x!\,(1 - e^{-\theta})]$ for $x = 1, 2, \dots$. Show that the likelihood equation has a unique root …
3.12 Let $X$ be distributed as $N(\theta, 1)$. Show that conditionally given a …
3.11 Verify the nature of the roots in Example 3.9.
3.10 In Example 3.6 with 0 …
3.9 Prove (3.9).
3.8 Prove the existence of unique $0 < a_k < a_{k-1}$, $k = 1, 2, \dots$, satisfying (3.4).
3.7 Show that Theorem 3.2 remains valid if assumption (A1) is relaxed to (A1′): There is a nonempty subset $\omega_0 \subset \Omega$ such that $\theta_0 \in \omega_0$ and $\omega_0$ is contained in the support of each $P_\theta$.
3.6 When $\Omega$ is finite, show that the MLE is consistent if and only if it satisfies (3.2).
3.5 Let $X$ take on the values 0 and 1 with probabilities $p$ and $q$, respectively. When it is known that $1/3 \le p \le 2/3$, (a) find the MLE and (b) show that the expected squared error of the MLE is …
3.4 Suppose $X_1,\dots,X_n$ are iid as $N(\xi, 1)$ with $\xi > 0$. Show that the MLE is $\bar X$ when $\bar X > 0$ and does not exist when $\bar X \le 0$.
3.3 Let $X_1,\dots,X_n$ be iid according to $N(\xi, \sigma^2)$. Determine the MLE of (a) $\xi$ when $\sigma$ is known, (b) $\sigma$ when $\xi$ is known, and (c) $(\xi, \sigma)$ when both are unknown.
3.2 In the preceding problem, show that the MLE does not exist when $p$ is restricted to $0 < p < 1$ …
3.1 Let $X$ have the binomial distribution $b(p, n)$, $0 \le p \le 1$. Determine the MLE of $p$: (a) by the usual calculus method of determining the maximum of a function; (b) by showing that $p^x q^{n-x} \le (x/n)^x \big((n-x)/n\big)^{n-x}$ …
2.14 In Example 2.7, show that if $\theta_n = c/\sqrt{n}$, then $R_n(\theta_n) \to a^2 + c^2(1 - a)^2$.
2.13 Let $b_n(\theta) = E_\theta(\delta_n) - \theta$ be the bias of the estimator $\delta_n$ of Example 2.5. (a) Show that $b_n(\theta) = -\frac{1-a}{\sqrt{n}} \int_{-\sqrt[4]{n}}^{\sqrt[4]{n}} x\,\phi(x - \sqrt{n}\,\theta)\, dx$; (b) show that $b_n(\theta) \to 0$ for any $\theta$ …
2.12 In Example 2.7 with $R_n(\theta)$ given by (2.11), show that $R_n(\theta) \to 1$ for $\theta \ne 0$ and that $R_n(0) \to a^2$.
2.11 Construct a sequence {δn} satisfying (2.2) but for which the bias bn(θ) does not tend to zero.
2.10 In the preceding problem, construct $\delta_n'$ such that $w(\theta) = v(\theta)$ for all $\theta \ne \theta_0$ and $\theta_1$, and $w(\theta) < v(\theta)$ for $\theta = \theta_0$ and $\theta_1$.
2.9 Let $\delta_n$ be any estimator satisfying (2.2) with $g(\theta) = \theta$. Construct a sequence $\delta_n'$ such that $\sqrt{n}(\delta_n' - \theta) \xrightarrow{L} N[0, w^2(\theta)]$ with $w(\theta) = v(\theta)$ for $\theta \ne \theta_0$ and $w(\theta_0) = 0$.
2.8 Verify the asymptotic distribution claimed for δn in Example 2.5.
2.7 For the situation of Problem 2.6: (a) Calculate the mean squared errors of both $\delta_n$ and $X_{(n)}$ as estimators of $\theta$. (b) Show that $\lim_{n\to\infty} E(X_{(n)} - \theta)^2 \big/ E(\delta_n - \theta)^2 = 2$.
2.6 Let $X_1,\dots,X_n$ be iid as $U(0, \theta)$. From Example 2.1.14, $\delta_n = (n+1)X_{(n)}/n$ is the UMVU estimator of $\theta$, whereas the MLE is $X_{(n)}$. Determine the limit distribution of (a) $n[\theta - \delta_n]$ and (b) $n[\theta - X_{(n)}]$.
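A quick simulation hint for this problem (not the worked answer); $\theta$, $n$, the replication count, and the seed are illustrative:

```python
import numpy as np

# Empirical behavior of n * (theta - X_(n)) for U(0, theta) samples.
# Its mean and variance come out near theta and theta^2, suggesting an
# exponential-type limit; deriving this is the point of the exercise.
rng = np.random.default_rng(0)
theta, n, reps = 1.0, 1_000, 20_000
xmax = theta * rng.random((reps, n)).max(axis=1)
err = n * (theta - xmax)
print(err.mean(), err.var())
```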
2.5 If $X_1,\dots,X_n$ are iid $N(\mu, \sigma^2)$, show that $S^r = \big[\frac{1}{n-1}\sum (X_i - \bar X)^2\big]^{r/2}$ is an asymptotically unbiased estimator of $\sigma^r$.
2.4 If $X_1,\dots,X_n$ are a sample from a one-parameter exponential family (1.5.2), then $\sum T(X_i)$ is minimal sufficient and $E[(1/n)\sum T(X_i)] = (\partial/\partial\eta) A(\eta) = \tau$. Show that for any function $g(\cdot)$ for …
2.3 Assume that the distribution of $Y_n = \sqrt{n}(\delta_n - g(\theta))$ converges to a distribution with mean 0 and variance $v(\theta)$. Use Fatou's lemma (Lemma 1.2.6) to establish that $\operatorname{var}_\theta(\delta_n) \to 0$ for all …
2.2 If $k_n[\delta_n - g(\theta)] \xrightarrow{L} H$ for some sequence $k_n$, show that the same result holds if $k_n$ is replaced by $k_n'$, where $k_n'/k_n \to 1$.
2.1 Let $X_1,\dots,X_n$ be iid as $N(0, 1)$. Consider the two estimators $T_n = \bar X_n$ if $S_n \le a_n$ and $T_n = n$ if $S_n > a_n$, where $S_n = \sum (X_i - \bar X)^2$, $P(S_n > a_n) = 1/n$, and $T_n' = (X_1 + \cdots + X_{k_n})/k_n$ with $k_n$ …
1.39 Let $b_{m,n}$, $m, n = 1, 2, \dots$, be a double sequence of real numbers which, for each fixed $m$, is nondecreasing in $n$. Show that $\lim_{n\to\infty} \lim_{m\to\infty} b_{m,n} = \liminf_{m,n\to\infty} b_{m,n}$ and $\lim_{m\to\infty} \dots$ …
1.38 (a) In Problem 1.37, determine to what values $\operatorname{var}(Y_n)$ …
1.37 Let $Y_n$ be distributed as $N(0, 1)$ with probability $\pi_n$ and as $N(0, \tau_n^2)$ with probability $1 - \pi_n$. If $\tau_n \to \infty$ and $\pi_n \to \pi$, determine for what values of $\pi$ the sequence $\{Y_n\}$ does and does not have a limit distribution …