Questions and Answers: Theory of Probability
6.29 If a minimal sufficient statistic exists, a necessary condition for a sufficient statistic to be complete is for it to be minimal. [Hint: Suppose that T = h(U) is minimal sufficient and U is
6.28 If Y is distributed as E(η, 1), the distribution of X = e^(−Y) is U(0, e^(−η)). (This result is useful in the computer generation of random variables; see Problem 4.4.14.)
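The fact in Problem 6.28 can be checked by simulation: if Y = η + E with E standard exponential, then e^(−Y) = e^(−η)·e^(−E), and e^(−E) is uniform on (0, 1). A minimal sketch using NumPy (the value of η, the sample size, and the seed are arbitrary choices, not part of the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.5
n = 100_000

# Y ~ E(eta, 1): location eta plus a standard exponential
y = eta + rng.exponential(scale=1.0, size=n)
x = np.exp(-y)

# X should be uniform on (0, e^{-eta}): check support and sample mean
upper = np.exp(-eta)
assert x.min() > 0.0 and x.max() < upper
assert abs(x.mean() - upper / 2) < 0.01  # mean of U(0, b) is b/2
```

Since e^(−E) is exactly the probability-integral transform of a standard exponential, this is the inverse-transform idea the problem's parenthetical remark alludes to.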
6.27 Formulate a general result of which Problems 6.25(a) and 6.26(a) are special cases.
6.26 Solve the preceding problem if(a) R is the set of all triangles with sides parallel to the x axis, the y axis, and the line y = x, respectively.(b) R is the subset of R in which the sides
6.25 Let (Xi, Yi), i = 1,...,n, be iid according to the uniform distribution over a set R in the (x, y) plane and let P be the family of distributions obtained by letting R range over a class R of
6.24 (Messig and Strawderman 1993) Show that for the general dose-response model pθ(x) = ∏_{i=1}^{m} (n_i choose x_i) [ηθ(d_i)]^{x_i} [1 − ηθ(d_i)]^{n_i − x_i}, the statistic X = (X1, X2, ..., Xm) is minimal
6.23 For the situation of Example 6.27:(a) Show that X = (X1, X2) is minimal sufficient for the family (6.16) with restriction(6.17).(b) Establish (6.18), and hence that the minimal sufficient
6.22 For the situation of Example 6.26, show that X is minimal sufficient and complete.
6.21 For the situation of Example 6.25(ii), find an unbiased estimator of ξ based on ΣXi, and another based on ΣXi^2; hence, deduce that T = (ΣXi, ΣXi^2) is not complete.
6.20 (a) Show that in the N(θ, θ) curved exponential family, the sufficient statistic T = (Σxi, Σxi^2) is not minimal. (b) For the density of Example 6.19, show that T = (Σxi, Σxi^2, Σxi^3) is a
6.19 Show that the sufficient statistics of (i) Problem 6.3 and (ii) Problem 6.4 are minimal sufficient.
6.18 Show that the statistics X(1) and Σ[Xi − X(1)] of Problem 6.17(c) are independently distributed as E(a, b/n) and b·Gamma(n − 1, 1), respectively. [Hint: If a = 0 and b = 1, the variables Yi =
6.17 Solve the preceding problem for the following cases:(a) P = {E(θ , 1), −∞
6.16 Let X1,...,Xn be iid according to a distribution from a family P. Show that T is minimal sufficient in the following cases:(a) P = {U(0, θ),θ > 0}; T = X(n).(b) P = {U(θ1, θ2), −∞ < θ1
6.15 Let X1,...,Xn be iid according to a distribution from P = {U(0, θ),θ > 0}, and let P0 be the subfamily of P for which θ is rational. Show that every P0-null set in the sample space is also a
6.14 In Lemma 6.14, show that the assumption of common support can be replaced by the weaker assumption that every P0-null set is also a P-null set so that (a.e. P0) is equivalent to (a.e. P).
6.13 Let k = 1 and Pi = U(i, i + 1), i = 0, 1.(a) Show that a minimal sufficient statistic forP = {P0, P1}is T (X) = i ifi
6.12 In Problem 6.11 it is not enough to replace pi(X) by p0(X). To see this let k = 2 and p0 = U(−1, 0), p1 = U(0, 1), and p2(x)=2x, 0
6.11 Prove the following generalization of Theorem 6.12 to families without common support.Theorem 9.1 Let P be a finite family with densities pi, i = 0,...,k, and for any x, let S(x) be the set of
6.10 Show that the order statistics are minimal sufficient for the location family (6.7) when f is the density of(a) the double exponential distribution D(0, 1).(b) the Cauchy distribution C(0, 1).
6.9 (a) If (x1, ..., xn) and (y1, ..., yn) have the same elementary symmetric functions, Σxi = Σyi, Σ_{i≠j} xixj = Σ_{i≠j} yiyj, ..., x1···xn = y1···yn, then the y's are a permutation of the x's.
6.8 Let X1,...,Xn be iid according to N(σ, σ2), 0 < σ. Find a minimal set of sufficient statistics.
6.7 Let X1,...,Xm and Y1,...,Yn be independently distributed according to N(ξ,σ2)and N(η, τ 2), respectively. Find the minimal sufficient statistics for these cases:(a) ξ , η, σ, τ are
6.6 Prove Corollary 6.13.
6.5 Show that each of the statistics T1 − T4 of Example 6.11 is sufficient.
6.4 Let f be a positive integrable function defined over (−∞,∞) and let pξ ,η(x) be the probability density defined by pξ ,η(x) = c(ξ,η)f (x) if ξ
6.3 Let f be a positive integrable function over (0,∞), and let pθ (x) be the density over(0, θ) defined by pθ (x) = c(θ)f (x) if 0
6.2 Let X1,...,Xn be iid according to a distribution F and probability density f . Show that the conditional distribution given X(i) = a of the i −1 values to the left of a and the n−i values to
6.1 Extend Example 6.2 to the case that X1,...,Xr are independently distributed with Poisson distributions P(λi) where λi = aiλ (ai > 0, known).
5.33 Morris (1982, 1983b) investigated the properties of natural exponential families with quadratic variance functions. There are only six such families: normal, binomial, gamma, Poisson, negative
5.32 If Y is distributed as H(α, b), determine the distribution of c log Y and show that for fixed α and varying b it defines an exponential family.
5.31 Let X1,...,Xn be independently distributed as H(α, b). Show that the joint distribution is a two-parameter exponential family and identify the functions ηi, Ti, and B of(5.1).
5.30 When the Xi are independently distributed according to Poisson distributions P(λi), find the distribution of ΣXi.
5.29 If Xi are independently distributed according to H(αi, b), show that ΣXi is distributed as H(Σαi, b). [Hint: Method 1. Prove it first for the sum of two gamma variables by a transformation to
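Reading H(α, b) as the gamma distribution with shape α and common scale b (the book's notation; this reading is an assumption of the sketch), the additivity claimed in Problem 5.29 can be spot-checked by matching the first two moments of the simulated sum against Gamma(Σαi, b). The shape values, scale, sample size, and seed below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
alphas = [0.5, 1.5, 2.0]
b = 2.0
n = 200_000

# Sum of independent Gamma(alpha_i, b) samples, common scale b
total = sum(rng.gamma(shape=a, scale=b, size=n) for a in alphas)

# Gamma(sum(alphas), b) has mean b*sum(alphas) and variance b^2*sum(alphas)
a_sum = sum(alphas)
assert abs(total.mean() - b * a_sum) < 0.05
assert abs(total.var() - b * b * a_sum) < 0.5
```

Matching moments does not prove the distributional identity, of course; the problem's hint (convolution of two gamma densities, then induction) supplies the actual proof.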
5.28 (a) Let X be distributed with density pθ (x) given by (5.1), and let A be any fixed subset of the sample space. Then, the distributions of X truncated on A, that is, the distributions with
5.27 (a) If X is a random column vector with expectation ξ, then the covariance matrix of X is cov(X) = E[(X − ξ)(X − ξ)′]. (b) If the density of X is (4.15), then ξ = a and cov(X) = Σ.
5.26 If (X, Y ) is distributed according to the bivariate normal distribution (4.16) withξ = η = 0:(a) Show that the moment generating function of (X, Y ) is MX,Y (u1, u2) = e−[u2
5.25 A random variable X has the Pareto distribution P(c, k) if its cdf is 1 − (k/x)c, x>k> 0, c > 0.(a) The distributions P(c, 1) constitute a one-parameter exponential family (5.2) withη = −c
5.24 Determine the values α for which the density (5.41) is (a) a decreasing function of x on (0, ∞) and (b) increasing for x < x0 and decreasing for x > x0 (0 < x0). In case (b), determine the mode of the density.
5.23 In Example 5.14, show that (a) χ^2_1 is the distribution of Y^2 where Y is distributed as N(0, 1); (b) χ^2_n is the distribution of Y_1^2 + ··· + Y_n^2 where the Yi are independent N(0, 1).
5.22 The inverse Gaussian distribution, IG(λ, µ), has density function √(λ/(2π)) · e^((λµ)^(1/2)) · x^(−3/2) · e^(−(1/2)(λ/x + µx)), x > 0, λ, µ > 0. (a) Show that this density constitutes an exponential
5.21 The Stein Identity can also be applied to discrete exponential families, as shown by Hudson (1978) and generalized by Hwang (1982a). If X takes values in N = {0, 1, ...} with probability function
5.20 As an alternative to the approach of Problem 5.19(b) for calculating the moments of X ∼ B(a, b), a general formula for EXk (similar to equation (5.43)) can be derived.Do so, and use it to
5.19 Using Lemma 5.15:(a) Derive the form of the identity for X ∼ Gamma(α,b) and use it to verify the moments given in (5.44).(b) Derive the form of the identity for X ∼ Beta(a, b), and use it
5.18 (a) Prove Lemma 5.15. (Use integration by parts.)(b) By choosing g(x) to be x2 and x3, use the Stein Identity to calculate the third and fourth moments of the N(µ, σ2) distribution.
5.17 For the gamma distribution (5.41).(a) verify the formulas (5.42), (5.43), and (5.44);(b) show that (5.43), with the middle term deleted, holds not only for all positive integers r but for all
5.16 As an alternative to using (5.14) and (5.15), obtain the moments (5.16) by representing each Xi as a sum of n indicators, as was done in (5.5):
5.15 For the multinomial distribution (5.4), verify the moment formulas (5.16).
5.14 The distribution of Problem 5.12 with a(x)=1/x and C(θ) = − log(1 − θ), x =1, 2,... ; 0
5.13 Show that the binomial, negative binomial, and Poisson distributions are special cases of the power series distribution of Problem 5.12, and determine θ and C(θ).
5.12 A discrete random variable with probabilities P(X = x) = a(x)θ x /C(θ), x = 0, 1,... ; a(x) ≥ 0; θ > 0, is a power series distribution. This is an exponential family (5.1) with s = 1, η =
5.11 In the preceding problem, let Xi + 1 be the number of trials required after the (i − 1)st success has been obtained until the next success occurs. Use the fact that X = Σ_{i=1}^{m} Xi to find an
5.10 In a Bernoulli sequence of trials with success probability p, let X + m be the number of trials required to achieve m successes.(a) Show that the distribution of X, the negative binomial
5.9 For the Poisson distribution (5.32), verify the moments (5.35).
5.8 For the binomial distribution (5.28), verify (a) the moment generating function (5.30)and (b) the moments (5.31).
5.7 Verify the relations (a) (5.22) and (b) (5.26).
5.6 In the density (5.1): (a) For s = 1, show that Eθ[T(X)] = B′(θ)/η′(θ) and varθ[T(X)] = B″(θ)/[η′(θ)]^2 − η″(θ)B′(θ)/[η′(θ)]^3. (b) For s > 1, show that Eθ[T(X)] = J^(−1)∇B
5.5 Let (X1, X2) have a bivariate normal distribution with mean vector ξ = (ξ1, ξ2) and identity covariance matrix. In each of the following situations, verify the curvature γθ of the
5.4 Efron (1975) gives very general definitions of curvature, which generalize (10.1) and (10.2). For the s-dimensional family (5.1) with covariance matrix Σθ, if θ is a scalar, define the
5.3 Show that the distribution of a sample from the p-variate normal density (4.15) constitutes an s-dimensional exponential family. Determine s and identify the functions ηi, Ti, and B of (5.1).
5.2 Suppose in (5.2), s = 2 and T2(x) = T1(x). Explain why it is impossible to estimate η1. [Hint: Compare the model with that obtained by putting η1′ = η1 + c, η2′ = η2 − c.]
5.1 Determine the natural parameter space of (5.2) when s = 1, T1(x) = x, µ is Lebesgue measure, and h(x) is (i) e^(−|x|) and (ii) e^(−|x|)/(1 + x^2).
4.16 Let X1, ..., Xr have a multivariate normal distribution with E(Xi) = ξi and with covariance matrix Σ. If X is the column matrix with elements Xi and B is an r × r matrix of constants, then BX
4.15 The following two families of distributions are not group families:(a) The class of binomial distributions b(p, n), with n fixed and 0
4.14 If F and F0 are two continuous, strictly increasing cdf’s on the real line, and if the cdf of U is F0 and g is strictly increasing, show that the cdf of g(U) is F if and only if g = F −1(F0).
4.13 Let U be a positive random variable, and let X = bU1/c, b> 0, c> 0.(a) Show that this defines a group family.(b) If U is distributed as E(0, 1), then X is distributed according to the Weibull
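The construction in Problem 4.13(b) is the standard way to generate Weibull variates: if U ~ E(0, 1), then X = bU^(1/c) satisfies P(X ≤ x) = P(U ≤ (x/b)^c) = 1 − e^(−(x/b)^c), the Weibull cdf with scale b and shape c. A simulation sketch (NumPy; the values of b, c, the sample size, and the seed are arbitrary):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
b, c = 2.0, 1.5
n = 200_000

u = rng.exponential(scale=1.0, size=n)  # U ~ E(0, 1)
x = b * u ** (1.0 / c)                  # candidate Weibull(shape c, scale b)

# The Weibull(shape c, scale b) distribution has mean b * Gamma(1 + 1/c)
mean_theory = b * math.gamma(1 + 1 / c)
assert abs(x.mean() - mean_theory) < 0.02
```

The moment check is only a sanity test; the cdf computation above is the actual argument.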
4.12 Generalize the transformation group of Example 4.10 to the case of s populations{yij , j = 1,...,Ni}, i = 1,...,s, with a random sample of size ni being drawn from the ith population.
4.11 Find a modification of the transformation group (4.22) which generates a random sample from a population {y1,...,yN } where the y’s, instead of being arbitrary, are restricted to (a) be
4.10 Show that the family of all continuous distributions whose support is an interval with positive lower end point is a group family. [Hint: Let U be uniformly distributed on the interval (2, 3)
4.9 In the preceding problem, show that G can be replaced by the subgroup G0 of lower triangular matricesB = (bij ), in which the diagonal elements b11,...,bpp are all positive, but that no proper
4.8 Show that the totality of nonsingular multivariate normal distributions can be obtained by the subgroup G of (4.12) described in Problem 4.7.
4.7 Show that the family of transformations (4.12) with B nonsingular and lower triangular form a group G.
4.6 Show that for p = 2, the density (4.15) specializes to (4.16).
4.5 If g0 is any element of a group G, show that as g ranges over G so does gg0.
4.4 Show that a transformation group is a group.
4.3 Let U be uniformly distributed on (0, 1) and consider the variables X = Uα, 0 < α.Show that this defines a group family, and determine the density of X.
4.2 If X is distributed according to the uniform distribution U(0, θ), show that the distribution of − log X is exponential.
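The claim in Problem 4.2 follows from writing X = θU with U ~ U(0, 1), so that −log X = −log θ − log U, where −log U is standard exponential; thus −log X ~ E(−log θ, 1). A simulation sketch (NumPy; θ, the sample size, and the seed are arbitrary choices):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
theta = 3.0
n = 100_000

x = rng.uniform(0.0, theta, size=n)
w = -np.log(x)  # claim: W ~ E(-log(theta), 1)

# Removing the location -log(theta) should leave a standard exponential
shifted = w + math.log(theta)
assert shifted.min() > 0.0
assert abs(shifted.mean() - 1.0) < 0.02
```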
4.1 If the distributions of a positive random variable X form a scale family, show that the distributions of log X form a location family.
3.8 Suppose X and Y are independent random variables with X ∼ E(λ, 1) and Y ∼E(µ, 1). It is impossible to obtain direct observations of X and Y . Instead, we observe the random variables Z and
3.7 Let P and Q assign probabilities P: P(X = 1/n) = pn > 0, n = 1, 2, ... (Σpn = 1); Q: P(X = 0) = 1/2, P(X = 1/n) = qn > 0, n = 1, 2, ..., Σqn = 1/2. Then, show that P and Q have the same support but
3.6 If P and Q are two probability measures over the same Euclidean space which are equivalent, then they have the same support.
3.5 Let S be the support of a distribution on a Euclidean space (X , A). Then, (i) S is closed;(ii) P(S) = 1; (iii) S is the intersection of all closed sets C with P(C) = 1.
3.4 In Example 3.1, show that the support of P is [a, b] if and only if F is strictly increasing on [a, b].
3.3 Let X be a measurable transformation from (E, B) to (X , A) (i.e., such that for any A ∈ A, the set {e : X(e) ∈ A} is in B), and let Y be a measurable transformation from(X , A) to (Y, C).
3.2 Show that any function f which satisfies (3.7) is continuous.
3.1 Let X have a standard normal distribution and let Y = 2X. Determine whether(a) the cdf F(x, y) of (X, Y ) is continuous.(b) the distribution of (X, Y ) is absolutely continuous with respect to
2.11 Let f (x) = 1 or 0 as x is rational or irrational. Show that the Riemann integral of f does not exist.
2.10 Let X = {x1, x2, ...}, µ = counting measure on X, and f integrable. Then ∫f dµ = Σf(xi). [Hint: Suppose, first, that f ≥ 0 and let sn(x) be the simple function, which is f(x) for x =
2.9 If f is integrable with respect to µ, so is |f|, and |∫f dµ| ≤ ∫|f| dµ. [Hint: Express |f| in terms of f+ and f−.]
2.8 If f and g are measurable functions, so are (i) f + g, and (ii) max(f, g).
2.7 Let (X , A, µ) be a measure space and let B be the class of all sets A ∪ C with A ∈ A and C a subset of a set A ∈ A with µ(A) = 0. Show that B is a σ-field.
2.6 Under the assumptions of Problems 2.1 and 2.3, show that I_A̲(x) = lim inf I_Ak(x) and I_A̅(x) = lim sup I_Ak(x), where I_A(x) denotes the indicator of the set A.
2.5 For any sequence of real numbers a1, a2,..., show that the set of all limit points of subsequences is closed. The smallest and largest such limit point (which may be infinite)are denoted by lim
2.4 Show that (a) If A1 ⊂ A2 ⊂ ···, then A̲ = A̅ = ∪An. (b) If A1 ⊃ A2 ⊃ ···, then A̲ = A̅ = ∩An.
2.3 Under the assumptions of Problem 2.1, let A̲ = lim inf An = {x : x ∈ An for all except a finite number of n's}, A̅ = lim sup An = {x : x ∈ An for infinitely many n}. Then, A̲ and A̅ are in A.
2.2 For any a
2.1 If A1, A2,... are members of a σ-field A (the A’s need not be disjoint), so are their union and intersection.
1.13 (a) If X is binomial b(p, n), show that E|X/n − p| = 2(n−1 choose k−1) p^k (1 − p)^(n−k+1) for (k − 1)/n ≤ p ≤ k/n. (b) Graph the risk function of part (a) for n = 4 and n = 5. [Hint: For
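The closed form claimed in Problem 1.13(a), E|X/n − p| = 2·C(n−1, k−1)·p^k·(1−p)^(n−k+1) on (k−1)/n ≤ p ≤ k/n, can be verified numerically by direct summation over the binomial pmf. A sketch using only the Python standard library (the specific n, k, p are arbitrary choices satisfying the constraint):

```python
from math import comb

def abs_risk(p, n):
    """E|X/n - p| for X ~ b(p, n), computed by direct summation."""
    return sum(
        comb(n, x) * p**x * (1 - p) ** (n - x) * abs(x / n - p)
        for x in range(n + 1)
    )

# Claimed closed form for (k-1)/n <= p <= k/n:
#   2 * comb(n-1, k-1) * p**k * (1-p)**(n-k+1)
n, k, p = 4, 2, 0.3  # 1/4 <= 0.3 <= 2/4, so k = 2 applies
closed = 2 * comb(n - 1, k - 1) * p**k * (1 - p) ** (n - k + 1)
assert abs(abs_risk(p, n) - closed) < 1e-12
```

Evaluating `abs_risk` on a grid of p values is also a convenient way to produce the graph asked for in part (b).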
1.12 (a) Let f(x) = (1/2)(k − 1)/(1 + |x|)^k, k ≥ 2. Show that f is a probability density and that all its moments of order < k − 1 are finite. (b) The density of part (a) satisfies the
1.11 (a) If two estimators δ1, δ2 have continuous symmetric densities fi(x − θ), i = 1, 2, and f1(0) > f2(0), then P[|δ1 − θ| < c] > P[|δ2 − θ| < c] for some c > 0 and hence δ1 will be