Questions and Answers: Theory of Probability
19. Let $X_1$ and $X_2$ be independent random variables with common exponential density $\lambda e^{-\lambda x}$ on $(0, \infty)$. Show that the random variables $Y_1 = X_1 + X_2$ and $Y_2 = X_1/X_2$ are independent, and find their …
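For Problem 19 above, a quick Monte Carlo check can make the claimed independence plausible before one attempts the change-of-variables proof. This sketch is an added illustration, not part of the problem; the rate $\lambda = 2$ and the test point $(a, b) = (1, 1)$ are arbitrary choices. If $Y_1$ and $Y_2$ are independent, the joint CDF factors at every point:

```python
# Monte Carlo plausibility check (a sketch, not a proof): if Y1 and Y2 are
# independent, Pr(Y1 <= a, Y2 <= b) should equal Pr(Y1 <= a) * Pr(Y2 <= b).
import random

random.seed(0)
lam, n = 2.0, 200_000          # arbitrary rate and sample size
y1, y2 = [], []
for _ in range(n):
    x1, x2 = random.expovariate(lam), random.expovariate(lam)
    y1.append(x1 + x2)
    y2.append(x1 / x2)

a, b = 1.0, 1.0                # arbitrary test point
p_joint = sum(u <= a and v <= b for u, v in zip(y1, y2)) / n
p1 = sum(u <= a for u in y1) / n
p2 = sum(v <= b for v in y2) / n
print(p_joint, p1 * p2)        # should agree up to Monte Carlo error
```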
18. Suppose $X$ and $Y$ are independent random variables concentrated on the interval $(0, \infty)$. If $E(X) < \infty$ and $Y$ has density $g(y)$, then show that the ratio $X/Y$ has finite expectation if and only if …
17. Validate the formulas for the distribution and density functions of the product $XY$ and the ratio $X/Y$ of independent random variables $X$ and $Y > 0$ given in Section 1.6. (Hint: Mimic the arguments …
16. Suppose $Y$ has exponential density $e^{-y}$ with unit mean. Given $Y$, let a point $X$ be chosen uniformly from the interval $[0, Y]$. Show that $X$ has density $E_1(x) = \int_x^{\infty} e^{-y} y^{-1}\,dy$ and …
15. Let $S_n = X_1 + \cdots + X_n$ be the sum of $n$ independent random variables, each distributed uniformly over the set $\{1, 2, \dots, m\}$. For example, imagine tossing an $m$-sided die $n$ times and recording the …
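For Problem 15 above, the exact distribution of $S_n$ can be tabulated by repeated convolution of the uniform mass function, a useful companion to whatever closed form the problem develops. This sketch is an added illustration with arbitrary parameters:

```python
# Exact pmf of S_n = X_1 + ... + X_n for n rolls of a fair m-sided die,
# computed by convolving the uniform mass on {1, ..., m} with itself n times.
def dice_sum_pmf(n: int, m: int) -> dict:
    pmf = {0: 1.0}                     # distribution of the empty sum
    for _ in range(n):
        nxt = {}
        for s, p in pmf.items():
            for face in range(1, m + 1):
                nxt[s + face] = nxt.get(s + face, 0.0) + p / m
        pmf = nxt
    return pmf

pmf = dice_sum_pmf(n=3, m=6)
print(pmf[10], pmf[11])   # modal values for three six-sided dice: 27/216 = 0.125 each
```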
14. Suppose $X$ and $Y$ are independent random variables with finite variances. Define $Z$ to be either $X$ or $Y$ depending on the outcome of a coin toss. In other words, set $Z = X$ with probability $p$ and $Z$ …
13. If $X$ and $Y$ are independent random variables with finite variances, then show that $\operatorname{Var}(XY) = \operatorname{Var}(X)\operatorname{Var}(Y) + E(X)^2 \operatorname{Var}(Y) + E(Y)^2 \operatorname{Var}(X)$.
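Problem 13's identity is easy to spot-check numerically before proving it in general. The distributions below, $X \sim N(2, 1.5^2)$ and $Y$ uniform on $(0, 3)$, are arbitrary choices for this added sketch:

```python
# Numerical spot check of Var(XY) = Var(X)Var(Y) + E(X)^2 Var(Y) + E(Y)^2 Var(X)
# for one concrete pair of independent distributions (arbitrary choices).
import random

random.seed(1)
n = 200_000
xs = [random.gauss(2.0, 1.5) for _ in range(n)]
ys = [random.uniform(0.0, 3.0) for _ in range(n)]

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

lhs = var([x * y for x, y in zip(xs, ys)])
rhs = var(xs) * var(ys) + mean(xs) ** 2 * var(ys) + mean(ys) ** 2 * var(xs)
print(lhs, rhs)   # should agree up to Monte Carlo error
```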
12. Prove the two conditioning formulas in equation (1.11) for calculating variances and covariances.
11. Suppose $X$ has a continuous, strictly increasing distribution function $F(x)$ and $Y = -X$ has distribution function $G(y)$. Show that $X$ is symmetrically distributed around some point $\mu$ if and only …
10. Let the random variable $X$ have symmetric density $f(x) = f(-x)$. Prove that the corresponding distribution function $F(x)$ satisfies the identity $\int_{-a}^{a} F(x)\,dx = a$ for all $a \ge 0$ [183].
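Problem 10's identity can be verified numerically for a particular symmetric $F$ before attempting the proof; the standard normal CDF and the midpoint rule below are arbitrary choices for this added sketch:

```python
# Numerical check of the identity: the integral of F over [-a, a] equals a
# when F is the CDF of a density symmetric about 0 (here, standard normal).
from statistics import NormalDist

F = NormalDist(mu=0.0, sigma=1.0).cdf
for a in (0.5, 1.0, 2.5):
    k = 100_000                 # midpoint-rule panels
    h = 2 * a / k
    integral = sum(F(-a + (i + 0.5) * h) for i in range(k)) * h
    print(a, integral)          # the integral should reproduce a
```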
9. Let the random variable $X$ have distribution function $F(x)$. Demonstrate that $E\{h[F(X)]\} = \int_0^1 h(u)\,du$ for any integrable function $h(u)$ on $[0, 1]$.
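Problem 9 is the probability integral transform in disguise: for continuous $F$, the variable $F(X)$ is uniform on $[0, 1]$. A Monte Carlo sketch with the arbitrary choices $X \sim$ exponential(1) and $h(u) = u^2$:

```python
# Monte Carlo check of E{h[F(X)]} = integral of h over [0, 1], using
# X ~ exponential(1) with F(x) = 1 - exp(-x) and h(u) = u^2 (arbitrary choices).
import math
import random

random.seed(2)
n = 500_000
est = sum((1.0 - math.exp(-random.expovariate(1.0))) ** 2 for _ in range(n)) / n
print(est)   # should be close to 1/3, the integral of u^2 over [0, 1]
```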
8. Discuss how you would use the inverse method of Example 1.5.1 to generate a random variable with (a) the continuous logistic density $f(x \mid \mu, \sigma) = \dfrac{e^{-(x-\mu)/\sigma}}{\sigma\bigl[1 + e^{-(x-\mu)/\sigma}\bigr]^2}$, (b) the …
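For part (a) of Problem 8, the logistic CDF $F(x) = 1/[1 + e^{-(x-\mu)/\sigma}]$ inverts in closed form to $x = \mu + \sigma \ln[u/(1-u)]$, so the inverse method applies directly. A sketch with arbitrary parameter values:

```python
# Inverse-CDF sampling from the logistic(mu, sigma) distribution.
import math
import random

random.seed(3)

def logistic_sample(mu: float, sigma: float) -> float:
    u = random.random()                          # U(0, 1)
    return mu + sigma * math.log(u / (1.0 - u))  # inverse of the logistic CDF

draws = sorted(logistic_sample(mu=1.0, sigma=2.0) for _ in range(200_000))
print(draws[len(draws) // 2])                    # sample median should be near mu = 1.0
```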
7. Consider a sequence $X_1, X_2, \dots$ of independent random variables that are exponentially distributed with mean 1. Show that $1 = \limsup_{n\to\infty} \frac{X_n}{\ln n}$, $1 = \limsup_{n\to\infty} \frac{X_n - \ln n}{\ln \ln n}$, $1 = \lim$ …
6. Use Problem 5 to prove that the pattern SFS of a success, failure, and success occurs infinitely many times in a sequence of Bernoulli trials. This result obviously generalizes to more complex …
5. Consider a sequence of independent events $A_1, A_2, \dots$ satisfying $\sum_{i=1}^{\infty} \Pr(A_i) = \infty$. As a partial converse to the Borel-Cantelli lemma, prove that infinitely many of the $A_i$ occur. (Hints: …
4. Suppose $X_n$ is a sequence of nonnegative random variables that converges pointwise to the random variable $X$. If $X$ is integrable, then Scheffé's lemma declares that $\lim_{n\to\infty} E(|X_n - X|) = 0$ …
3. Suppose $A$, $B$, and $C$ are three events with $\Pr(A \cap B) > 0$. Show that $A$ and $C$ are conditionally independent given $B$ if and only if the Markov property $\Pr(C \mid A \cap B) = \Pr(C \mid B)$ holds.
2. The symmetric difference $A \triangle B$ of two events $A$ and $B$ is defined as $(A \cap B^c) \cup (A^c \cap B)$. Show that $A \triangle B$ has indicator $|1_A - 1_B|$. Use this fact to prove the triangle inequality $\Pr(A \triangle C) \le$ …
1. Let $\Omega$ be an infinite set. A subset $S \subset \Omega$ is said to be cofinite when its complement $S^c$ is finite. Demonstrate that the family of subsets $\mathcal{F} = \{S \subset \Omega : S \text{ is finite or cofinite}\}$ is not a …
2.16 (a) For the situation of Example 2.8, verify that $\delta(x) = x/n$ is a generalized Bayes estimator. (b) If $X \sim N(\theta, 1)$ and $L(\theta, \delta) = (\theta - \delta)^2$, show that $X$ is generalized Bayes under the improper …
2.15 Let $X \sim N(\theta, 1)$ and $L(\theta, \delta) = (\theta - \delta)^2$. (a) Show that $X$ is the limit of the Bayes estimators $\delta_{\pi_n}$, where $\pi_n$ is $N(0, n)$. Hence, $X$ is both generalized Bayes and a limit of Bayes …
2.14 Verify the Bayes estimator (2.2.14).
2.13 (a) In Example 2.7, obtain the Jeffreys prior distribution of $(\sigma, \tau)$. (b) Show that for the prior of part (a), the posterior distribution of $(\sigma, \tau)$ is proper.
2.12 For the density (2.2.13) and improper prior $(d\sigma/\sigma) \cdot (d\sigma_A/\sigma_A)$, show that the posterior distribution of $(\sigma, \sigma_A)$ continues to be improper.
2.11 Let $X$ and $Y$ be independently distributed according to distributions $P_\xi$ and $Q_\eta$, respectively. Suppose that $\xi$ and $\eta$ are real-valued and independent according to some prior distributions and …
2.10 Rukhin (1978) investigates the situation when the Bayes estimator is the same for every loss function in a certain set of loss functions, calling such estimators universal Bayes estimators. For … 2.9, show that $\bar{X}$ is the Bayes estimator under every even loss function.
2.9 In Example 2.6, show that the posterior distribution of $\theta$ is symmetric about $\bar{x}$ when the joint prior of $\theta$ and $\sigma$ is of the form $h(\sigma)\,d\sigma\,d\theta$, where $h$ is an arbitrary probability density on …
2.8 In Example 2.6 with $\alpha = g = 0$, show that the posterior distribution given the $X$'s of $\sqrt{n}\,(\theta - \bar{X})/\sqrt{Z/(n-1)}$ is Student's $t$-distribution with $n - 1$ degrees of freedom.
2.7 In Example 2.6, verify that the posterior distribution of $\tau$ is $H(r + g - 1/2,\ 1/(\alpha + z))$.
2.6 Verify the estimator (2.2.10).
2.5 DasGupta (1994) presents an identity relating the Bayes risk to bias, which illustrates that a small bias can help achieve a small Bayes risk. Let $X \sim f(x \mid \theta)$ and $\theta \sim \pi(\theta)$. The Bayes …
2.4 Bickel and Mallows (1988) further investigate the relationship between unbiasedness and Bayes, specifying conditions under which these properties cannot hold simultaneously. In addition, they …
2.3 Show that the estimator (2.2.4) tends in probability (a) to θ as n → ∞, (b) to µ as b → 0, and (c) to θ as b → ∞.
2.2 For the situation of Example 2.1, Lindley and Phillips (1976) give a detailed account of the effect of stopping rules, which we can illustrate as follows. Let $X$ be the number of successes in $n$ …
2.1 Referring to Example 1.5, suppose that $X$ has the binomial distribution $b(p, n)$ and the family of prior distributions for $p$ is the family of beta distributions $B(a, b)$. (a) Show that the marginal …
1.12 Solve the problems analogous to Problems 1.9 and 1.10 when the observations consist of a single random variable $X$ having a negative binomial distribution $Nb(p, m)$, $p$ has the beta prior $B(a, b)$, …
1.11 In Problem 1.9, if $\lambda$ has the improper prior density $d\lambda/\lambda$ (corresponding to $\alpha = g = 0$), under what circumstances is the posterior distribution proper?
1.10 For the situation of the preceding problem, solve the two parts corresponding to Problem 1.5(a) and (b).
1.9 Let $X_1, \dots, X_n$ be iid according to the Poisson distribution $P(\lambda)$ and let $\lambda$ have a gamma distribution $H(g, \alpha)$. (a) For squared error loss, show that the Bayes estimator $\delta_{\alpha,g}$ of $\lambda$ has a …
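For part (a) of Problem 1.9, the gamma prior is conjugate to the Poisson likelihood, and under squared error loss the Bayes estimator is the posterior mean. The sketch below assumes the parameterization in which $H(g, \alpha)$ has density proportional to $\lambda^{g-1} e^{-\alpha\lambda}$ (i.e., $\alpha$ is a rate); if the book uses the scale parameterization, replace $\alpha$ by $1/\alpha$:

```python
# Conjugate Poisson-gamma update (assumes H(g, a) means the gamma density
# proportional to lambda**(g-1) * exp(-a*lambda), i.e., a is a rate parameter).
def poisson_gamma_posterior_mean(xs, g, a):
    # Posterior is gamma with shape g + sum(xs) and rate a + n, so the
    # squared-error Bayes estimator is the posterior mean:
    return (g + sum(xs)) / (a + len(xs))

print(poisson_gamma_posterior_mean([3, 1, 4, 1, 5], g=2.0, a=1.0))  # (2 + 14) / (1 + 5)
```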
1.8 In analogy with Problem 1.2, determine the possible shapes of the gamma density H(g, 1/α), α, g > 0.
1.7 For the situation of Example 1.5, the UMVU estimator of $p(1 - p)$ is $\delta = [x(n - x)]/[n(n - 1)]$ (see Example 2.3.1 and Problem 2.3.1). (a) Compare the estimator $\delta$ of Problem 1.6 with the …
1.6 In Example 1.5, find the Bayes estimator δ of p(1−p) when p has the prior B(a, b).
1.5 For the estimator $\delta$ of Problem 1.4, (a) calculate the bias and maximum bias; (b) calculate the expected squared error and compare it with that of the UMVU estimator.
1.4 In Example 1.5, find the Jeffreys prior for p and the associated Bayes estimator δ.
1.3 In Example 1.5, if $p$ has the improper prior density $\dfrac{1}{p(1-p)}$, show that the posterior density of $p$ given $x$ is proper, provided $0$ …
1.2 Give examples of pairs of values $(a, b)$ for which the beta density $B(a, b)$ is (a) decreasing, (b) increasing, (c) increasing for $p < p_0$ and decreasing for $p > p_0$, and (d) decreasing for $p < p_0$ and increasing for $p > p_0$.
1.1 Verify the expressions for $\pi(\lambda \mid \bar{x})$ and $\delta_k(\bar{x})$ in Example 1.3.
7.17 Show that (7.17) holds if and only if δ depends only on X, defined by (7.18).
7.16 For cluster sampling with unequal cluster sizes $M_i$, Problem 7.14 provides an alternative estimator of $\bar{a}$, with $M_i$ in place of $b_i$. Show that this estimator reduces to $\bar{Y}$ if $b_1 = \cdots = b_N$ and …
7.15 In connection with cluster sampling, consider a set $W$ of vectors $(a_1, \dots, a_M)$ and the totality $G$ of transformations taking $(a_1, \dots, a_M)$ into $(a_1', \dots, a_M')$ such that $(a_1', \dots, a_M') \in W$ and $a_i$ …
7.14 Under the assumptions of Problem 7.13, if $B = b_1 + \cdots + b_N$ is known, an alternative unbiased estimator of $\bar{a}$ is $\frac{1}{n}\sum_{i=1}^{n}\frac{Y_i}{Z_i}\,\bar{b} + \frac{n(N-1)}{(n-1)N}\Bigl(\bar{Y} - \frac{1}{n}\sum_{i=1}^{n}\frac{Y_i}{Z_i}\,\bar{Z}\Bigr)$. [Hint: Use …
7.13 Suppose that an auxiliary variable is available for each element of the population (7.2), so that $\theta = \{(1, a_1, b_1), \dots, (N, a_N, b_N)\}$. If $Y_1, \dots, Y_n$ and $Z_1, \dots, Z_n$ denote the values of $a$ and $b$ …
7.12 For sampling designs where the inclusion probabilities $\pi_i = \sum_{s:\, i \in s} P(s)$ of including the $i$th sample value $Y_i$ are known, a frequently used estimator of the population total is the …
7.11 The approximate variance (7.16) for stratified sampling with a total sample size $n = n_1 + \cdots + n_s$ is minimized when $n_i$ is proportional to $N_i \tau_i$.
7.10 Let $V_p$ be the exact variance (7.15) and $V_r$ the corresponding variance for simple random sampling given by (7.6) with $n = \sum n_i$, $N = \sum N_i$, $n_i/n = N_i/N$, and $\tau^2 = \sum (a_{ij} - a_{\cdot\cdot})^2/N$. (a) Show that $V_r$ …
7.9 Show that the approximate variance (7.16) for stratified sampling with $n_i = nN_i/N$ (proportional allocation) is never greater than the corresponding approximate variance $\tau^2/n$ for simple random …
7.8 Prove Theorem 7.7.
7.7 In simple random sampling, with labels discarded, show that a necessary condition for $h(a_1, \dots, a_N)$ to be U-estimable is that $h$ is symmetric in its $N$ arguments.
7.6 For the situation of Example 7.4, assuming that (a) and (b) hold: (a) Show that $\hat{a}$ of (7.9) is UMVUE for $\bar{a}$. (b) Defining $S^2 = \sum_{i=1}^{\nu}(Y_i - \bar{Y})^2/(\nu - 1)$, show that $\hat{\sigma}^2 = S^2 - \mathrm{MS}[v]$ …
7.5 Random variables $X_1, \dots, X_n$ are exchangeable if any permutation of $X_1, \dots, X_n$ has the same distribution. (a) If $X_1, \dots, X_n$ are iid, distributed as Bernoulli($p$), show that given $\sum_{i=1}^{n} X_i = t$, $X_1, \dots, X_n$ …
7.4 For the situation of Example 7.4: (a) Show that $E\bar{Y}_{\nu-1} = E\bigl[\frac{1}{\nu-1}\sum_{i=1}^{\nu-1} Y_i\bigr] = \bar{a}$. (b) Show that $\bigl[\frac{1}{\nu-1} - \frac{1}{N}\bigr]\frac{1}{\nu-2}\sum_{i=1}^{\nu-1}(Y_i - \bar{Y}_{\nu-1})^2$ is an unbiased estimator of …
7.3 Verify equations (a) (7.6), (b) (7.8), and (c) (7.13).
7.2 If $Y_1, \dots, Y_n$ are the sample values obtained in a simple random sample of size $n$ from the finite population (7.2), then (a) $E(Y_i) = \bar{a}$, (b) $\operatorname{var}(Y_i) = \tau^2$, and (c) $\operatorname{cov}(Y_i, Y_j) = -\tau^2/(N - 1)$.
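All three moments in Problem 7.2 are easy to estimate by simulation, which makes the negative covariance of sampling without replacement concrete. The population below is an arbitrary choice for this added sketch:

```python
# Simulation check of E(Y_i) = a_bar and cov(Y_i, Y_j) = -tau^2 / (N - 1)
# under simple random sampling without replacement (arbitrary population).
import random

random.seed(4)
pop = [2.0, 5.0, 5.0, 8.0, 11.0, 13.0]
N = len(pop)
a_bar = sum(pop) / N
tau2 = sum((a - a_bar) ** 2 for a in pop) / N   # population variance, divisor N

reps = 200_000
y1s, y2s = [], []
for _ in range(reps):
    y = random.sample(pop, 3)                   # SRS of size 3
    y1s.append(y[0])
    y2s.append(y[1])

m = sum(y1s) / reps
cov = sum((u - m) * (v - m) for u, v in zip(y1s, y2s)) / reps
print(m, a_bar)                                 # estimate of E(Y_i) vs a_bar
print(cov, -tau2 / (N - 1))                     # estimate of cov vs -tau^2/(N-1)
```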
7.1 (a) Consider a population $\{a_1, \dots, a_N\}$ with the parameter space defined by the restriction $a_1 + \cdots + a_N = A$ (known). A simple random sample of size $n$ is drawn in order to estimate $\tau^2$. …
6.8 Let $N$ be an integer-valued random variable with distribution $P_\theta(N = n) = P_\theta(n)$, $n = 0, \dots$, for which $N$ is complete. Given $N = n$, let $X$ have the binomial distribution $b(p, n)$ for $n > 0$, with $p$ …
6.7 Instead of a sample of fixed size $n$ in the preceding problem, suppose the observations consist of all robberies taking place within a given time period, so that $n$ is the value taken on by a …
6.6 A city has been divided into $I$ major districts and the $i$th district into $J_i$ subdistricts, all of which have populations of roughly equal size. From the police records for a given year, a random …
6.5 An application of log linear models in genetics is through the Hardy-Weinberg model of mating. If a parent population contains alleles $A$, $a$ with frequencies $p$ and $1 - p$, then standard random …
6.4 Show that the distribution of the preceding problem also arises in Example 6.1 when the $n$ subjects, rather than being drawn from the population at large, are randomly drawn: $n_{1+}$ from Category …
6.3 In Example 6.1, show that the conditional distribution of the vectors $(n_{i1}, \dots, n_{iJ})$ given the values of $n_{i+}$ $(i = 1, \dots, I)$ is that of $I$ independent vectors with multinomial distribution …
6.2 In Example 6.2, show that the conditional independence of $A$, $B$ given $C$ is equivalent to $\alpha^{ABC}_{ijk} = \alpha^{AB}_{ij} = 0$ for all $i$, $j$, and $k$.
6.1 In Example 6.1, show that $\gamma_{ij} = 0$ for all $i$, $j$ is equivalent to $p_{ij} = p_{i+} p_{+j}$. [Hint: $\gamma_{ij} = \xi_{ij} - \xi_{i\cdot} - \xi_{\cdot j} + \xi_{\cdot\cdot} = 0$ implies $p_{ij} = a_i b_j$ and hence $p_{i+} = c a_i$ and $p_{+j} = b_j/c$ for …
5.17 For the situation of Example 5.5, relax the assumption of normality to only assume that $A_i$ and $U_{ij}$ have zero means and finite second moments. Show that among all linear estimators (of the form …
5.16 Consider a nested three-way layout with $X_{ijkl} = \mu + \alpha_i + b_{ij} + c_{ijk} + U_{ijkl}$ $(i = 1, \dots, I;\ j = 1, \dots, J;\ k = 1, \dots, K;\ l = 1, \dots, n)$ in the versions (a) $a_i = \alpha_i$, $b_{ij} = \beta_{ij}$, $c_{ijk} = \gamma_{ijk}$; (b) $a_i$ …
5.15 A general class of models, containing linear models of Types I and II and mixed models as special cases, assumes that the $1 \times n$ observation vector $X$ is normally distributed with mean $\theta A$ as in …
5.14 In Example 5.4: (a) Give a transformation taking the variables $X_{ijk}$ into the $W_{ijk}$ with density (5.11). (b) Obtain the UMVU estimators of $\mu$, $\alpha_i$, $\sigma^2_B$, and $\sigma^2$.
5.13 In Example 5.3, obtain the UMVU estimators of $\sigma^2_A$ and $\sigma^2$ when $\sigma^2_B = 0$, so that the $B$ terms in (5.8) drop out, and compare them with those of Problem 5.12.
5.12 In Example 5.3, obtain the UMVU estimators of the variance components $\sigma^2_A$, $\sigma^2_B$, and $\sigma^2$.
5.11 For the $X_{ijk}$ given in (5.8), determine a transformation taking them to variables $Z_{ijk}$ with the distribution stated in Example 5.3.
5.10 In Example 5.2, obtain the UMVU estimators of the variance components $\sigma^2_A$, $\sigma^2_B$, and $\sigma^2$ when $\sigma^2_C = 0$, and compare them to those obtained without this assumption.
5.9 In Example 5.2, define a linear transformation of the $X_{ijk}$ leading to the joint distribution of the $Z_{ijk}$ stated in connection with (5.6), and verify the complete sufficient statistics (5.7).
5.8 Modify the car illustration of Example 5.1 so that it illustrates (5.5).
5.7 The following problem shows that in Examples 5.1–5.3 every unbiased estimator of the variance components (except $\sigma^2$) takes on negative values. (For some related results, see Pukelsheim …
5.6 In the preceding problem, calculate values of $P(\hat{\sigma}^2_A < 0)$ for finite $n$. When would you expect negative estimates to be a problem? [The probability $P(\hat{\sigma}^2_A < 0)$, which involves an $F$ random …
5.5 In the balanced one-way layout of Example 5.1, determine $\lim P(\hat{\sigma}^2_A < 0)$ as $n \to \infty$ for $\sigma^2_A/\sigma^2 = 0, 0.2, 0.5, 1$ and $s = 3, 4, 5, 6$. [Hint: The limit of the probability can be expressed …
5.4 Obtain the joint density of the $X_{ij}$ in Example 5.1 in the unbalanced case in which $j = 1, \dots, n_i$, with the $n_i$ not all equal, and determine a minimal set of sufficient statistics (which depends on …
5.3 Verify the UMVU estimator of $\sigma^2_A/\sigma^2$ given in Example 5.1.
5.2 Let $A = (a_{ij})$ be a nonsingular $n \times n$ matrix with $a_{ii} = a$ and $a_{ij} = b$ for all $i \ne j$. Determine the elements of $A^{-1}$. [Hint: Assume that $A^{-1} = (c_{ij})$ with $c_{ii} = c$ and $c_{ij} = d$ for all $i \ne j$, …
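A numerical sketch for Problem 5.2 confirms the two-value structure the hint suggests and checks a candidate closed form. The formula below is derived from the two scalar equations the hint produces, $ac + (n-1)bd = 1$ and $ad + bc + (n-2)bd = 0$, not quoted from the book:

```python
# Invert the matrix with a on the diagonal and b elsewhere, and compare the
# entries with the closed form obtained from the hint's two scalar equations.
import numpy as np

n, a, b = 5, 3.0, 0.5                      # arbitrary test values
A = b * np.ones((n, n)) + (a - b) * np.eye(n)
Ainv = np.linalg.inv(A)

den = (a - b) * (a + (n - 1) * b)
c = (a + (n - 2) * b) / den                # candidate common diagonal entry
d = -b / den                               # candidate common off-diagonal entry
print(Ainv[0, 0], Ainv[2, 2], c)           # diagonal entries should match c
print(Ainv[0, 1], Ainv[3, 4], d)           # off-diagonal entries should match d
```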
5.1 In Example 5.1: (a) Show that the joint density of the $Z_{ij}$ is given by (5.2). (b) Obtain the joint multivariate normal density of the $X_{ij}$ directly by evaluating their covariance matrix and then …
4.21 Determine which of the following are contrasts: (a) The regression coefficients $\alpha$, $\beta$, or $\gamma$ of (4.2). (b) The parameters $\mu$, $\alpha_i$, $\beta_j$, or $\gamma_{ij}$ of (4.27). (c) The parameters $\mu$ or $\alpha_i$ of (4.23) …
4.20 In the linear model (4.4), a function $\sum c_i \xi_i$ with $\sum c_i = 0$ is called a contrast. Show that a linear function $\sum d_i \xi_i$ is a contrast if and only if it is translation invariant, that is, satisfies …
4.19 (a) Under the assumptions of Example 4.15, find the variance of $\sum \lambda_i S^2_i$. (b) Show that the variance of (a) is minimized by the values stated in the example.
4.18 The proof of Theorem 4.14(c) is based on two results. Establish that: (a) For large values of $\theta$, the unconditional variance of a linear unbiased estimator will be greater than that of the least …
4.17 A generalization of the order statistics to vectors is given by the following definition. Definition 8.1: The $c_j$-order statistics of a sample of vectors are the vectors arranged in increasing …
4.16 (a) Show that under assumptions (4.35), if $\xi = \theta A$, then the least squares estimate of $\theta$ is $xA'(AA')^{-1}$. (b) If $(X, A)$ is multivariate normal with all parameters unknown, show that the least …
4.15 In the preceding problem, if it is known that the λ’s are zero, determine whether the UMVU estimators of the remaining parameters remain unchanged.
4.14 Extend the results of the preceding problem to the model $\xi_{ijk} = \mu + \alpha_i + \beta_j + \gamma_k + \delta_{ij} + \varepsilon_{ik} + \lambda_{jk}$, where $\sum_i \delta_{ij} = \sum_j \delta_{ij} = \sum_i \varepsilon_{ik} = \sum_k \varepsilon_{ik} = \sum_j \lambda_{jk} = \sum_k \lambda_{jk} = 0$.
4.13 Let $X_{ijk}$ $(i = 1, \dots, I;\ j = 1, \dots, J;\ k = 1, \dots, K)$ be $N(\xi_{ijk}, \sigma^2)$ with $\xi_{ijk} = \mu + \alpha_i + \beta_j + \gamma_k$, where $\sum \alpha_i = \sum \beta_j = \sum \gamma_k = 0$. Express $\mu$, $\alpha_i$, $\beta_j$, and $\gamma_k$ in terms of the $\xi$'s and …
4.12 (a) Show how the decomposition in Problem 4.11(a) must be modified when it is known that the $\gamma_{ij}$ are zero. (b) Use the decomposition of (a) to solve Problem 4.10.