Questions and Answers of Theory of Probability
7.1 For the situation of Example 7.1: (a) The empirical Bayes estimator of θ, using an unbiased estimate of τ²/(σ² + τ²), is δ^EB = [1 − (p − 2)σ²/|x|²] x, the James-Stein estimator. (b) The
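The James-Stein form quoted in Problem 7.1 is easy to compute directly. Below is a minimal NumPy sketch (not from the text); the function name james_stein, the known variance σ² = 1, and the zero-mean test case are illustrative assumptions.

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """delta_EB = (1 - (p - 2) * sigma2 / |x|^2) * x for a p-vector x, assuming p >= 3."""
    p = len(x)
    return (1.0 - (p - 2) * sigma2 / np.sum(x**2)) * x

# Compare squared-error loss with the unshrunken estimate x (illustrative run).
rng = np.random.default_rng(0)
theta = np.zeros(10)                 # true means; shrinkage is favourable in this case
x = rng.normal(theta, 1.0)
print(np.sum((x - theta)**2), np.sum((james_stein(x) - theta)**2))
```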
6.21 (a) Establish (6.6.39) and (6.6.40) for the class of priors given by (6.6.38). (b) Show that the Bayes estimator based on π(θ) ∈ π in (6.6.38), under squared error loss, is given by
6.20 (a) Verify (6.6.37), that under squared error loss r(π, δ) = r(π, δ^π) + E(δ − δ^π)². (b) For X ∼ binomial(p, n), L(p, δ) = (p − δ)², and π = {π : π = beta(a, b), a > 0, b > 0},
6.19 Apply the Laplace approximation (5.6.33) to the hierarchy of Example 6.7 and show that the resulting approximation to the hierarchical Bayes estimator is given by (5.6.32).
6.18 (a) Apply the Laplace approximation (5.6.33) to obtain an approximation to the hierarchical Bayes estimator of Example 6.6.
6.17 (a) Show that if b(·) has a bounded second derivative, then ∫ b(λ)e^{−nh(λ)} dλ = b(λ̂) √(2π/(n h″(λ̂))) e^{−nh(λ̂)} + O(1/n^{3/2}), where h(λ̂) is the unique minimum of h(λ), h′(λ̂) = 0, and
6.16 For the situation of Example 6.7: (a) Calculate the values of the approximation (5.6.32) for the values of Table 6.2. Are there situations where the estimator (5.6.32) is clearly preferred over
6.15 The Taylor series approximation to the estimator (5.5.8) is carried out in a number of steps. Show that: (a) Using a first-order Taylor expansion around the point x̄, we have 1(1 + θ
6.14 Referring to Example 6.6, show that the empirical Bayes estimator is also a hierarchical Bayes estimator using the prior γ(b) = 1/b.
6.13 (a) For the hierarchy (5.5.7), with σ² = 1 and p = 10, evaluate the Bayes risk r(π, δ^π) of the Bayes estimator (5.5.8) for ν = 2, 5, and 10. (b) Calculate the Bayes risk of the estimator
6.12 (a) Show that the empirical Bayes δ^EB(x̄) = (1 − σ²/max{σ², p x̄²}) x̄ of (6.6.5) has bounded mean squared error. (b) Show that a variation of δ^EB(x̄) of part (a), δ_v(x̄) = [1 −
6.11 For E(θ|x) of (5.5.8), show that as ν → ∞, E(θ|x) → [p/(p + σ²)] x̄, the Bayes estimator under a N(0, 1) prior.
6.10 Show for the hierarchy of Example 3.4, where σ² and τ² are known but µ is unknown, that: (a) The empirical Bayes estimator of θ_i, based on the marginal MLE of µ, is [τ²/(σ² + τ²)] X_i
6.9 Strawderman (1992) shows that the James-Stein estimator can be viewed as an empirical Bayes estimator in an arbitrary location family. Let X (p × 1) ∼ f(x − θ), with EX = θ and var X = σ²I.
6.8 For each of the following situations, write the empirical Bayes estimator of the natural parameter (under squared error loss) in the form (6.6.12), using the marginal likelihood estimator of the
6.7 (a) For p_η(x) of (1.5.2), show that for any prior distribution π(η|λ) that is dependent on a hyperparameter λ, the empirical Bayes estimator is given by E[Σ_{i=1}^s η_i ∂T_i(x)/∂x_j | x, λ̂] =
6.6 Extend Theorem 6.3 to the case of Theorem 3.2; that is, if X has density (3.3.7) and η has prior density π(η|γ), then the empirical Bayes estimator is E[η_i ∂T_i(x)/∂x_j | x, γ̂(x)] =
6.5 Referring to Example 6.2: (a) Show that the Bayes risk, r(π, δ^π), of the Bayes estimator (6.6.7) is given by r(π, δ^π) = k E[var(p_k|x_k)] = k ab/[(a + b)(a + b + 1)(a + b + n)]. (b) Show that
6.4 For the situation of Example 6.1: (a) Show that ∫_{−∞}^{∞} e^{−(n/2σ²)(x̄−θ)²} e^{−(1/2)θ²/τ²} dθ = √(2π) [σ²τ²/(σ² + nτ²)]^{1/2} e^{−(n/2)x̄²/(σ² + nτ²)} and, hence, establish (6.6.4). (b)
6.3 For the model (6.3.1), the Bayes estimator δ_λ(x) minimizes ∫ L(θ, d(x)) π(θ|x, λ) dθ and the empirical Bayes estimator, δ_λ̂(x), minimizes ∫ L(θ, d(x)) π(θ|x, λ̂(x)) dθ. Show that
6.2 This problem will investigate conditions under which an empirical Bayes estimator is a Bayes estimator. Expression (6.6.3) is a true posterior expected loss if π(θ|x, λ̂(x)) is a true
6.1 For the model (3.3.1), show that δ_λ(x)|_{λ=λ̂} = δ_λ̂(x), where the Bayes estimator δ_λ(x) minimizes ∫ L[θ, d(x)] π(θ|x, λ) dθ and the empirical Bayes estimator δ_λ̂(x) minimizes
5.21 Let F = {f(x|θ); θ ∈ Ω} be a family of probability densities. The Kullback-Leibler information for discrimination between two densities in F can be written ψ(θ_1, θ_2) = ∫ f(x|θ_1) log f
5.20 Each of m spores has a probability τ of germinating. Of the r spores that germinate, each has probability ω of bending in a particular direction. If s bends in the particular direction, a
5.19 Goel and DeGroot (1981) define a Bayesian analog of Fisher information [see (2.5.10)] as I[π(θ|x)] = ∫ [(∂/∂x)π(θ|x) / π(θ|x)]² dθ, the information that x has about the posterior
5.18 The Kullback-Leibler information, K[f, g] (5.5.25), is not symmetric in f and g, and a modification, called the divergence, remedies this. Define J [f, g], the divergence between f and g, to be
5.17 The original proof of Theorem 5.7 (Goel and DeGroot 1981) used Rényi's entropy function (Rényi 1961) R_α(f, g) = [1/(α − 1)] log ∫ f^α(x) g^{1−α}(x) dµ(x), where f and g are densities, µ is
5.16 Consider the normal hierarchical model X|θ_1 ∼ N(θ_1, σ_1²), θ_1|θ_2 ∼ N(θ_2, σ_2²), ..., θ_{k−1}|θ_k ∼ N(θ_k, σ_k²), where σ_i², i = 1, ..., k, are known. (a) Show that the posterior
5.15 Starting with a U(0, 1) random variable, the transformations of Problem 5.14 will not get us normal random variables, or gamma random variables with noninteger shape parameters. One way of doing
5.14 Starting from a uniform random variable U ∼ Uniform(0, 1), it is possible to construct many random variables through transformations. (a) Show that −log U ∼ exp(1). (b) Show that −Σ_{i=1}^n
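Part (a) of Problem 5.14 can be checked numerically. A minimal sketch (an illustration, not the requested proof) compares the sample mean and variance of −log U with the Exp(1) values, both equal to 1:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
y = -np.log(u)                 # claimed to be exponential(1)
print(y.mean(), y.var())       # both should be close to 1 for Exp(1)
```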
5.13 Show that for the hierarchy (5.5.1), the posterior distributions π(θ|x) and π(λ|x) satisfy π(θ|x) = ∫ [∫ π(θ|x, λ) π(λ|x, θ′) dλ] π(θ′|x) dθ′, π(λ|x) = ∫ π(λ|x, θ) π(θ|x, λ
5.12 For the situation of Example 5.6, show that (a) E[(1/M) Σ_{i=1}^M θ_i] = E[(1/M) Σ_{i=1}^M E(θ|x, τ_i)], (b) var[(1/M) Σ_{i=1}^M θ_i] ≥ var[(1/M) Σ_{i=1}^M E(θ|x, τ_i)]. (c) Discuss when equality might hold in (b). Can you give an
5.11 Successive substitution sampling can be implemented via the Gibbs sampler in the following way. From Problem 5.8(c), we want to calculate h_M = (1/M) Σ_{m=1}^M k(x|x_{mk}) = (1/M) Σ_{m=1}^M ∫ f_{X|Y}(x|y) f_{Y|X}(y|x_{mk}
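For readers who want to see the successive-substitution / Gibbs idea of Problems 5.9-5.11 in code, here is a minimal sketch (not tied to the book's notation) that alternates draws from the two conditional distributions of a bivariate normal with correlation ρ; the stationary marginal of X is N(0, 1). The model choice and function name are illustrative assumptions.

```python
import numpy as np

def gibbs_bivariate_normal(rho=0.8, n_iter=20_000, rng=None):
    """Alternate X | Y ~ N(rho*y, 1 - rho^2) and Y | X ~ N(rho*x, 1 - rho^2)."""
    rng = rng or np.random.default_rng(0)
    x, y = 0.0, 0.0
    xs = np.empty(n_iter)
    for m in range(n_iter):
        x = rng.normal(rho * y, np.sqrt(1 - rho**2))
        y = rng.normal(rho * x, np.sqrt(1 - rho**2))
        xs[m] = x
    return xs

draws = gibbs_bivariate_normal()
print(draws[1000:].mean(), draws[1000:].var())  # roughly 0 and 1 after burn-in
```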
5.10 A direct Monte Carlo implementation of substitution sampling is provided by the data augmentation algorithm (Tanner and Wong 1987). If we define h_{i+1}(x) = ∫ [∫ f_{X|Y}(x|y) f_{Y|X}(y|x′) dy] h_i(x′) dx′,
5.9 To understand the convergence of the Gibbs sampler, let (X, Y) ∼ f(x, y), and define k(x, x′) = ∫ f_{X|Y}(x|y) f_{Y|X}(y|x′) dy. (a) Show that the function h*(·) that solves h*(x) = ∫ k(x,
5.8 The method of Monte Carlo integration allows the calculation of (possibly complicated) integrals by using (possibly simple) generations of random variables. (a) To calculate ∫ h(x)f_X(x) dx, generate
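Problem 5.8(a) describes ordinary Monte Carlo integration of ∫ h(x)f_X(x) dx = E[h(X)]. A minimal sketch, with f_X = N(0, 1) and h(x) = x² as an illustrative choice (not the book's example), where the exact value is 1:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200_000)          # draws from f_X = N(0, 1)
h = x**2                              # integrand h(x)
estimate = h.mean()                   # Monte Carlo estimate of the integral
std_error = h.std(ddof=1) / np.sqrt(len(x))
print(estimate, std_error)            # estimate should be near the exact value 1
```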
5.7 Referring to Example 6.6: (a) Using the prior distribution γ(b) given in (5.6.27), show that the mode of the posterior distribution π(b|x) is b̂ = (p x̄ + α − 1)/(pa + β − 1), and
5.6 The one-way random effects model of Example 2.7 (see also Examples 3.5.1 and 3.5.5) can be written as the hierarchical model X_ij|µ, α_i ∼ N(µ + α_i, σ²), j = 1, ..., n, i = 1, ..., s, α_i
5.5 (a) Analogous to Problem 1.7.9, establish that for any random variables X, Y, and Z, cov(X, Y) = E[cov(X, Y|Z)] + cov[E(X|Z), E(Y|Z)]. (b) For the hierarchy X_i|θ_i ∼ f(x|θ_i), i = 1, ..., p,
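The decomposition in Problem 5.5(a) can be checked by simulation. A small sketch with an arbitrary hierarchical example (Z ∼ N(0, 1); X and Y conditionally independent N(Z, 1) given Z, so cov(X, Y) = var(Z) = 1); the model is an assumption made only for the check:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=500_000)
x = rng.normal(z, 1.0)
y = rng.normal(z, 1.0)

lhs = np.cov(x, y)[0, 1]
# Here cov(X, Y | Z) = 0 and E(X|Z) = E(Y|Z) = Z, so the right side is var(Z).
rhs = 0.0 + z.var(ddof=1)
print(lhs, rhs)   # both close to 1
```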
5.4 Albert and Gupta (1985) investigate theory and applications of the hierarchical model X_i|θ_i ∼ b(θ_i, n), i = 1, ..., p, independent, θ_i|η ∼ beta[kη, k(1 − η)], k known, η ∼ Uniform(0,
5.3 For the model (3.3.3), show that: (a) The marginal prior of θ, unconditional on τ², is given by π(θ) = {Γ(a + 1/2)/[√(2π) Γ(a) b^a]} [1/(1/b + θ²/2)]^{a+1/2}, which for a = ν/2 and b = 2/ν is
5.2 For the situation of Problem 5.1, show that: (a) E(θ|x) = E[E(θ|x, λ)]; (b) var(θ|x) = E[var(θ|x, λ)] + var[E(θ|x, λ)]; and hence that π(θ|x) will tend to have a larger variance than
5.1 For the model (3.3.1), let π(θ|x, λ) be a single-prior Bayes posterior and π(θ|x) be a hierarchical Bayes posterior. Show that π(θ|x) = ∫ π(θ|x, λ) π(λ|x) dλ, where π(λ|x) = f
4.14 The four densities defining the measures ν of Problem 4.12 and 4.13 (dx, (1/y) dy, (1/y) dx dy, (1/y²) dx dy) are the only densities (up to multiplicative constants) for which ν has the stated
4.13 If the elements g ∈ G are pairs of real numbers (a, b), b > 0, corresponding to the transformations gx = a + bx, group composition by (1.4.8) is (a_2, b_2) · (a_1, b_1) = (a_2 + a_1 b_2, b_1 b_2). Of the
4.12 A measure ν over a group G is said to be right invariant if it satisfies ν(Bg) = ν(B) and left invariant if it satisfies ν(gB) = ν(B). Note that if G is commutative, the two definitions agree. (a)
4.11 For the model (3.3.23), find a measure ν in the (ξ, τ) plane which remains invariant under the transformations (3.3.24). The next three problems contain a more formal development of left- and
4.10 There is a correspondence between Haar measures and Jeffreys priors in the location and scale cases. (a) Show that in the location parameter case, the Jeffreys prior is equal to the invariant
4.9 If ν is a left-invariant measure over G, show that ν* defined by ν*(B) = ν(B⁻¹) is right invariant, where B⁻¹ = {g⁻¹ : g ∈ B}. [Hint: Express ν*(Bg) and ν*(B) in terms of ν.]
4.8 In Example 4.9, show that the estimator τ̂(x) = ∫∫ (1/v^r) f((x_1 − u)/v, ..., (x_n − u)/v) dv du / ∫∫ (1/v^{r+1}) f((x_1 − u)/v, ..., (x_n − u)/v) dv du is equivariant under scale changes; that is, it satisfies τ̂(cx) =
4.7 For each of the situations of Problem 4.5: (a) Determine the measure over Ω induced by the right-invariant Haar measure over Ḡ; (b) Determine the Bayes estimator with respect to the measure found
4.6 For each of the situations of Problem 4.5, determine the MRE estimator if the loss is squared error with a scaling that makes it invariant.
4.5 For each of the following situations, find a group G that leaves the model invariant and determine left- and right-invariant measures over G. The joint density of X = (X_1, ..., X_n) and Y =
4.4 The Bayes estimators of η and τ in Example 4.9 are given by (4.31) and (4.32). (Recall Corollary 1.2.)
4.3 The Bayes estimator of τ in Example 4.5 is given by (4.22).
4.2 The Bayes estimator of η in Example 4.7 is given by (4.22).
4.1 For the situation of Example 4.2: (a) Show that the Bayes rule under a beta(α, α) prior is equivariant. (b) Show that the Bayes rule under any prior that is symmetric about 1/2 is equivariant.
3.12 If X_1, ..., X_n are iid from a one-parameter exponential family, the Bayes estimator of the mean, under squared error loss using a conjugate prior, is of the form aX̄ + b for constants a and b. (a)
3.11 For the situation of Problem 3.9, if X_1, ..., X_n are iid as p_η(x) and the prior is the conjugate π(η|k, µ), then the posterior distribution is π(η | k + n, (kµ + nx̄)/(k + n)).
3.10 For each of the following situations, write the density in the form (3.7), and identify the natural parameter. Obtain the Bayes estimator of A(η) using squared loss and the conjugate prior.
3.9 For the natural exponential family p_η(x) of (3.3.7) and the conjugate prior π(η|k, µ) of (3.3.19), establish that: (a) E(X) = A′(η) and var X = A″(η), where the expectation is with respect to
3.8 (a) Use Stein's identity (Lemma 1.5.15) to show that if X_i ∼ p_{η_i}(x) of (3.3.18), then E_η(−∇ log h(X)) = Σ_i η_i E_η[∂T_i(X)/∂X_j]. (b) If X_i are iid from a gamma distribution Gamma(a, b),
3.7 If X has the distribution p_θ(x) of (1.5.1), show that, similar to Theorem 3.2, E(Tη(θ)) = ∇ log m_π(x) − ∇ log h(x).
3.6 For the situation of Example 3.6: (a) Show that if δ is a Bayes estimator of θ, then δ′ = δ/σ² is a Bayes estimator of η, and hence R(θ, δ) = σ⁴ R(η, δ′). (b) Show that the risk of the
3.5 (a) If X_i ∼ Gamma(a, b), i = 1, ..., p, independent with a known, calculate −∇ log h(x) and its expected value. (b) Apply the results of part (a) to the situation where X_i ∼ N(0, σ_i²), i =
3.4 Using Stein's identity (Lemma 1.5.15), show that if X_i ∼ p_{η_i}(x) of (3.3.7), then E_η(−∇ log h(X)) = η, R(η, −∇ log h(X)) = Σ_{i=1}^p E_η[−∂² log h(X)/∂X_i²].
3.3 (a) Prove Corollary 3.3.(b) Verify the calculation of the Bayes estimator in Example 3.4.
3.2 Let X_1, ..., X_n be iid from Gamma(a, b) where a is known. (a) Verify that the conjugate prior for the natural parameter η = −1/b is equivalent to an inverted gamma prior on b. (b) Using the prior
3.1 For the situation of Example 3.1: (a) Verify that the Bayes estimator will only depend on the data through Y = max_i X_i. (b) Show that E(θ|y, a, b) can be expressed as E(θ|y, a, b) = 1 b(n + a −
21. Suppose m = max{k ≥ 1 : k² < n}. Prove that √2/m ≤ 2/√n for all sufficiently large n
Use these inequalities to prove that the constant γ of Problem 19 satisfies ln(1/2) ≤ γ ≤ ln[1 − 1/(2m)]. (Hints: A random walk is self-avoiding if all its steps are in the positive
20. Continuing Problem 19, suppose the random walk is symmetric in the sense that q_i = 1/(2m) if and only if ‖i‖ = 1. Prove that m^n/(2m)^n ≤ p_n ≤ 2m(2m − 1)^{n−1}/(2m)^n.
19. A random walk X_n on the integer lattice in R^m is determined by the transition probabilities Pr(X_{n+1} = j | X_n = i) = q_{j−i} and the initial value X_0 = 0. Let p_n be the probability that X_i ≠ X_j
18. Let f(t) be a nonnegative function on (0, ∞) with lim_{t→0} f(t) = 0. If f(t) is subadditive in the sense that f(s + t) ≤ f(s) + f(t) for all positive s and t, then show that lim_{t↓0} f(t)/t
17. Consider a family of subsets F of a set S with the property that each A ∈ F has exactly d elements. The family F is said to be 2-colorable if we can assign one of two colors, say black and
16. Consider a graph with m nodes and n edges. For any set of nodes S, let X(S) be the number of edges with exactly one endpoint in S. Show that max_S X(S) ≥ n/2. (Hints: Generate S randomly by
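Problem 16 is the probabilistic-method bound for cuts: if each node joins S independently with probability 1/2, each edge has exactly one endpoint in S with probability 1/2, so E[X(S)] = n/2 and some S attains at least n/2. A sketch on a small random graph (the graph construction and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 12
edges = [(i, j) for i in range(m) for j in range(i + 1, m) if rng.random() < 0.3]
n = len(edges)

def cut_size(in_S):
    """Number of edges with exactly one endpoint in S."""
    return sum(in_S[i] != in_S[j] for i, j in edges)

sizes = [cut_size(rng.random(m) < 0.5) for _ in range(2000)]
print(np.mean(sizes), n / 2, max(sizes))  # mean ≈ n/2, so the maximum is ≥ n/2
```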
15. Exactly 10% of the surface of a sphere in R³ is colored black, and 90% is colored white. Show that it is possible to inscribe a cube in the sphere with all of its vertices colored white [80].
14. Let ‖x‖ be the standard Euclidean norm of a vector x ∈ Rⁿ. A vector x with ‖x‖ = 1 is said to be a unit vector. For any sequence x_1, ..., x_n of n unit vectors from Rⁿ, it is possible to find n
13. Check that any five noncollinear points in the plane R² determine at least one obtuse angle.
12. Consider the set of n × n matrices M whose entries are drawn independently and uniformly from the set {−1, 1}. Thus, each such matrix has probability 2^{−n²}. Show that E(det M) = 0 and
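The first claim of Problem 12 follows from symmetry (flipping the sign of one row negates the determinant without changing the distribution). A quick simulation sketch of E(det M) ≈ 0; the matrix size and replicate count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 4, 100_000
dets = np.array([np.linalg.det(rng.choice([-1, 1], size=(n, n))) for _ in range(reps)])
print(dets.mean())   # close to 0 (the spread is large, so the average converges slowly)
```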
11. Continuing Problem 10, let A_n be the number of ascents in a random permutation of {1, ..., n}. Show that E(A_n) = (n − 1)/2 and Var(A_n) = (n − 1)/4 + 2(n − 2)[1/6 − (1/2)²] 1{n ≥ 2} = (n + 1)/12 for n ≥ 2 and 0 otherwise.
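The moments claimed in Problem 11 are easy to check by simulation; a sketch counting ascents of random permutations and comparing to (n − 1)/2 and (n + 1)/12 (n = 10 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 100_000
ascents = np.array([np.sum(np.diff(rng.permutation(n)) > 0) for _ in range(reps)])
print(ascents.mean(), (n - 1) / 2)     # expectation
print(ascents.var(),  (n + 1) / 12)    # variance
```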
10. Euler's combinatorial number e_{nk} denotes the number of permutations π of {1, ..., n} with k ascents π(j) < π(j + 1). Prove that e_{nk} = e_{n,n−1−k} and e_{nk} = (k + 1)e_{n−1,k} + (n −
9. Let Y_n be the number of cycles in a random permutation. Demonstrate that Var(Y_n) = Σ_{k=1}^n 1/k − Σ_{k=1}^n 1/k² ≈ ln n + γ − π²/6. (Hint: Equation (4.24) provides the generating function of Y_n.)
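A quick simulation sketch for Problem 9, counting cycles of random permutations and comparing the sample variance with Σ 1/k − Σ 1/k² and with ln n + γ − π²/6 (the choice n = 200 is arbitrary):

```python
import numpy as np

def cycle_count(perm):
    """Number of cycles in a permutation given as an array of images (0-indexed)."""
    seen = np.zeros(len(perm), dtype=bool)
    cycles = 0
    for start in range(len(perm)):
        if not seen[start]:
            cycles += 1
            j = start
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return cycles

rng = np.random.default_rng(0)
n, reps = 200, 10_000
counts = np.array([cycle_count(rng.permutation(n)) for _ in range(reps)])
k = np.arange(1, n + 1)
exact_var = np.sum(1 / k) - np.sum(1 / k**2)
approx_var = np.log(n) + np.euler_gamma - np.pi**2 / 6
print(counts.var(), exact_var, approx_var)
```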
8. Consider a probability model under which all partitions of a set with n elements are equally likely. Let X_n be the number of blocks in a random partition of the n-set. Show that E(X_n) = B_{n+1}
7. Continuing Problem 6, show that the Fibonacci sequence has generating function F(x) = Σ_{n=0}^∞ f_n xⁿ = 1/(1 − x − x²). Use this to prove the convolution identity Σ_{k=0}^n f_k f_{n−k} = (n + 1)f_{n+1} +
where C_i is the indicator of the event that a domino occupies squares i and i + 1. Similarly prove that Var(D_n) = (2/f_n) Σ_{j,k} f_j f_k f_{n−4−j−k} + E(D_n) − E(D_n)². How can you use E(D_n) and Var(D_n) to
6. Read Example 4.2.4 on Fibonacci numbers. Consider a probability model whose sample space is the collection of tilings of a checkerboard row by square pieces and dominoes. If we
Use the last identity in conjunction with Proposition 12.6.1 to show that F_n is approximately Poisson distributed with mean 1.
5. A sequence X_1, ..., X_n of independent random variables uniformly distributed over the set S_n = {1, 2, ..., n} defines a random function from S_n into itself. Prove that the number F_n = Σ_{j=1}^n 1{X_j = j}
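In Problem 5, F_n counts indices with X_j = j, a sum of n independent Bernoulli(1/n) indicators, which is why it is approximately Poisson with mean 1. A simulation sketch comparing the empirical distribution to Poisson(1) probabilities (n = 50 is an arbitrary choice):

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)
n, reps = 50, 100_000
x = rng.integers(1, n + 1, size=(reps, n))          # random function values
fixed = (x == np.arange(1, n + 1)).sum(axis=1)       # F_n for each replicate
for k in range(4):
    print(k, (fixed == k).mean(), exp(-1) / factorial(k))
```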
4. Consider the uniform distribution p_l = n⁻¹ on an alphabet A with n letters. Let len(s_l) be the number of bits in the bit string s_l representing l under Huffman coding. If m = max{len(s_l) : l
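Problem 4 concerns Huffman code lengths under a uniform source. The sketch below builds an optimal code with the standard heap-based greedy merge (an illustration, not the book's development) and reports m = max len(s_l); the helper name and the choice n = 6 are assumptions.

```python
import heapq

def huffman_lengths(probs):
    """Code length of each symbol under Huffman coding of the given probabilities."""
    heap = [(p, [i]) for i, p in enumerate(probs)]   # (subtree probability, symbols in subtree)
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:                            # every merge adds one bit to these symbols
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

n = 6
lengths = huffman_lengths([1 / n] * n)
print(lengths, max(lengths))
```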
Now let T_{nk} denote the expected number of operations to find x_{(k)}, and put T_n = max_k T_{nk}. One can prove that T_n ≤ 4n. In view of the fact that it takes n − 1 comparisons to create the left and
3. Consider the problem of finding an order statistic x_{(k)} from an unsorted array {x_1, ..., x_n} of n distinct numbers. This can be accomplished in O(n) operations based on the quick sort strategy.
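Problem 3 describes the quickselect idea: partition around a pivot as in quicksort, then recurse into only the side that contains the k-th smallest element. A minimal sketch with a randomized pivot (expected O(n) work); the function name and example data are illustrative:

```python
import random

def quickselect(values, k):
    """Return the k-th smallest element (k = 1, ..., len(values)), assuming distinct values."""
    pivot = random.choice(values)
    left = [v for v in values if v < pivot]
    right = [v for v in values if v > pivot]
    equal = len(values) - len(left) - len(right)
    if k <= len(left):
        return quickselect(left, k)
    if k <= len(left) + equal:
        return pivot
    return quickselect(right, k - len(left) - equal)

data = [7, 2, 9, 4, 1, 8, 5]
print(quickselect(data, 3), sorted(data)[2])   # both print 4
```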
2. Show that the worst case of quick sort takes on the order of n² operations.
1. What is the probability that a random permutation of n distinct numbers contains at least one preexisting splitter? What are the mean and variance of the number of preexisting splitters?
32. Given n integers a_1, ..., a_n, demonstrate that there is some sum Σ_{i=j+1}^k a_i that is a multiple of n. (Hint: Map each of the n + 1 partial sums s_j = Σ_{i=1}^j a_i into its remainder after division by n.
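The hint of Problem 32 translates directly into a constructive search: two of the n + 1 partial sums s_0, ..., s_n share a remainder mod n, and their difference is the desired consecutive sum. A small sketch (helper name and example data are illustrative):

```python
def consecutive_multiple(a):
    """Return indices (j, k) with sum(a[j:k]) divisible by n = len(a)."""
    n = len(a)
    seen = {0: 0}                 # remainder of s_0 = 0
    s = 0
    for k in range(1, n + 1):
        s += a[k - 1]
        r = s % n
        if r in seen:             # pigeonhole: some remainder must repeat
            return seen[r], k
        seen[r] = k

a = [3, 7, 5, 2, 9]
j, k = consecutive_multiple(a)
print(a[j:k], sum(a[j:k]) % len(a))   # the printed block sums to a multiple of 5
```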
31. Consider a graph with more than one node. Let d_i be the degree of node i. Prove that at least two d_i coincide.
30. Suppose n + 1 numbers are chosen from the set {1, 2, ..., 2n}. Show that there is some pair having no common factor other than 1. Show that there is another pair such that one member of the pair
29. Five points are chosen from an equilateral triangle with sides of length 1. Demonstrate that there exist two points separated by a distance of at most 1/2.
28. Continuing Problem 27, show that cX has nth cumulant cⁿκ_n and that X + c has first cumulant κ_1 + c and subsequent cumulants κ_n. If Y is independent of X and has nth cumulant η_n, then
where the sum ranges over solutions to the equations Σ_{m=1}^n m b_m = n and Σ_{m=1}^n b_m = k in nonnegative integers. In particular verify the relationships μ_1 = κ_1, μ_2 = κ_2 + κ_1², μ_3 = κ_3 + 3κ_1κ_2 +
Showing questions 700-800 of 6259.