Questions and Answers: Theory of Probability
33. As another example of a strong uniform time, consider the inverse shuffling method of Reeds [49]. At every shuffle we imagine that each of c cards is assigned independently and uniformly to a
If Uk ≤ pYn, then declare the gene to be an a1 allele in the Y process. Why does this imply that Xn+1 ≤ Yn+1? Once Xn = Yn for some n, they stay coupled. In view of Problem 27, supply the
(d) When u + v = 1, the chain can be irreversible. For a counterexample, choose m = 1 and consider the path 0 → 1 → 2 → 0 and its reverse. Show that the circulation criterion can
Also prove: (a) The pi are increasing in i provided u + v ≤ 1. (b) The chain is ergodic. (c) When u + v = 1, the chain is reversible with equilibrium distribution πj =
32. Suppose in the Wright-Fisher model of Example 7.3.2 that each sampled a1 allele has a chance u of mutating to an a2 allele and that each sampled a2 allele has a chance v of mutating to
… The rate of convergence to this equilibrium distribution can be understood by constructing a strong stationary time. As each molecule is encountered, check it off the list of molecules. Let T
31. A simple change of Ehrenfest’s Markov chain in Example 7.3.3 renders it ergodic. At each step of the chain, flip a fair coin. If the coin lands heads, switch the chosen molecule to the other
30. Suppose the integer-valued random variables U1, U2, V1, and V2 are such that U1 and U2 are independent and V1 and V2 are independent. Demonstrate that ‖πU1+U2 − πV1+V2‖TV ≤ ‖πU1 − πV1‖
29. Let X have a Bernoulli distribution with success probability p and Y a Poisson distribution with mean p. Prove the total variation inequality ‖πX − πY‖TV ≤ p² (7.25) involving the
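The bound in Problem 29 can be checked numerically before attempting a proof. A minimal sketch, assuming total variation is half the ℓ1 distance between the two mass functions (one of the two definitions in (7.6)); the function name is mine:

```python
import math

def tv_bernoulli_poisson(p, terms=60):
    """Half the l1 distance between the Bernoulli(p) and Poisson(p)
    mass functions, i.e. their total variation distance."""
    pois = [math.exp(-p) * p**k / math.factorial(k) for k in range(terms)]
    bern = [1 - p, p] + [0.0] * (terms - 2)
    return 0.5 * sum(abs(a - b) for a, b in zip(bern, pois))

for p in (0.05, 0.1, 0.3, 0.5):
    assert tv_bernoulli_poisson(p) <= p**2   # inequality (7.25)
```

Working out the three absolute differences by hand shows the distance is exactly p(1 − e^(−p)), which is indeed at most p².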
28. Show that the two definitions of the total variation norm given in equation (7.6) coincide.
27. Suppose the integer-valued random variable Y stochastically dominates the integer-valued random variable X. Prove the bound ‖πX − πY‖TV ≤ E(Y) − E(X) by extending inequality (7.7).
26. Suppose that the random variable Y stochastically dominates the random variable X and that f(u) is an increasing function of the real variable u. In view of Problem 24, prove that E[f(Y)] ≥
25. Continuing Problem 24, suppose that X1, X2, Y1, and Y2 are random variables such that Y1 dominates X1, Y2 dominates X2, X1 and X2 are independent, and Y1 and Y2 are independent. Prove that Y1 +
24. The random variable Y stochastically dominates the random variable X provided Pr(Y ≤ u) ≤ Pr(X ≤ u) for all real u. Using quantile coupling, we can construct on a common probability space
23. Let X1 follow a beta distribution with parameters α1 and β1 and X2 follow a beta distribution with parameters α2 and β2. If α1 ≤ α2 and α1 + β1 = α2 + β2, then demonstrate that Pr(X1
22. Let Y follow a negative binomial distribution that counts the number of failures until n successes. Demonstrate by a coupling argument that Pr(Y ≥ k) is decreasing in the success probability
21. Let Y be a Poisson random variable with mean λ. Demonstrate that Pr(Y ≥ k) is increasing in λ for k fixed. (Hint: If λ1 < λ2, then construct coupled Poisson random variables Y1 and Y2
20. Let X be a binomially distributed random variable with n trials and success probability p. Show by a coupling argument that Pr(X ≥ k) is increasing in n for fixed p and k and in p for fixed n
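The monotonicity claimed in Problem 20 is easy to confirm by exact computation before attempting the coupling argument. A sketch (direct tail sums, no coupling involved; the function name and parameter values are mine):

```python
from math import comb

def binom_tail(n, p, k):
    """Pr(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# The tail probability increases in p for fixed n and k ...
tails_p = [binom_tail(20, 0.1 * i, 5) for i in range(1, 10)]
assert all(a <= b for a, b in zip(tails_p, tails_p[1:]))

# ... and increases in n for fixed p and k.
tails_n = [binom_tail(n, 0.3, 5) for n in range(5, 30)]
assert all(a <= b for a, b in zip(tails_n, tails_n[1:]))
```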
k. (Hint: Consider an urn with r red balls, 1 white ball, and n − r − 1 black balls. If we draw m balls from the urn without replacement, then X is the number of red balls drawn, and Y is the
Let Y follow the same hypergeometric distribution except that r + 1 replaces r. Give a coupling proof that Pr(X ≥ k) ≤ Pr(Y ≥ k) for all
coupled walks Xn and Yn based on q and q∗ such that X0 = Y0 = i and such that at the first step Y1 ≤ X1. This requires coordinating the first step of each chain. If X1 > Y1, then run the Xn
18. Consider a random walk on the integers 0, . . . , m with transition probabilities pij = qi for j = i − 1 and pij = 1 − qi for j = i + 1, for i = 1, . . . , m − 1, and p00 = pmm = 1. All other transition
17. Consider a random graph with n nodes. Between every pair of nodes, independently introduce an edge with probability p. If c(p) denotes the probability that the graph is connected, then it is
16. In Example 7.4.1, suppose that f(x) is strictly increasing and g(x) is increasing. Show that Cov[f(X), g(X)] = 0 occurs if and only if Pr[g(X) = c] = 1 for some constant c. (Hint: For necessity,
… It is interesting that E[(X)k] = E[(Y)k] for k
If s is even and X is concentrated on the even integers, then show that X has falling factorial moments E[(X)k] = (b)k/2^k for 0 ≤ k ≤ b. If s is even and X is concentrated on the odd
when s is even, suppose that X follows the equilibrium distribution π. If X is concentrated on the even integers, then it has generating function E(u^X) = ((1 + u)/2)^b + ((1 − u)/2)^b, and if X is
(e) Verify that the unique stationary distribution π of the chain has entries πj = C(b, j)/2^b or πj = C(b, j)/2^(b−1). (Hints: Check detailed balance. For the normalizing constant
… states communicate. (Hints: First, show that it is possible to pass in a finite number of steps from any state j to some state k with k ≤ s. Second, show that it suffices to
(d) If s is odd, then prove that all states communicate. If s is even, then prove that all even states communicate and that all odd
(c) Verify the following behavior. If s is an even integer and X0 is even, then all subsequent Xn are even. If s is an even integer and X0 is odd, then all subsequent Xn are odd. If s is an odd
(b) Demonstrate that the transition probability matrix has entries pjk = Pr(Xn+1 = k | Xn = j) = C(j, i) C(b − j, s − i)/C(b, s), where i = (s + j − k)/2 must be an integer. Note that pjk > 0 if and only if pkj
(a) Show that the stochastic process Xn is a Markov chain. What is the state space? (Hint: You may want to revise your answer after considering question (c).)
15. Consider a set of b light bulbs. At epoch n, a random subset of s light bulbs is selected. Those bulbs in the subset that are on are switched off, and those bulbs that are off are switched on.
How many neighbors does a permutation σ possess? Show how the set of permutations can be made into a reversible Markov chain using the construction of Example 7.3.1. Is the underlying graph
14. Consider the n! different permutations σ = (σ1, . . . , σn) of the set {1, . . . , n} equipped with the uniform distribution πσ = 1/n! [49]. Declare a permutation ω to be a neighbor of σ if there
13. A random walk on a connected graph has equilibrium distribution πv = d(v)/(2m), where d(v) is the degree of v and m is the number of edges. Let tuv be the expected time the chain takes in
12. In Example 7.3.1, show that the chain is aperiodic if and only if the underlying graph is not bipartite.
11. In the Bernoulli-Laplace model, we imagine two boxes with m particles each. Among the 2m particles there are b black particles and w white particles, where b + w = 2m and b ≤ w. At each
10. Show that Kolmogorov’s criterion (7.3) implies that definition (7.4) does not depend on the particular path chosen from i to j.
… then prove that π = πQ and μ = μQ. Furthermore, prove that strict inequality holds in the inequality ‖πQ − μQ‖TV = (1/2) Σi |Σl (πl − μl) qli| ≤ (1/2) Σl |πl − μl| Σi qli = ‖π − μ‖TV. This
9. Demonstrate that an irreducible Markov chain possesses at most one equilibrium distribution. This result applies regardless of whether the chain is finite or aperiodic. (Hints: Let P = (pij) be
8. The transition matrix P of a finite Markov chain is said to be doubly stochastic if each of its column sums equals 1. Find an equilibrium distribution in this setting. Prove that symmetric
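For Problem 8, the candidate equilibrium is the uniform distribution: when the columns of P also sum to 1, Σl (1/n) pli = 1/n for every i. A quick numerical sanity check; building the doubly stochastic matrix as an average of permutation matrices is just one convenient construction I chose:

```python
import random

rng = random.Random(1)
n = 5

# Average a few permutation matrices; the result is doubly stochastic
# (rows and columns each sum to 1).
perms = [rng.sample(range(n), n) for _ in range(4)]
P = [[sum(1.0 for p in perms if p[i] == j) / len(perms) for j in range(n)]
     for i in range(n)]

# Check that the uniform distribution is stationary: (pi P)_i = 1/n.
uniform = [1.0 / n] * n
piP = [sum(uniform[l] * P[l][i] for l in range(n)) for i in range(n)]
assert all(abs(x - 1.0 / n) < 1e-12 for x in piP)
```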
7. Suppose an irreducible Markov chain has period d. Show that the states of the chain can be divided into d disjoint classes C0, . . . , Cd−1 such that pij = 0 unless i ∈ Ck and j ∈ Cl for l = k +
6. Prove that every state of an irreducible Markov chain has the same period.
5. Consider the Cartesian product state space A × B, where A = {0, 1, . . . , a − 1}, B = {0, 1, . . . , b − 1}, and a and b are positive integers. Define a Markov chain that moves from (x, y) to (x + 1
4. Demonstrate that a finite-state Markov chain is ergodic (irreducible and aperiodic) if and only if some power P^n of the transition matrix P has all entries positive. (Hints: For sufficiency,
3. Suppose you repeatedly throw a fair die and record the sum Sn of the exposed faces after n throws. Show that limn→∞ Pr(Sn is divisible by 13) = 1/13 by constructing an appropriate Markov
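Problem 3 suggests tracking the chain on the residues of Sn mod 13; iterating its transition rule exactly shows the limit 1/13 emerging quickly. A sketch (the function name and the choice of 60 steps are mine):

```python
def residue_distribution(n):
    """Exact distribution of S_n mod 13 for n fair-die throws,
    computed by iterating the induced Markov chain on Z/13."""
    dist = [1.0] + [0.0] * 12          # S_0 = 0 with probability 1
    for _ in range(n):
        nxt = [0.0] * 13
        for r, p in enumerate(dist):
            for face in range(1, 7):   # add one die throw
                nxt[(r + face) % 13] += p / 6
        dist = nxt
    return dist

d = residue_distribution(60)
assert max(abs(p - 1 / 13) for p in d) < 1e-6
```

After 60 throws every residue class, in particular 0, already carries probability 1/13 to within numerical noise.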
2. A drunken knight is placed on an empty chess board and randomly moves according to the usual chess rule. Calculate the equilibrium distribution of the knight’s position [152]. (Hints:
1. Take three numbers x1, x2, and x3 and form the successive running averages xn = (xn−3 + xn−2 + xn−1)/3 starting with x4. Prove that limn→∞ xn = (x1 + 2x2 + 3x3)/6.
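Before proving Problem 1, the claimed limit can be verified by iterating the recurrence (function name and test points are mine):

```python
def running_average_limit(x1, x2, x3, n_iter=200):
    """Iterate x_n = (x_{n-3} + x_{n-2} + x_{n-1}) / 3 and return the
    latest term, which converges to (x1 + 2*x2 + 3*x3) / 6."""
    a, b, c = x1, x2, x3
    for _ in range(n_iter):
        a, b, c = b, c, (a + b + c) / 3
    return c

for x in [(0, 0, 6), (1.0, 2.0, 3.0), (-5, 10, 2)]:
    assert abs(running_average_limit(*x) - (x[0] + 2 * x[1] + 3 * x[2]) / 6) < 1e-9
```

One route to the proof: the quantity xn + 2xn+1 + 3xn+2 is invariant under the recurrence, which pins down the limit.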
30. Consider a homogeneous Poisson process Nt on [0, ∞) with intensity λ. Assign to the ith random point a real mark Yi drawn independently from a density p(y) that does not depend on the
29. A random variable S is said to be infinitely divisible if for every positive integer n there exist independent and identically distributed random variables X1, . . . , Xn such that the sum
28. If f(x) is a simple function and Π is a Poisson process with intensity function λ(x), then demonstrate formula (6.17) for the characteristic function of the random sum S.
27. Claims arrive at an insurance company at the times T of a Poisson process with constant intensity λ on [0, ∞). Each time a claim arrives, the company pays S dollars, where S is independently
26. A train departs at time t > 0. During the interval [0, t], passengers arrive at the depot at times T determined by a Poisson process with constant intensity λ. The total waiting time
25. A one-way highway extends from 0 to ∞. Cars enter at position 0 at times s determined by a Poisson process on [0, t] with constant intensity λ. Each car is independently assigned a velocity
24. Consider a homogeneous Poisson process with intensity λ on the set {(x, y, t) ∈ R3 : t ≥ 0}. The coordinate t is considered a time coordinate and the coordinates x and y spatial
23. Continuing Problem 22, perform the same analysis in three dimensions for spheres. Conclude that the number of random spheres that overlap the origin is Poisson with mean (4λπ/3) ∫0^∞ r³ g(r) dr
… random points so generated constitute a Poisson process with intensity η(u) = 2πλ ∫0^∞ (r + u)+ g(r) dr. Conclude from this analysis that the number of random circles that overlap
22. Suppose we generate random circles in the plane by taking their centers (x, y) to be the random points of a Poisson process of constant intensity λ. Each center we independently mark with a
21. The motivation for the negative-multinomial distribution comes from multinomial sampling with d + 1 categories assigned probabilities p1, . . . , pd+1. Sampling continues until category d + 1
20. In the family planning model of Example 6.6.2, let Msd be the number of children born when the family first attains either its quota of s sons or d daughters. Show that E(Msd) = E(min{Ts, Td})
19. In the family planning model of Example 2.3.3, we showed how to compute the probability Rsd that the couple reach their quota of s sons before their quota of d daughters. Deduce the formula Rsd
18. Prove the upper bound (6.11) by calculating E[f(N1, . . . , Nm)]. In the process, condition on the number of Poisson trials N, invoke the assumptions on f(x1, . . . , xm), and apply the bounds of Problem
17. Continuing Problem 16, show that the probability that exactly j boxes are empty is C(m, j) Σk=0^(m−j) C(m − j, k) (−1)^(m−j−k) (k/m)^n.
16. Suppose you randomly drop n balls into m boxes. Assume that a ball is equally likely to land in any box. Use Schrödinger’s method to prove that each box receives an even number of balls
15. Prove that the function ψ(r) = ln[cosh(r)] is even, strictly convex, infinitely differentiable, and asymptotic to |r| as |r| → ∞.
14. Under the assumptions of Problem 13, demonstrate that the exact solution of the one-dimensional equation ∂/∂θj Q(θ | θn) = 0 exists and is positive when Σi lij di > Σi lij yi. Why would this
13. In the absence of a Gibbs smoothing prior, show that one step of Newton’s method leads to the approximate MM update θj^(n+1) = θj^n Σi lij [di e^(−li^t θn)(1 + li^t θn) − yi] / Σi lij l
12. Show that the loglikelihood (6.5) for the transmission tomography model is concave. State a necessary condition for strict concavity in terms of the number of pixels and the number of
11. For a fixed positive integer n, we define the generalized hyperbolic functions [146] nαj(x) of x as the finite Fourier transform coefficients nαj(x) = (1/n) Σk=0^(n−1) e^(x un^k) un^(−jk), where
10. Let X1, X2, . . . be an i.i.d. sequence of exponentially distributed random variables with common intensity 1. The observation Xi is said to be a record value if either i = 1 or Xi > max{X1, . . . , Xi−1}. If Rj
9. In the context of Example 6.4.2, suppose you observe X(1), . . . , X(r) and wish to estimate λ−1 by a linear combination S = Σi=1^r αi X(i). Demonstrate that Var(S) is minimized subject to E(S) =
8. Let X1, . . . , Xn be independent exponentially distributed random variables with intensities λ1, . . . , λn. If λj ≠ λk for j ≠ k, then show that S = X1 + ··· + Xn has density f(t) = Σj=1^n
7. Continuing Problem 6, prove that X(j) has distribution and density functions F(j)(x) = Σk=j^n C(n, k)(1 − e^(−λx))^k e^(−(n−k)λx) and f(j)(x) = n C(n − 1, j − 1)(1 −
6. In the context of Example 6.4.2, show that the order statistics X(j) have means, variances, and covariances E(X(j)) = Σk=1^j 1/[λ(n − k + 1)], Var(X(j)) = Σk=1^j 1/[λ²(n −
5. Let X1, . . . , Xn be independent exponentially distributed random variables with common intensity λ. Define the order statistics X(i) and the increments Zi = X(i) − X(i−1) and Z1 = X(1). Show
4. Let X1, Y1, X2, Y2, . . . be independent exponentially distributed random variables with mean 1. Define Nx = min{n : X1 + ··· + Xn > x} and Ny = min{n : Y1 + ··· + Yn > y}. Demonstrate that Pr(Nx
3. Consider a Poisson process in the plane with constant intensity λ. Find the distribution and density function of the distance from the origin of the plane to the nearest random point. What is
2. Consider a Poisson distributed random variable X whose mean λ is a positive integer. Demonstrate that Pr(X ≥ λ) ≥ 1/2 and Pr(X ≤ λ) ≥ 1/2. (Hints: For the first inequality, show that Pr(X
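The two inequalities in the Poisson Problem 2 above can be spot-checked for small integer means before a proof is attempted. A sketch using the pmf recurrence p(k+1) = p(k)·λ/(k + 1); the function name and truncation length are mine:

```python
import math

def poisson_median_check(lam):
    """Return (Pr(X >= lam), Pr(X <= lam)) for X ~ Poisson(lam),
    where lam is a positive integer."""
    terms = 10 * lam + 50              # far past the bulk of the mass
    p = math.exp(-lam)                 # Pr(X = 0)
    lower = upper = 0.0
    for k in range(terms):
        if k <= lam:
            lower += p
        if k >= lam:
            upper += p
        p *= lam / (k + 1)             # advance to Pr(X = k + 1)
    return upper, lower

for lam in (1, 2, 5, 10, 25):
    upper, lower = poisson_median_check(lam)
    assert upper >= 0.5 and lower >= 0.5
```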
1. Suppose the random variables X and Y have the joint probability generating function E(u^X v^Y) = e^(α(u−1)+β(v−1)+γ(uv−1)) for positive constants α, β, and γ. Show that X and Y are Poisson
7.20 An entertaining (and unjustifiable) result which abuses a hierarchical Bayes calculation yields the following derivation of the James-Stein estimator. Let X ∼ Np(θ, I) and θ|τ² ∼ Np(0,
7.19 As noted by Morris (1983a), an analysis of variance-type hierarchical model, with unequal ni, will yield closed-form empirical Bayes estimators if the prior variances are proportional to the
7.18 (Hierarchical Bayes estimation in a general case.) In a manner similar to the previous problem, we can derive hierarchical Bayes estimators for the model X|ξ ∼ Ns(ξ, σ² I), ξ|β ∼
7.17 (Empirical Bayes estimation in a general case.) A general version of the hierarchical models of Examples 7.7 and 7.8 is X|ξ ∼ Ns(ξ, σ² I), ξ|β ∼ Ns(Zβ, τ² I), where σ² and Zs×r, of
7.16 Generalization of model (7.7.23) to the case of unequal ni is, perhaps, not as straightforward as one might expect. Consider the generalization Xij |ξi ∼ N(ξi, σ²), j = 1, . . . , ni, i = 1, . . .
7.15 The empirical Bayes estimator (7.7.27) can also be derived as a hierarchical Bayes estimator. Consider the hierarchical model Xij |ξi ∼ N(ξi, σ²), j = 1, . . . , n, i = 1, . . . , s, ξi|µ ∼
7.14 For the situation of Example 7.7: (a) Show how to derive the empirical Bayes estimator δL of (7.7.28). (b) Verify the Bayes risk of δL of (7.7.29). For the situation of Example 7.8: (c) Show how
7.13 Prove the following two matrix results, which are useful in calculating estimators from multivariate hierarchical models: (a) For any vector a of the form a = (I − (1/s)J)b, 1′a = Σi ai = 0. (b)
7.12 Consider a hierarchical Bayes estimator for the Poisson model (7.7.15) with loss (7.7.16). Using the distribution (5.6.27) for the hyperparameter b, show that the Bayes estimator is px̄ + α −
7.11 For the situation of Example 7.6, evaluate the Bayes risk of the empirical Bayes estimator (7.7.20) for k = 0 and 1. What values of the unknown hyperparameter b are least and which are most
7.10 For the situation of Example 7.6: (a) Show that the Bayes estimator under the loss Lk(λ, δ) of (7.7.16) is given by (7.7.17). (b) Verify (7.7.19) and (7.7.20). (c) Evaluate the Bayes risks r(0,
7.9 (a) For the model (7.7.15), show that the marginal distribution of Xi is negative binomial(a, 1/(b + 1)); that is, P(Xi = x) = C(a + x − 1, x) (b/(b + 1))^x (1/(b + 1))^a with E Xi = ab and var Xi =
7.8 Theorem 7.5 holds in greater generality than just the normal distribution. Suppose X is distributed according to the multivariate version of the exponential family pη(x) of (33.7), pη(x) =
7.7 For the model X|θ ∼ Np(θ, σ² I), θ|τ² ∼ N(µ, τ² I), the Bayes risk of the ordinary Stein estimator δi(x) = µi + [1 − (p − 2)σ²/Σj (xj − µj)²](xi − µi)
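The Stein estimator in Problem 7.7 can also be explored by simulation. The Monte Carlo sketch below fixes µ = 0 and σ² = 1 and picks an arbitrary θ (all choices mine, purely illustrative) to compare the frequentist risk of the estimator with that of the MLE X:

```python
import random

def risks(p, theta_norm, trials=20000, rng=random.Random(0)):
    """Monte Carlo estimates of E|X - theta|^2 and E|delta(X) - theta|^2,
    where delta(x) = (1 - (p - 2)/|x|^2) x is the Stein estimator with
    mu = 0 and sigma^2 = 1."""
    theta = [theta_norm / p**0.5] * p      # any theta with the given norm
    mle = stein = 0.0
    for _ in range(trials):
        x = [t + rng.gauss(0.0, 1.0) for t in theta]
        shrink = 1.0 - (p - 2) / sum(xi * xi for xi in x)
        mle += sum((xi - t) ** 2 for xi, t in zip(x, theta))
        stein += sum((shrink * xi - t) ** 2 for xi, t in zip(x, theta))
    return mle / trials, stein / trials

mle_risk, stein_risk = risks(p=10, theta_norm=2.0)
assert stein_risk < mle_risk    # Stein dominates the MLE when p >= 3
```

The MLE's risk hovers near pσ² = 10 regardless of θ, while the shrinkage estimator does markedly better when |θ| is small.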
7.6 For the model X|θ ∼ Np(θ, σ² I), θ|τ² ∼ Np(µ, τ² I), show that: (a) The empirical Bayes estimator, using an unbiased estimator of τ²/(σ² + τ²), is the Stein estimator δJS i
7.5 A general version of the empirical Bayes estimator (7.7.3) is given by δc(x) = (1 − cσ²/|x|²)x, where c is a positive constant. (a) Use Corollary 7.2 to verify that Eθ|θ − δc(X)|² = pσ² +
7.4 Verify (7.7.9), the expression for the Bayes risk of δτ0. (Problem 3.12 may be helpful.)
7.3 The derivation of an unbiased estimator of the risk (Corollary 7.2) can be extended to a more general model in the exponential family, the model of Corollary 3.3, where X = X1, . . . , Xp has the
7.2 Establish Corollary 7.2. Be sure to verify that the conditions on g(x) are sufficient to allow the integration-by-parts argument. [Stein (1973, 1981) develops these representations in the normal
Showing 600–700 of 6259