Questions and Answers of Theory Of Probability
(a) Show that the multivariate generating functions characterizing this process satisfy the differential equations $\frac{\partial}{\partial t}P_1(t,z) = -\lambda P_1(t,z) + \lambda(1-\alpha)P_1(t,z)^2 + \lambda\alpha P_1(t,z)P_2(t,$ …
25. In the cancer model of Coldman and Goldie [41], cancer cells are of two types. Type 1 cells are ordinary cancer cells. Type 2 particles are cancer cells with resistance to an anti-cancer …
24. Continuing Problem 22, consider a single-type process starting from a single particle. Suppose that the progeny generating function is $\sum_{d=0}^{\infty} p_d z^d = z^k$ for some $k \ge 2$ and the death intensity …
23. Continuing Problem 22, consider calculation of the variance $\operatorname{Var}(X_t)$ for a single-type process starting from a single particle. Suppose that the progeny generating function …
where $P(t,z) = [P_1(t,z),\ldots,P_n(t,z)]$. (Hints: Write the generating function $P_i(t,z)$ as the expectation $E(z^{X_t} \mid X_0 = e_i)$, where $e_i$ is the usual $i$th unit vector. The ancestral particle either lives …
22. Consider a multitype continuous-time branching process with $n$ particle types. Let $X_{it}$ be the number of particles of type $i$ at time $t$. Section 9.6 derives a system of ordinary differential …
(f) We can turn the approximate correspondence discussed in parts (b) and (c) around and seek to mimic a branching process by a birth-death process. The most natural method involves matching the …
(e) Alternatively, we can view the mother particle as dying in one of two ways. Either it dies in the ordinary way at rate $\mu_i$, or it disappears at a reproduction event and is replaced by an …
(d) Explain in layman's terms the meaning of the ratio defining $f_{ij}$.
(b) If we delay all offspring until the moment of death, then we get a branching process approximation to the birth-death process. What is the death rate $\lambda_i$ in the branching process approximation?
(a) Show that in a birth-death process, a particle of type $i$ produces the count vector $d = (d_1,\ldots,d_n)$ of daughter particles with probability $p_{id} = \frac{\mu_i}{(\mu_i+\beta_i)^{|d|+1}} \binom{|d|}{d_1\,\cdots\,d_n} \prod_{k=1}^{n} \beta_{ik}^{d_k}$ …
event generates one and only one daughter particle. Thus, in a birth-death process each particle continually buds off daughter particles until it dies. In contrast, each particle of a multitype …
21. In some applications of continuous-time branching processes, it is awkward to model reproduction as occurring simultaneously with death. Birth-death processes offer an attractive alternative. …
20. In a certain species, females die with intensity $\mu$ and males with intensity $\nu$. All reproduction is through females at an intensity of $\lambda$ per female. At each birth, the mother bears a …
Collecting the $m_i(t)$ and $\alpha_i$ into row vectors $m(t)$ and $\alpha$, respectively, and the $\lambda_j(f_{ji} - 1_{\{j=i\}})$ into a matrix $\Omega$, show that $m(t) = m(0)e^{t\Omega} + \alpha\Omega^{-1}(e^{t\Omega} - I)$, assuming that $\Omega$ is …
19. Consider a multitype branching process with immigration. Suppose that each particle of type $i$ has an exponential lifetime with death intensity $\lambda_i$ and produces on average $f_{ij}$ particles of type …
18. Consider a continuous-time branching process with two types. If the process is irreducible and has reproduction matrix $F = (f_{ij})$, then demonstrate that comparison of the criterion $R_0 = f_{11} +$ …
17. Let $e(s)$ be the vector of extinction probabilities defined in Section 9.6. If the dominant eigenvalue $\rho$ of the matrix $\Omega$ has row eigenvector $w^t$ with positive entries and norm $\|w\|_1 = 1$, then …
16. At an X-linked recessive disease locus, there are two alleles, the normal allele (denoted +) and the disease allele (denoted −). Construct a two-type branching process for carrier females …
15. Yeast cells reproduce by budding. Suppose at each generation a yeast cell either dies with probability $p$, survives without budding with probability $q$, or survives with budding off a daughter …
14. Branching processes can be used to model the formation of polymers [154]. Consider a large batch of identical subunits in solution. Each subunit has $m > 1$ reactive sites that can attach to …
13. In a subcritical branching process with immigration, let $Q(s)$ be the progeny generating function and $R(s)$ the generating function of the number of new immigrants at each generation. If the …
12. Suppose $X_n$ denotes the number of particles in a branching process with immigration. Let $\mu$ be the mean number of progeny per particle and $\alpha$ the mean number of new immigrants per generation. …
11. In a branching process, let $R(s) = \sum_{k=0}^{\infty} r_k s^k$ be the generating function of the total number of particles $Y_\infty$ over all generations starting from a single …
10. In a subcritical branching process, let $T$ be the generation at which the process goes extinct starting from a single particle at generation 0. If $s_k = \Pr(T \le k)$ and $Q(s)$ is the progeny …
9. Newton's method offers an alternative method of finding the extinction probability $s_\infty$ of a supercritical generating function $Q(s)$. Let $s_0 = 0$ and $t_0 = 0$ be the initial values in the …
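Problem 9's Newton iteration on $Q(s) - s = 0$ is easy to sketch numerically. The progeny law below is a made-up example, not one from the text: $Q(s) = \tfrac14 + \tfrac{s}{4} + \tfrac{s^2}{2}$ has mean $\tfrac54 > 1$, and its smaller fixed point is $s_\infty = \tfrac12$.

```python
def extinction_newton(Q, Qprime, s0=0.0, tol=1e-12, max_iter=100):
    """Newton iteration for the smallest root of Q(s) = s."""
    s = s0
    for _ in range(max_iter):
        s_new = s - (Q(s) - s) / (Qprime(s) - 1.0)
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s

# hypothetical supercritical progeny distribution: q0 = 1/4, q1 = 1/4, q2 = 1/2
Q = lambda s: 0.25 + 0.25 * s + 0.5 * s ** 2
Qp = lambda s: 0.25 + s

s_inf = extinction_newton(Q, Qp)
print(s_inf)  # converges to 0.5, the smaller fixed point of Q
```

Started from $s_0 = 0$, the iterates increase monotonically to $s_\infty$, just as the functional iterates $Q_n(0)$ do, but quadratically fast.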
8. Let $s_\infty$ be the extinction probability of a supercritical branching process with progeny generating function $Q(s) = \sum_{k=0}^{\infty} q_k s^k$. If the mean $\mu$ of $Q(s)$ is fixed, then one can construct …
7. Suppose $Q(s)$ is a generating function with mean $\mu$ and variance $\sigma^2$. If $\mu - 1$ is small and positive, then verify that $Q(s)$ has approximate extinction probability $e^{-2(\mu-1)/\sigma^2}$. Show that …
6. The generating function $\frac{p}{1-qs}$ is an example of a fractional linear transformation $\frac{\alpha s+\beta}{\gamma s+\delta}$ [93]. To avoid trivial cases where the fractional linear transformation is undefined or …
5. Continuing Problem 36 of Chapter 7, let $T = \min\{n : S_n = 0\}$ be the epoch of the first visit to 0 given $S_0 = 1$. Define $Z_0 = 1$ and $Z_j = \sum_{n=0}^{T-1} 1_{\{S_n=j,\,S_{n+1}=j+1\}}$ for $j \ge 1$. Thus, $Z_j$ is the …
4. Continuing Example 9.2.5, let $P(s)$ be the generating function for the total number of carrier and normal children born to a carrier of the mutant gene. Express the progeny generating function …
3. Consider a supercritical branching process $X_n$ with progeny generating function $Q(s)$ and extinction probability $s_\infty$. Show that $\Pr(1 \le X_n \le k)\,s_\infty^k \le Q_n(s_\infty)$ for all $k \ge 1$ and that …
2. Let $X_n$ be the number of particles at generation $n$ in a supercritical branching process with progeny mean $\mu$ and variance $\sigma^2$. If $X_0 = 1$ and $Z_n = X_n/\mu^n$, then find $\lim_{n\to\infty} E(Z_n)$ and $\lim_{n\to\infty}$ …
1. If $p$ and $\alpha$ are constants in the open interval $(0,1)$, then show that $Q(s) = 1 - p(1-s)^\alpha$ is a generating function with $n$th functional iterate $Q_n(s) = 1 - p^{1+\alpha+\cdots+\alpha^{n-1}}(1-$ …
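The clipped iterate formula in Problem 1 can be checked numerically. This sketch assumes the truncated trailing factor continues as $(1-s)^{\alpha^n}$, which is our completion by induction and not confirmed by the excerpt; $p$ and $\alpha$ are arbitrary values in $(0,1)$.

```python
p, alpha = 0.3, 0.5  # arbitrary constants in (0, 1)

def Q(s):
    return 1.0 - p * (1.0 - s) ** alpha

def Qn_closed(s, n):
    # assumed completion of the clipped display: exponent on p is the
    # geometric sum 1 + alpha + ... + alpha^(n-1), trailing factor (1-s)^(alpha^n)
    exponent = (1.0 - alpha ** n) / (1.0 - alpha)
    return 1.0 - p ** exponent * (1.0 - s) ** (alpha ** n)

# compare the literal n-fold composition with the closed form
for s in (0.0, 0.25, 0.7):
    iterate = s
    for n in range(1, 6):
        iterate = Q(iterate)
        assert abs(iterate - Qn_closed(s, n)) < 1e-9
print("iterates agree")
```

The induction step is one line: $Q(Q_n(s)) = 1 - p\,(p^{1+\cdots+\alpha^{n-1}}(1-s)^{\alpha^n})^{\alpha}$, which regroups into the same shape with $n+1$ in place of $n$.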
In particular for a time-homogeneous Kendall process with $X_0 = 0$, show that $E(Y_t) = \frac{\nu}{t(\alpha-\mu)^2}\left(e^{(\alpha-\mu)t} - 1\right) - \frac{\nu}{\alpha-\mu}$. Under the same circumstances, calculate $\operatorname{Var}(Y_t)$ using Problems …
33. Consider the time averages $Y_t = \frac{1}{t}\int_0^t X_s\,ds$ of a nonnegative stochastic process $X_t$ with finite means and variances. Prove that $E(Y_t) = \frac{1}{t}\int_0^t E(X_s)\,ds$ and $\operatorname{Var}(Y_t) = \frac{2}{t^2}\int_0^t\!\int_0^r \operatorname{Cov}(X_s,X_r)\,ds$ …
32. Consider a time-homogeneous Kendall process with no immigration. Show that the generating function $G(s,t)$ of $X_t$ satisfies the limit $\lim_{t\to\infty} \frac{G(s,t)-G(0,t)}{1-G(0,t)} = s(\mu-\alpha)/\mu$ …
31. Continuing Problem 30, prove that the equilibrium distribution $\pi$ has $j$th component $\pi_j = \left(1-\frac{\alpha}{\mu}\right)^{\nu/\alpha}\left(-\frac{\alpha}{\mu}\right)^{j}\binom{-\nu/\alpha}{j}$. Do this by expanding the generating function on the right-hand side …
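Reading Problem 31's signed-binomial display as the equivalent negative binomial mass $\binom{\nu/\alpha+j-1}{j}(1-\alpha/\mu)^{\nu/\alpha}(\alpha/\mu)^j$ (this is our reconstruction of a garbled formula, so treat it as an assumption), the masses should sum to one. The parameters below are illustrative.

```python
# assumed reconstruction: negative binomial with ratio r = alpha/mu and
# shape c = nu/alpha; parameter values are illustrative, not from the text
alpha, mu, nu = 1.0, 2.0, 0.7
r, c = alpha / mu, nu / alpha

def pi(j):
    # binom(c + j - 1, j) * (1 - r)^c * r^j via a running product,
    # since c is not an integer
    coef = 1.0
    for i in range(j):
        coef *= (c + i) / (i + 1)
    return coef * (1.0 - r) ** c * r ** j

total = sum(pi(j) for j in range(200))
print(total)  # should be very close to 1 if the reconstruction is right
```

The tail decays like $r^j$, so 200 terms are far more than enough here.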
30. In the homogeneous version of Kendall's process, show that the generating function $G(s,t)$ of $X_t$ satisfies $\lim_{t\to\infty} G(s,t) = \left(\frac{1-\alpha/\mu}{1-\alpha s/\mu}\right)^{\nu/\alpha}$ (8.22) when $\alpha$ …
29. Continuing Problem 28, demonstrate that $\operatorname{Cov}(X_{t_2}, X_{t_1}) = e^{(\alpha-\mu)(t_2-t_1)}\operatorname{Var}(X_{t_1})$ for $0 \le t_1 \le t_2$. (Hints: First show that $\operatorname{Cov}(X_{t_2},X_{t_1}) = \operatorname{Cov}[E(X_{t_2}\mid X_{t_1}), X_{t_1}]$. Then apply Problem …
28. In the homogeneous version of Kendall's process, show that $\operatorname{Var}(X_t) = \frac{\nu}{(\alpha-\mu)^2}\left(\alpha e^{(\alpha-\mu)t} - \mu\right)\left(e^{(\alpha-\mu)t} - 1\right) + \frac{i(\alpha+\mu)}{\alpha-\mu}\,e^{(\alpha-\mu)t}\left(e^{(\alpha-\mu)t} - 1\right)$ when $X_0 = i$.
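The variance display in Problem 28 was reconstructed from damaged text, so it is worth cross-checking against the first two moment equations of a birth-death-immigration chain, $m' = (\alpha-\mu)m + \nu$ and $S' = 2(\alpha-\mu)S + (\alpha+\mu+2\nu)m + \nu$ for the second moment $S$, integrated with Runge-Kutta. Parameter values are illustrative.

```python
import math

# illustrative parameters: per-particle birth alpha, death mu, immigration nu
alpha, mu, nu, i0, t_end = 1.5, 1.0, 0.4, 3, 1.0
delta = alpha - mu

def derivs(m, s):
    # moment ODEs from the infinitesimal rates (alpha*n + nu up, mu*n down)
    dm = delta * m + nu
    ds = 2.0 * delta * s + (alpha + mu + 2.0 * nu) * m + nu
    return dm, ds

m, s = float(i0), float(i0) ** 2
h, steps = 1e-3, 1000
for _ in range(steps):  # classical RK4
    k1m, k1s = derivs(m, s)
    k2m, k2s = derivs(m + 0.5 * h * k1m, s + 0.5 * h * k1s)
    k3m, k3s = derivs(m + 0.5 * h * k2m, s + 0.5 * h * k2s)
    k4m, k4s = derivs(m + h * k3m, s + h * k3s)
    m += h * (k1m + 2 * k2m + 2 * k3m + k4m) / 6
    s += h * (k1s + 2 * k2s + 2 * k3s + k4s) / 6

var_numeric = s - m * m
e = math.exp(delta * t_end)
var_closed = (nu / delta ** 2) * (alpha * e - mu) * (e - 1) \
    + i0 * (alpha + mu) / delta * e * (e - 1)
assert abs(var_numeric - var_closed) < 1e-6
print(var_numeric, var_closed)
```

Agreement of the two values supports the reconstructed closed form; it is not a proof, just a numerical sanity check.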
27. Prove that $G(s,t)$ defined by equation (8.18) satisfies the partial differential equation (8.16) with initial condition $G(s,0) = s$ and $\nu(t) = 0$.
See the reference [182] for a list of biological applications and further theory. (Hint: If a particle arrives at site 0 at time $X$ and ultimately reaches site $n$, then mark it by the corresponding …
26. On the lattice $\{0,1,2,\ldots,n\}$, particles are fed into site 0 according to a Poisson process with intensity $\lambda$. Once on the lattice a particle hops one step to the right with intensity $\beta$ and …
25. Consider a pure birth process $X_t$ with birth intensity $\lambda_j$ when $X_t = j$. Let $T_j$ denote the waiting time for a passage from state $j$ to state $j+1$. The random variable $T = \sum_j T_j$ is the time …
24. Cars arrive at an auto repair shop according to a Poisson process with intensity $\lambda$. There are $m$ mechanics on duty, and each takes an independent exponential length of time with intensity $\mu$ to …
If we mark each $X$ by the pair $(Y,Z)$, then we get a marked Poisson process on $\mathbb{R}^3$. Here we suppose for the moment that all healthy people get sick before they eventually die of the given disease.
23. The equilibrium distribution of the numbers $(M,N)$ of healthy and sick people in Example 8.5.3 can be found by constructing a marked Poisson process. The time $X$ at which a random person enters …
22. Apply Problem 21 to the hemoglobin model in Example 8.5.1 with the understanding that the attachment sites operate independently with the same rates. What are the particles? How many states can …
21. Let $n$ indistinguishable particles independently execute the same continuous-time Markov chain with infinitesimal transition probabilities $\lambda_{ij}$. Define a new Markov chain called the composition …
Now choose $p$ so that $p^2/q^2 = \beta/\alpha$ and prove that the equilibrium distribution is approximately normally distributed with mean and variance $E(X_\infty) = \frac{n\sqrt{\beta/\alpha}}{1+\sqrt{\beta/\alpha}}$ and $\operatorname{Var}(X_\infty) = \frac{n\sqrt{\beta/\alpha}}{2\left(1+\sqrt{\beta/\alpha}\right)^2}$ …
(e) To handle the case $\alpha = \beta$, we revert to the normal approximation to the binomial distribution. Argue that $\binom{n}{k}p^k q^{n-k} = q^n\binom{n}{k}\left(\frac{p}{q}\right)^k \approx \frac{1}{\sqrt{2\pi npq}}\,e^{-\frac{(k-np)^2}{2npq}}$ for $p+q=1$. Show …
(d) For the special case $\alpha = \beta$, demonstrate that $\pi_k = \binom{n}{k}^2 \Big/ \binom{2n}{n}$. To do so first prove the identity $\sum_{k=0}^{n}\binom{n}{k}^2 = \binom{2n}{n}$.
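Both claims in part (d) are one-liners to verify for a specific $n$; this is only a spot check of the identity and the normalization, not the combinatorial proof the problem asks for.

```python
import math

n = 12  # any small positive integer works as a spot check

# Vandermonde-type identity: sum of C(n, k)^2 equals C(2n, n)
lhs = sum(math.comb(n, k) ** 2 for k in range(n + 1))
assert lhs == math.comb(2 * n, n)

# the equilibrium distribution for alpha = beta then sums to one
pi = [math.comb(n, k) ** 2 / math.comb(2 * n, n) for k in range(n + 1)]
assert abs(sum(pi) - 1.0) < 1e-12
print("identity and normalization hold for n =", n)
```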
(c) Use Kolmogorov's formula and calculate the equilibrium distribution $\pi_k = \pi_0\left(\frac{\beta}{\alpha}\right)^k\binom{n}{k}^2$ for $k$ between 0 and $n$.
(b) Show that the chain is irreducible and reversible.
What about the other rates?
(a) Argue that the infinitesimal transition rates of the chain amount to $\lambda_{i,i-1} = i^2\alpha$ and $\lambda_{i,i+1} = (n-i)^2\beta$.
20. A chemical solution initially contains $n/2$ molecules of each of the four types A, B, C, and D. Here $n$ is a positive even integer. Each pair of A and B molecules collides at rate $\alpha$ to produce …
Why does this imply that equilibrium is reached shortly after the time $\frac{\ln n}{2\lambda}$?
19. Continuing Problem 18, it is possible to find a strong stationary time. For a single particle imagine a Poisson process with intensity $2\lambda$. At each event of the process, flip a fair coin. If the …
$\frac{1}{2}\left(1 + e^{-2\lambda t}\right)$. Now consider the continuous-time Markov chain for the number of molecules in the left half of the box. Given that the $n$ molecules behave independently, prove that finite-time …
18. Recall from Example 7.3.3 that Ehrenfest's model of diffusion involves a box with $n$ gas molecules. The box is divided in half by a rigid partition with a very small hole. Molecules drift …
17. In Kimura's model, suppose that two new species bifurcate at time 0 from an ancestral species and evolve independently thereafter. Show that the probability that the two species possess the …
16. In our discussion of mean hitting times in Section 7.6, we derived the formula $t = (I-Q)^{-1}\mathbf{1}$ for the vector of mean times spent in the transient states $\{1,\ldots,m\}$ en route to the absorbing …
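The formula $t = (I-Q)^{-1}\mathbf{1}$ from Problem 16 is easy to illustrate on a tiny example. The fair gambler's ruin on $\{0,1,2,3\}$ is our choice, not the book's: states 1 and 2 are transient, $Q = \begin{pmatrix} 0 & 1/2 \\ 1/2 & 0 \end{pmatrix}$, and the classical answer is $t_i = i(N-i)$ with $N = 3$.

```python
# transient-state transition matrix for fair gambler's ruin on {0, 1, 2, 3}
Q = [[0.0, 0.5], [0.5, 0.0]]

# solve (I - Q) t = 1 for the 2x2 system by Cramer's rule
a = [[1.0 - Q[0][0], -Q[0][1]],
     [-Q[1][0], 1.0 - Q[1][1]]]
b = [1.0, 1.0]
det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
t1 = (b[0] * a[1][1] - a[0][1] * b[1]) / det
t2 = (a[0][0] * b[1] - b[0] * a[1][0]) / det

print(t1, t2)  # both 2.0, matching i * (N - i) with N = 3
```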
15. In the random walk of Problem 14, suppose that escape is possible from state 0 to state 1 with transition intensity $\alpha_0 > 0$. No other transitions out of state 0 are permitted. Let $t_l$ be the …
jump. Rewrite the equation in terms of the differences $d_k = h_{k+1} - h_k$ and solve in terms of $d_0 = h_1$. This gives $h_k = d_{k-1} + \cdots + d_0$ up to the unknown $h_1$. The …
14. Consider a random walk on the set $\{0,1,\ldots,n\}$ with transition intensities $\lambda_{ij} = \begin{cases} \alpha_i & j = i+1 \\ \beta_i & j = i-1 \\ 0 & \text{otherwise} \end{cases}$ for $1 \le i \le n-1$. Let $h_k$ be the probability that the process …
13. Verify the Duhamel-Dyson identity $e^{t(A+B)} = e^{tA} + \int_0^t e^{(t-s)(A+B)} B e^{sA}\,ds$ for matrix exponentials. (Hint: Both sides satisfy the same differential equation.)
12. Prove that $\det(e^A) = e^{\operatorname{tr}(A)}$, where $\operatorname{tr}$ is the trace function. (Hint: Since the diagonalizable matrices are dense in the set of matrices [94], by continuity you may assume that $A$ is diagonalizable.)
Show that $AB \ne BA$ and that $e^A e^B = e^{a+b}\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}$ while $e^{A+B} = e^{a+b}\left[\cosh(1)\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \sinh(1)\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\right]$. Hence, $e^A e^B \ne e^{A+B}$. (Hint: Use Problem 10 to calculate $e^A$ and $e^B$. For $e^{A+B}$ write $A+B = (a$ …
11. Define the matrices $A = \begin{pmatrix} a & 0 \\ 1 & a \end{pmatrix}$ and $B = \begin{pmatrix} b & 1 \\ 0 & b \end{pmatrix}$.
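The displays in Problem 11 can be checked numerically with a truncated Taylor series for the matrix exponential; the values of $a$ and $b$ below are arbitrary.

```python
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(X, terms=60):
    # truncated Taylor series; fine for 2x2 matrices of small norm
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = mat_mul(term, X)                                   # X^n
        term = [[term[i][j] / n for j in range(2)] for i in range(2)]  # /n!
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

a, b = 0.3, -0.2  # arbitrary
A = [[a, 0.0], [1.0, a]]
B = [[b, 1.0], [0.0, b]]

eA_eB = mat_mul(mat_exp(A), mat_exp(B))
eApB = mat_exp([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)])

s = math.exp(a + b)
ch, sh = math.cosh(1.0), math.sinh(1.0)
prod_closed = [[s, s], [s, 2.0 * s]]               # e^(a+b) [[1,1],[1,2]]
sum_closed = [[s * ch, s * sh], [s * sh, s * ch]]  # cosh/sinh form

for i in range(2):
    for j in range(2):
        assert abs(eA_eB[i][j] - prod_closed[i][j]) < 1e-9
        assert abs(eApB[i][j] - sum_closed[i][j]) < 1e-9
assert abs(eA_eB[1][1] - eApB[1][1]) > 0.1  # e^A e^B differs from e^(A+B)
print("closed forms confirmed; exponential does not factor here")
```

Since $A$ and $B$ fail to commute, the two exponentials disagree, which is exactly the point of the problem.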
10. Let $A$ and $B$ be the $2 \times 2$ real matrices $A = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$ and $B = \begin{pmatrix} \lambda & 0 \\ 1 & \lambda \end{pmatrix}$. Show that $e^A = e^a\begin{pmatrix} \cos b & -\sin b \\ \sin b & \cos b \end{pmatrix}$ and $e^B = e^\lambda\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$. (Hints: Note that $2 \times 2$ matrices …
9. Consider a square matrix $M$. Demonstrate that (a) $e^{-M}$ is the inverse of $e^M$, (b) $e^M$ is positive definite when $M$ is symmetric, and (c) $e^M$ is orthogonal when $M$ is skew-symmetric in the sense that …
8. Show that $e^{A+B} = e^A e^B = e^B e^A$ when $AB = BA$. (Hint: Prove that all three functions $e^{t(A+B)}$, $e^{tA}e^{tB}$, and $e^{tB}e^{tA}$ satisfy the ordinary differential equation $P'(t) = (A+B)P(t)$ with initial condition …
7. A village with $n+1$ people suffers an epidemic. Let $X_t$ be the number of sick people at time $t$, and suppose that $X_0 = 1$. If we model $X_t$ as a continuous-time Markov chain, then a plausible model …
6. Let $X_t$ be a finite-state reversible Markov chain with equilibrium distribution $\pi$ and infinitesimal generator $\Lambda$. Suppose $\{w_i\}_i$ is an orthonormal basis of column eigenvectors of $\Lambda$ in $\ell^2_\pi$ and …
5. Let $P(t) = [p_{ij}(t)]$ be the finite-time transition matrix of a finite-state irreducible Markov chain. Show that $p_{ij}(t) > 0$ for all $i$, $j$, and $t > 0$. Thus, no state displays periodic behavior.
4. Consider a continuous-time Markov chain with infinitesimal generator $\Lambda$ and equilibrium distribution $\pi$. If the chain is at equilibrium at time 0, then show that it experiences $t\sum_i \pi_i\lambda_i$ …
3. Suppose that $\Lambda$ is the infinitesimal generator of a continuous-time finite-state Markov chain, and let $\mu \ge \max_i \lambda_i$. If $R = I + \mu^{-1}\Lambda$, then prove that $R$ has nonnegative entries and that …
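The matrix $R$ of Problem 3 is the heart of the uniformization trick: since $e^{t\Lambda} = e^{-\mu t}e^{\mu t R}$, the transition probabilities become a Poisson mixture of powers of a genuine transition matrix $R$. A sketch on a hypothetical two-state generator:

```python
import math

# toy 2-state generator; mu must dominate every leaving rate
Lam = [[-1.0, 1.0], [2.0, -2.0]]
mu = 2.0
R = [[(1.0 if i == j else 0.0) + Lam[i][j] / mu for j in range(2)]
     for i in range(2)]
# R = I + Lam/mu is a bona fide transition matrix
assert all(R[i][j] >= 0 for i in range(2) for j in range(2))
assert all(abs(sum(R[i]) - 1.0) < 1e-12 for i in range(2))

def uniformized_P(t, terms=60):
    # P(t) = sum_k e^(-mu t) (mu t)^k / k! * R^k
    P = [[0.0, 0.0], [0.0, 0.0]]
    Rk = [[1.0, 0.0], [0.0, 1.0]]
    w = math.exp(-mu * t)  # Poisson weight, k = 0
    for k in range(terms):
        for i in range(2):
            for j in range(2):
                P[i][j] += w * Rk[i][j]
        Rk = [[sum(Rk[i][l] * R[l][j] for l in range(2)) for j in range(2)]
              for i in range(2)]
        w *= mu * t / (k + 1)
    return P

P = uniformized_P(0.7)
# this generator has eigenvalues 0 and -3, so p00(t) = 2/3 + (1/3) e^(-3t)
p00 = 2.0 / 3.0 + (1.0 / 3.0) * math.exp(-3.0 * 0.7)
assert abs(P[0][0] - p00) < 1e-9
print(P[0][0], p00)
```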
2. Let $\Lambda = (\lambda_{ij})$ be an $m \times m$ matrix and $\pi = (\pi_i)$ be a $1 \times m$ row vector. Show that the equality $\pi_i\lambda_{ij} = \pi_j\lambda_{ji}$ is true for all pairs $(i,j)$ if and only if $\operatorname{diag}(\pi)\Lambda = \Lambda^t\operatorname{diag}(\pi)$, where …
The bivariate distribution of $(X,Y)$ possesses a density $f(x,y)$ off the line $y = x$. Show that $f(x,y) = \lambda(\mu+\nu)e^{-\lambda x-(\mu+\nu)y}$ for $x < y$ …. Finally, demonstrate that $\operatorname{Cov}(X,Y) =$ …
1. Let $U$, $V$, and $W$ be independent exponentially distributed random variables with intensities $\lambda$, $\mu$, and $\nu$, respectively. Consider the random variables $X = \min\{U,W\}$ and $Y = \min\{V,W\}$.
47. A Sudoku puzzle is a $9 \times 9$ matrix, with some entries containing predefined digits. The goal is to completely fill in the matrix, using the digits 1 through 9, in such a way that each row, …
graph by a list of nodes and a list of edges. Assign to each node a color represented by a number between 1 and 4. The cost of a coloring is the number of edges with incident nodes …
46. It is known that every planar graph can be colored by four colors [32]. Design, program, and test a simulated annealing algorithm to find a four-coloring of any planar graph. (Suggestions: …
45. Find the row and column eigenvectors of the transition probability matrix $P$ for the independence sampler. Show that they are orthogonal in the appropriate inner products.
44. In our analysis of convergence of the independence sampler, we asserted that the eigenvalues $\lambda_1,\ldots,\lambda_m$ satisfied the properties: (a) $\lambda_1 = 1 - 1/w_1$, (b) the $\lambda_i$ are decreasing, and (c) $\lambda_m$ …
43. Consider the Cartesian product space $\{0,1\} \times \{0,1\}$ equipped with the probability distribution $(\pi_{00}, \pi_{01}, \pi_{10}, \pi_{11}) = \left(\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{8}\right)$. Demonstrate that sequential Gibbs sampling does not …
42. The Metropolis acceptance mechanism (7.19) ordinarily implies aperiodicity of the underlying Markov chain. Show that if the proposal distribution is symmetric and if some state $i$ has a …
41. An acceptance function $a : (0,\infty) \to [0,1]$ satisfies the functional identity $a(x) = x\,a(1/x)$. Prove that the detailed balance condition $\pi_i q_{ij} a_{ij} = \pi_j q_{ji} a_{ji}$ holds if the acceptance …
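Problem 41's identity is satisfied by the Metropolis choice $a(x) = \min(1, x)$; a quick check of both the identity and the resulting detailed balance on an arbitrary three-state target distribution:

```python
def a(x):
    # Metropolis acceptance function; satisfies a(x) = x * a(1/x)
    return min(1.0, x)

for x in (0.1, 0.5, 2.0, 7.3):
    assert abs(a(x) - x * a(1.0 / x)) < 1e-12

# detailed balance pi_i q_ij a_ij = pi_j q_ji a_ji on a toy 3-state chain
pi = [0.5, 0.3, 0.2]
q = [[0.0, 0.6, 0.4],   # arbitrary proposal matrix
     [0.5, 0.0, 0.5],
     [0.7, 0.3, 0.0]]
for i in range(3):
    for j in range(3):
        if i != j:
            aij = a(pi[j] * q[j][i] / (pi[i] * q[i][j]))
            aji = a(pi[i] * q[i][j] / (pi[j] * q[j][i]))
            assert abs(pi[i] * q[i][j] * aij - pi[j] * q[j][i] * aji) < 1e-12
print("identity and detailed balance hold")
```

The same check passes for any acceptance function obeying $a(x) = x\,a(1/x)$, for example Barker's $a(x) = x/(1+x)$.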
40. In the context of Section 7.6, one can consider leaving probabilities as well as hitting probabilities. Let $l_{ij}$ be the probability of exiting the transient states from transient state $j$ when …
Write a system of recurrence relations for the $e_k$, and show that the system has the solution $e_k = \begin{cases} 0 & k = 0 \\ k(n-k) & 1 \le k \le m \end{cases}$ …
39. Arrange $n$ points labeled $0,\ldots,n-1$ symmetrically on a circle, and imagine conducting a symmetric random walk with transition probabilities $p_{ij} = \begin{cases} \frac{1}{2} & j = i+1 \bmod n \text{ or } j = i-1 \bmod n \\ 0 & \text{otherwise} \end{cases}$ …
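Reading the damaged display above Problem 39 as $e_k = k(n-k)$ for the expected first-passage time to 0 (a classical result for the cycle walk, but still our reading of garbled text), the claim can be confirmed by solving the linear system $e_0 = 0$, $e_k = 1 + \tfrac12 e_{k-1} + \tfrac12 e_{k+1}$ directly:

```python
n = 10  # number of points on the circle
m = n - 1

# build the (n-1) x (n-1) system for e_1, ..., e_{n-1}; couplings to the
# target state 0 drop out since e_0 = 0
A = [[0.0] * m for _ in range(m)]
b = [1.0] * m
for row, k in enumerate(range(1, n)):
    A[row][row] = 1.0
    for nb in (k - 1, k + 1):
        nb %= n
        if nb != 0:
            A[row][nb - 1] -= 0.5

# plain Gaussian elimination with partial pivoting
for col in range(m):
    piv = max(range(col, m), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, m):
        f = A[r][col] / A[col][col]
        for c in range(col, m):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]
e = [0.0] * m
for r in range(m - 1, -1, -1):
    e[r] = (b[r] - sum(A[r][c] * e[c] for c in range(r + 1, m))) / A[r][r]

for k in range(1, n):
    assert abs(e[k - 1] - k * (n - k)) < 1e-8
print("e_k = k(n - k) confirmed for n =", n)
```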
37. Continuing Problem 36, let $\mu_k$ be the expected waiting time for a first passage from $k$ to 0. Show that $\mu_k = k\mu_1$ and that $\mu_k = 1 + \frac{1}{2}\mu_{k-1} + \frac{1}{2}\mu_{k+1}$ for $k \ge 2$. Conclude from these …
36. Consider the symmetric random walk $S_n$ with $\Pr(S_{n+1} = S_n + 1) = \Pr(S_{n+1} = S_n - 1) = \frac{1}{2}$. Given $S_0 = i \ne 0$, let $\pi_i$ be the probability that the random walk eventually hits 0. Show that $\pi_1 = \frac{1}{2}$ …
35. Let $Z_0, Z_1, Z_2, \ldots$ be a realization of a finite-state ergodic chain. If we sample every $k$th epoch, then show (a) that the sampled chain $Z_0, Z_k, Z_{2k}, \ldots$ is ergodic, (b) that it possesses the …
34. Prove inequality (7.15) by applying the Cauchy-Schwarz inequality. Also verify that $P$ satisfies the self-adjointness condition $\langle Pu, v\rangle_\pi = \langle u, Pv\rangle_\pi$, which yields a direct proof that $P$ has only …
Plot or tabulate the bound as a function of $n$ for $c = 52$ cards. How many shuffles guarantee randomness with high probability?
state where all $c!$ permutations are equally likely. Let $\pi$ be the uniform distribution and $\pi_{X_n}$ be the distribution of the cards after $n$ shuffles. The probability $\Pr(T \le n)$ is the same as the …
The order of the two subpiles is kept consistent with the order of the parent pile, and in preparation for the next shuffle, the top pile is placed above the bottom pile. To keep track of the …