Theory of Probability: Questions and Answers
35. As a follow-up to Problems 10 and 11 of Chapter 5, let $A_n$ be the number of ascents in a random permutation of $\{1,\ldots,n\}$. Demonstrate that $A_n$ has the same distribution as $\lfloor U_1 + \cdots + U_n \rfloor$, where $\lfloor z \rfloor$ is the integer part of $z$ and $U_1,\ldots,U_n$ are independent random variables with uniform distribution on $[0,1]$. Use Problem 33 and the central limit theorem to show that $(A_n - n/2)/\sqrt{n/12}$ converges in distribution to a standard normal random variable.
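As a quick illustration of the distributional identity claimed here, the sketch below (the choice $n = 6$, the seed, the sample size, and the tolerances are all arbitrary test settings, not part of the problem) compares Monte Carlo estimates from the two constructions. For $n = 6$ the exact probability of exactly two ascents is the Eulerian count $302/720$.

```python
import random

random.seed(1)
n, trials = 6, 100_000

def ascents(perm):
    # number of positions i with perm[i] < perm[i + 1]
    return sum(perm[i] < perm[i + 1] for i in range(len(perm) - 1))

asc = []
flo = []
for _ in range(trials):
    p = list(range(n))
    random.shuffle(p)
    asc.append(ascents(p))
    # integer part of U_1 + ... + U_n
    flo.append(int(sum(random.random() for _ in range(n))))

mean_asc = sum(asc) / trials
mean_flo = sum(flo) / trials
p2_asc = asc.count(2) / trials
p2_flo = flo.count(2) / trials
print(mean_asc, mean_flo, p2_asc, p2_flo)
```

Both samples should show mean near $(n-1)/2 = 2.5$ and matching cell probabilities for every count, not just the count $2$ spot-checked here.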
34. Let $X_1, X_2, \ldots$ and $Y_1, Y_2, \ldots$ be two independent i.i.d. sequences of Bernoulli random variables with success probability $\frac{1}{2}$. Show that the random variable $Z_n = 2Y_n + X_n$ is uniformly distributed on $\{0, 1, 2, 3\}$.
33. Consider two sequences $X_n$ and $Y_n$ of random variables. Suppose that $X_n$ converges in distribution to the random variable $X$ and the difference $X_n - Y_n$ converges in probability to the constant $0$. Prove that $Y_n$ converges in distribution to $X$.
32. Suppose $X_n$ converges in probability to $X$ and $Y_n$ converges in probability to $Y$. Show that $X_n + Y_n$ converges in probability to $X + Y$, that $X_n Y_n$ converges in probability to $XY$, and that $X_n/Y_n$ converges in probability to $X/Y$ provided $\Pr(Y = 0) = 0$.
31. Suppose a sequence of random variables $X_n$ converges to $X$ in probability. Use the Borel-Cantelli lemma to show that some subsequence $X_{n_m}$ converges to $X$ almost surely. Now prove the full claim …
30. Let $q_n$ be the probability that in $n$ tosses of a fair coin there are no occurrences of the pattern HHH [59]. Derive the recurrence relation $q_n = \frac{1}{2}q_{n-1} + \frac{1}{4}q_{n-2} + \frac{1}{8}q_{n-3}$ for $n \ge 3$ and use it to calculate the generating function $Q(s) = \sum_{n=0}^\infty q_n s^n = \frac{2s^2 + 4s + 8}{8 - 4s - 2s^2 - s^3}$. Show numerically that the denominator has the real root $r_1 = 1.0873778$ and two complex roots $r_2$ and $r_3$ satisfying $|r_2| > r_1$ and $|r_3| > r_1$. Deduce the asymptotic relation $q_n \sim c\, r_1^{-n-1}$ for an appropriate constant $c$.
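A numerical sketch of the root claims and of the resulting geometric decay of $q_n$ (the use of NumPy, the recurrence depth, and the tolerances are incidental choices):

```python
import numpy as np

# Roots of the denominator 8 - 4s - 2s^2 - s^3,
# entered with the highest-degree coefficient first.
roots = np.roots([-1.0, -2.0, -4.0, 8.0])
real_roots = [r.real for r in roots if abs(r.imag) < 1e-9]
complex_roots = [r for r in roots if abs(r.imag) >= 1e-9]
r1 = real_roots[0]

# q_n via the recurrence, starting from q_0 = q_1 = q_2 = 1
# (no run of HHH is possible in fewer than 3 tosses).
q = [1.0, 1.0, 1.0]
for n in range(3, 80):
    q.append(q[-1] / 2 + q[-2] / 4 + q[-3] / 8)

print(r1, [abs(z) for z in complex_roots], q[-1] / q[-2])
```

Because the complex roots have strictly larger modulus, the successive ratios $q_{n+1}/q_n$ converge geometrically to $1/r_1$, consistent with $q_n \sim c\,r_1^{-n-1}$.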
29. The function $f(x) = \frac{e^x}{1-x} = \sum_{n=0}^\infty a_n x^n$ has a pole (singularity) at $x = 1$. Show that $a_n = e + O(r^n)$ for every $r > 0$ based on this fact. (Hint: Consider $f(x) - e/(1-x)$.)
28. Suppose $f(x)$ is continuously differentiable and monotone on the interval $[m, n]$. Prove that $\left|\sum_{k=m}^n f(k) - \int_m^n f(x)\,dx - \frac{1}{2}[f(m) + f(n)]\right| \le \frac{1}{2}|f(n) - f(m)|$ and that $\sum_{k=m}^n \cdots$ Use the second of these inequalities to prove that Euler's constant $\gamma = \lim_{n\to\infty}\left(\sum_{k=1}^n \frac{1}{k} - \ln n\right)$ exists. (Hint: In the spirit of Proposition 12.4.1, expand $\sum_{k=m}^n f(k)$ so that the remainder involves an integral of $f(x)$.)
27. Find asymptotic expansions for the two sums $\sum_{k=1}^n (n^2 + k^2)^{-1}$ and $\sum_{k=1}^n (-1)^k/k$ valid to $O(n^{-3})$.
26. Verify the asymptotic expansion $\sum_{k=1}^n k^\alpha = C_\alpha + \frac{n^{\alpha+1}}{\alpha+1} + \frac{n^\alpha}{2} + \sum_{j=1}^m \frac{B_{2j}}{2j}\binom{\alpha}{2j-1} n^{\alpha-2j+1} + O(n^{\alpha-2m-1})$ for a real number $\alpha \ne -1$ and some constant $C_\alpha$, which you need not evaluate explicitly.
25. Verify the identities $t \coth t = \frac{2t}{e^{2t} - 1} + t = \sum_{n=0}^\infty \frac{4^n B_{2n}}{(2n)!}\, t^{2n}$ using equation (12.16). Show that this in turn implies $t \cot t = \sum_{n=0}^\infty (-1)^n \frac{4^n B_{2n}}{(2n)!}\, t^{2n}$. The Bernoulli numbers …
24. As an alternative definition of the Bernoulli polynomials, consider the bivariate exponential generating function $f(t, x) = \frac{t e^{tx}}{e^t - 1} = \sum_{n=0}^\infty \frac{B_n(x)}{n!}\, t^n$. Check the defining conditions (A.10) of the Bernoulli polynomials by evaluating $\lim_{t\to 0} f(t, x)$, $\int_0^1 f(t, x)\,dx$, and $\frac{\partial}{\partial x} f(t, x)$. Hence, the coefficients $B_n(x)$ are the Bernoulli polynomials.
23. Demonstrate that the Bernoulli polynomials satisfy the identity $B_n(x+1) - B_n(x) = n x^{n-1}$. Use this result to verify that the sum of the $n$th powers of the first $m$ integers can be expressed as $\sum_{k=1}^m k^n = \frac{B_{n+1}(m+1) - B_{n+1}(1)}{n+1}$.
22. Continuing Problem 21, show inductively for $n \ge 1$ that $B_{2n}(x)$ has exactly one simple zero in $(0, 1/2)$ and one in $(1/2, 1)$, while $B_{2n+1}(x)$ has precisely the simple zeros $0$, $1/2$, and $1$.
21. Show that the Bernoulli polynomials satisfy the identity $B_n(x) = (-1)^n B_n(1-x)$ for all $n$ and $x \in [0, 1]$. Conclude from this identity that $B_n(1/2) = 0$ for $n$ odd.
20. Show that the even Bernoulli numbers can be expressed as $B_{2n} = (-1)^{n+1} \frac{2(2n)!}{(2\pi)^{2n}}\left(1 + \frac{1}{2^{2n}} + \frac{1}{3^{2n}} + \frac{1}{4^{2n}} + \cdots\right)$. Apply Stirling's formula, and deduce the asymptotic relation $|B_{2n}| \sim 4\sqrt{\pi n}\left(\frac{n}{\pi e}\right)^{2n}$.
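The claimed asymptotic is easy to probe numerically. The sketch below computes exact Bernoulli numbers with the standard recurrence $\sum_{k=0}^{m} \binom{m+1}{k} B_k = 0$ (the index $n = 20$ and the tolerance are arbitrary test choices):

```python
from fractions import Fraction
from math import comb, e, pi, sqrt

def bernoulli_numbers(m):
    # B_0, ..., B_m via the recurrence sum_{k=0}^{j} C(j+1, k) B_k = 0,
    # which pins down B_j from its predecessors (convention B_1 = -1/2)
    B = [Fraction(1)]
    for j in range(1, m + 1):
        s = sum(comb(j + 1, k) * B[k] for k in range(j))
        B.append(-s / (j + 1))
    return B

n = 20
B = bernoulli_numbers(2 * n)
exact = abs(float(B[2 * n]))
stirling = 4 * sqrt(pi * n) * (n / (pi * e)) ** (2 * n)
ratio = exact / stirling
print(B[2], B[4], ratio)
```

The ratio of $|B_{40}|$ to the Stirling approximation already agrees to a fraction of a percent.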
19. Demonstrate that $\frac{\pi^2}{12} = \sum_{k=1}^\infty \frac{(-1)^{k+1}}{k^2}$ and $\frac{\pi^4}{90} = \sum_{k=1}^\infty \frac{1}{k^4}$.
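Both series converge fast enough for a direct numerical check (the truncation point is an arbitrary choice; the quartic tail beyond $N$ is below $1/(3N^3)$, and the alternating error is below the first omitted term):

```python
from math import fsum, pi

N = 200_000
# partial sums of the alternating and quartic series
alt = fsum((-1) ** (k + 1) / k**2 for k in range(1, N + 1))
quart = fsum(1 / k**4 for k in range(1, N + 1))
print(alt - pi**2 / 12, quart - pi**4 / 90)
```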
18. Suppose that the periodic function $f(x)$ is square-integrable on $[0, 1]$. Prove the assertions: (a) $f(x)$ is an even (respectively odd) function if and only if its Fourier coefficients $c_n$ are even …
17. Let $f(x)$ be a periodic function on the real line whose $k$th derivative is piecewise continuous for some positive integer $k$. Show that the Fourier coefficients $c_n$ of $f(x)$ satisfy $|c_n| \le \frac{1}{(2\pi |n|)^k} \int_0^1 |f^{(k)}(x)|\,dx$ for $n \ne 0$.
16. In the socks in the laundry problem, demonstrate that $E(N_1) = \frac{(2^n n!)^2}{(2n)!}$. Conclude from this and Stirling's formula that $E(N_1) \sim \sqrt{\pi n}$. (Hint: Change variables in the first integral of …)
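Since $(2^n n!)^2/(2n)! = 4^n / \binom{2n}{n}$, the claimed $\sqrt{\pi n}$ growth can be probed directly with exact integer arithmetic (the sample values of $n$ are arbitrary):

```python
from math import comb, pi, sqrt

ratios = {}
for n in (10, 100, 1000):
    exact = 4**n / comb(2 * n, n)  # equals (2^n n!)^2 / (2n)!
    ratios[n] = exact / sqrt(pi * n)
print(ratios)
```

The ratio behaves like $1 + 1/(8n) + O(n^{-2})$, so it decreases toward $1$ as $n$ grows.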
15. Prove the elementary inequalities $\ln n! - \frac{1}{2}\ln n \le \int_1^n \ln t\,dt = n \ln n - n + 1 \le \ln n!$ that point the way to Stirling's formula. (Hint: Using the concavity of $\ln t$, verify the …)
14. Let $\phi(x)$ and $\Phi(x)$ be the standard normal density and distribution functions. Demonstrate the bounds $\frac{x}{1+x^2}\,\phi(x) \le 1 - \Phi(x) \le \frac{1}{x}\,\phi(x)$ for $x > 0$. (Hints: Exploit the derivatives …)
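The two bounds can be spot-checked against the exact tail computed from the complementary error function (the grid of test points is an arbitrary choice):

```python
from math import erfc, exp, pi, sqrt

def phi(x):
    # standard normal density
    return exp(-x * x / 2) / sqrt(2 * pi)

def upper_tail(x):
    # 1 - Phi(x) expressed through the complementary error function
    return 0.5 * erfc(x / sqrt(2))

points = (0.5, 1.0, 2.0, 4.0, 8.0)
bounds_hold = all(
    x / (1 + x * x) * phi(x) <= upper_tail(x) <= phi(x) / x
    for x in points
)
print(bounds_hold)
```

Both bounds tighten rapidly: already at $x = 8$ they pin the tail to about four significant digits.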
13. Suppose the continuous function $f(t)$ on $[0, \infty)$ is $O(e^{ct})$ for some $c \ge 0$ as $t \to \infty$. For $t$ positive use Laplace's method to prove Post's inversion formula [161] $f(t) = \lim_{k\to\infty} \frac{(-1)^k}{k!} \left(\frac{k}{t}\right)^{k+1} F^{(k)}\!\left(\frac{k}{t}\right)$, where $F(s) = \int_0^\infty e^{-st} f(t)\,dt$ denotes the Laplace transform of $f(t)$.
12. The von Mises density $\frac{e^{\kappa \cos(y - \alpha)}}{2\pi I_0(\kappa)}$, $-\pi \le y < \pi$, involves a location parameter $\alpha$; $\kappa > 0$ is a concentration parameter, and the modified Bessel function $I_0(\kappa)$ is the normalizing constant $I_0(\kappa) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{\kappa \cos y}\,dy$ …
11. Demonstrate the asymptotic equivalence $\sum_{k=0}^n \binom{n}{k}\, k!\, n^{-k} \sim \sqrt{\frac{\pi n}{2}}$ as $n \to \infty$. (Hint: See Problem 10.)
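The summands $t_k = \binom{n}{k} k!\, n^{-k} = n!/((n-k)!\,n^k)$ satisfy $t_0 = 1$ and $t_{k+1} = t_k (n-k)/n$, so the sum can be accumulated without factorial overflow. A hedged numerical check (the choice of $n$ and the cutoff $10^{-17}$ are arbitrary):

```python
from math import pi, sqrt

def ramanujan_sum(n):
    # sum_{k=0}^n C(n,k) k! n^{-k}, accumulated term by term;
    # terms decay roughly like exp(-k^2 / (2n))
    total, t, k = 0.0, 1.0, 0
    while t > 1e-17 and k <= n:
        total += t
        t *= (n - k) / n
        k += 1
    return total

n = 1_000_000
ratio = ramanujan_sum(n) / sqrt(pi * n / 2)
print(ratio)
```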
10. For $0$ …
9. Prove that $\int_0^{\pi/2} e^{-x\tan y}\,dy \sim \frac{1}{x}$ and $\int_{-\pi/2}^{\pi/2} (y+2)\, e^{-x\cos y}\,dy \sim \frac{4}{x}$ as $x \to \infty$.
8. Show that $\int_0^\infty \frac{e^{-y}}{1+xy}\,dy \sim \frac{\ln x}{x}$ as $x \to \infty$. (Hints: Write $\int_0^\infty \frac{e^{-y}}{1+xy}\,dy = \frac{1}{x}\int_0^\infty \frac{d}{dy}\!\left[\ln(1+xy)\right] e^{-y}\,dy$ and use integration by parts and the dominated convergence theorem.)
7. Let $F(x)$ be a distribution function concentrated on $[0, \infty)$ with moments $m_k = \int_0^\infty y^k\,dF(y)$. For $x \ge 0$ define the Stieltjes function $f(x) = \int_0^\infty \frac{1}{1+xy}\,dF(y)$. Show that $\sum_{k=0}^\infty (-1)^k m_k x^k$ is an asymptotic expansion for $f(x)$ satisfying $f(x) - \sum_{k=0}^n (-1)^k m_k x^k = (-x)^{n+1} \int_0^\infty \frac{y^{n+1}}{1+xy}\,dF(y)$. Argue, therefore, that the remainders of the expansion alternate in sign and are bounded in absolute value by the first omitted term.
6. Suppose that $0$ …
5. Find an asymptotic expansion for $\int_x^\infty e^{-y^4}\,dy$ as $x \to \infty$.
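Integration by parts gives the leading term $\int_x^\infty e^{-y^4}\,dy \sim e^{-x^4}/(4x^3)$, and a brute-force quadrature confirms it (the evaluation point $x = 3$, the truncation at $3.6$, and the Simpson grid size are arbitrary; beyond $3.6$ the integrand is negligible relative to the answer):

```python
from math import exp

def simpson(f, a, b, m=200_000):
    # composite Simpson's rule with m (even) subintervals
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

x = 3.0
numeric = simpson(lambda y: exp(-(y**4)), x, 3.6)
leading = exp(-(x**4)) / (4 * x**3)
ratio = numeric / leading
print(ratio)
```

The ratio falls slightly below $1$, consistent with a negative first correction term of order $x^{-4}$.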
4. Demonstrate that $\left(1 + \frac{1}{\sqrt{x}}\right)^x \sim e^{\sqrt{x} - 1/2}$ as $x \to \infty$.
3. For two positive functions $f(x)$ and $g(x)$, prove that $f(x) \sim g(x)$ as $x \to x_0$ implies $\ln f(x) = \ln g(x) + o(1)$ as $x \to x_0$. Hence, $\lim_{x\to x_0} \ln f(x) \ne 0$ entails $\ln f(x) \sim \ln g(x)$ as $x \to x_0$.
2. Show that $f(x) \sim g(x)$ as $x \to x_0$ does not entail the stronger relation $e^{f(x)} \sim e^{g(x)}$ as $x \to x_0$. However, argue that the condition $f(x) = g(x) + o(1)$ is sufficient to imply $e^{f(x)} \sim e^{g(x)}$.
1. Prove the following order relations: (a) $1 - \cos^2 x = O(x^2)$ as $x \to 0$; (b) $\ln x = o(x^\alpha)$ as $x \to \infty$ for any $\alpha > 0$; (c) $\frac{x^2}{1+x^3} + \ln(1+x^2) = O(x^2)$ as $x \to 0$; (d) $\frac{x^2}{1+x^3} + \ln(1+x^2) = O(\ln x)$ as $x \to \infty$.
22. Consider the Wright-Fisher model with no selection but with mutation from allele $A_1$ to allele $A_2$ at rate $\eta_1$ and from $A_2$ to $A_1$ at rate $\eta_2$. With constant population size $N$, prove that the …
21. Use Stirling's formula to demonstrate that $\frac{\Gamma(2N\eta + \frac{1}{2})}{\sqrt{2N(1-f)}\,\Gamma(2N\eta)} \approx \sqrt{\frac{\eta}{1-f}}$ when $N$ is large in the Wright-Fisher model for a recessive disease.
20. In Problem 13 suppose $\nu > 0$ and $\alpha$ … for $t > 0$ and $x > 0$.
19. Consider a diffusion process $X_t$ with infinitesimal mean $\mu(t, x) = \begin{cases} 1, & x < 0 \\ 0, & x = 0 \\ -1, & x > 0 \end{cases}$ and infinitesimal variance $1$. Find the equilibrium distribution $f(x)$ of $X_t$.
18. Show that formula (11.24) holds in $\mathbb{R}^2$.
17. In Problem 16 find $w(x)$ and $E(T)$ when $c$ is finite. The value $\alpha < 0$ is allowed.
16. Suppose the transformed Brownian motion with infinitesimal mean $\alpha$ and infinitesimal variance $\sigma^2$ described in Example 11.3.3 has $\alpha \ge 0$. If $c = -\infty$ and $d < \infty$, then demonstrate that … Simplify $w(x)$ when $\alpha = 0$, and show by differentiation of $w(x)$ with respect to $\theta$ that the expected time $E(T)$ to reach the barrier $d$ is infinite. When $\alpha < 0$, show that $\Pr(T < \infty) = e^{2\alpha\sigma^{-2}(d-x)}$.
15. Consider the transformed Brownian motion with infinitesimal mean $\alpha$ and infinitesimal variance $\sigma^2$ described in Example 11.3.3. If the process starts at $x \in [c, d]$, then prove that it reaches …
14. In Problem 13 suppose $\nu = 0$. Verify that the process goes extinct with probability $\min\left\{1,\, e^{-2\frac{\alpha-\delta}{\alpha+\delta}\, x_0}\right\}$ by using equation (11.20) and sending $c$ to $0$ and $d$ to $\infty$.
13. In the diffusion approximation to a branching process with immigration, we set $\mu(t, x) = (\alpha - \delta)x + \nu$ and $\sigma^2(t, x) = (\alpha + \delta)x + \nu$, where $\alpha$ and $\delta$ are the birth and death rates per particle and $\nu$ is the immigration rate.
12. Show that $\operatorname{Cov}(Y_{t+s}, Y_t) = \frac{\sigma^2 e^{-\gamma s}\left(1 - e^{-2\gamma t}\right)}{2\gamma}$ in the Ornstein-Uhlenbeck process when $s$ and $t$ are nonnegative.
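Because the Ornstein-Uhlenbeck transition law is Gaussian, the covariance formula (for a process started at a fixed point) can be checked by exact-transition Monte Carlo. Everything below (the parameters, the start $Y_0 = 0$, the seed, the sample size, and the tolerance) is an arbitrary test configuration:

```python
import random
from math import exp, sqrt

random.seed(7)
gamma, sigma, t, s = 1.0, 1.0, 1.0, 0.5
n = 200_000

# With Y_0 = 0 fixed, Y_t ~ N(0, sigma^2 (1 - e^{-2 gamma t}) / (2 gamma)),
# and Y_{t+s} = e^{-gamma s} Y_t + independent Gaussian noise.
sd_t = sigma * sqrt((1 - exp(-2 * gamma * t)) / (2 * gamma))
sd_s = sigma * sqrt((1 - exp(-2 * gamma * s)) / (2 * gamma))
acc = 0.0
for _ in range(n):
    yt = random.gauss(0.0, sd_t)
    yts = exp(-gamma * s) * yt + random.gauss(0.0, sd_s)
    acc += yt * yts
cov_mc = acc / n
cov_theory = sigma**2 * exp(-gamma * s) * (1 - exp(-2 * gamma * t)) / (2 * gamma)
print(cov_mc, cov_theory)
```

The exact-transition sampler avoids Euler discretization error entirely, so the only discrepancy is Monte Carlo noise.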
11. Consider a diffusion process $X_t$ with infinitesimal mean $\mu(t, x)$ and infinitesimal variance $\sigma^2(t, x)$. If the function $f(t)$ is strictly increasing and continuously differentiable, then argue … Apply this result to the situation where $Y_t$ equals $y_0$ at $t = 0$ and has $\mu_Y(t, y) = 0$ and $\sigma^2_Y(t, y) = \sigma^2(t)$. Show that $Y_t$ is normally distributed with mean and variance $E(Y_t) = y_0$ and $\operatorname{Var}(Y_t) = \int_0^t \sigma^2(s)\,ds$.
10. Continuing Problem 8, let $Y_t = \sigma X_t + \alpha t$. Prove that $Y_t - \alpha t$ and $(Y_t - \alpha t)^2 - \sigma^2 t$ are martingales. Let $T_b$ be the first passage time to level $b > 0$ for $\alpha > 0$. Assuming that the …
9. Continuing Problem 8, let $T$ be the first time at which $X_t$ attains the value $-a < 0$ or the value $b > 0$. Assuming the optional stopping theorem holds for the stopping time $T$ and the martingales …
8. For standard Brownian motion $X_t$, show that the stochastic processes $Y_t = X_t$ and $Y_t = X_t^2 - t$ enjoy the martingale property $E(Y_{t+s} \mid X_r,\, r \in [0, t]) = Y_t$ for $s > 0$. (Hint: $X_{t+s} - X_t$ is independent of $X_r$ for $r \le t$.)
7. Let $T_b$ be the first passage time to level $b > 0$ for standard Brownian motion. Verify that $T_b$ and $b^2 T_1$ have the same distribution.
6. For standard Brownian motion $X_t$, it makes sense to define the integral $Y_t = \int_0^t X_s\,ds$ because sample paths are continuous functions of the time parameter. Argue that the stochastic process $Y_t$ …
5. For standard Brownian motion $X_t$, prove that the stochastic process $Y_t = \begin{cases} t X_{1/t}, & t > 0 \\ 0, & t = 0 \end{cases}$ also furnishes a version of standard Brownian motion. (Hint: Demonstrate that $Y_t$ satisfies either set …)
4. Standard Brownian motion $X_t$ can be characterized by four postulates: (a) $E(X_t) = 0$, (b) $\operatorname{Var}(X_t) = t$, (c) $\operatorname{Cov}(X_s, X_t) = \min\{s, t\}$, and (d) the random vector $(X_{t_1}, \ldots, X_{t_n})$ is multivariate normal …
3. Let $X_t$ be standard Brownian motion. Calculate the mean and variance functions of the stochastic processes $|X_t|$ and $e^{X_t}$.
2. Demonstrate that standard Brownian motion is a Markov process. It suffices to check that $\Pr(X_{t_n} \le u_n \mid X_{t_1} = u_1, \ldots, X_{t_{n-1}} = u_{n-1}) = \Pr(X_{t_n} \le u_n \mid X_{t_{n-1}} = u_{n-1})$ for any two sequences $0 < t_1 < \cdots < t_n$ and $u_1, \ldots, u_n$.
1. Assuming that the increment $X_{t+s} - X_t$ is normally distributed with mean and variance given by equations (11.1) and (11.2), check the approximation (11.3) by taking conditional expectations in …
25. Example 10.5.3 relies on some unsubstantiated claims. Prove that: (a) $(1 - x)^k \le e^{-kx}$ for $x \in (0, 1)$ and $k > 0$, (b) $|D_n - X_{n-1}| \le 2\sqrt{2}$, and (c) $1 + \frac{1}{2} + \cdots + \frac{1}{n-1} \le \ln n + 1$.
24. Consider a multinomial experiment with $n$ trials, $m$ possible cells, and success probability $p_i$ for cell $i$. Let $S_k$ be the number of cells with exactly $k$ successes. Show that $E(S_k) = \sum_{i=1}^m \binom{n}{k} p_i^k (1 - p_i)^{n-k}$.
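Since each cell count is marginally binomial, the formula follows from linearity of expectation, and a hedged Monte Carlo check is straightforward (the configuration $n = 10$, $p = (0.2, 0.3, 0.5)$, $k = 2$, the seed, and the trial count are arbitrary):

```python
import random
from math import comb

random.seed(3)
n, k = 10, 2
probs = [0.2, 0.3, 0.5]
exact = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for p in probs)

trials = 100_000
total = 0
for _ in range(trials):
    counts = [0] * len(probs)
    for _ in range(n):
        # inverse-CDF draw of the cell index
        u, i, acc = random.random(), 0, probs[0]
        while u > acc:
            i += 1
            acc += probs[i]
        counts[i] += 1
    total += sum(1 for c in counts if c == k)
estimate = total / trials
print(estimate, exact)
```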
23. Consider a random graph with $n$ nodes. Between every pair of nodes, independently introduce an edge with probability $p$. The graph is said to be $k$ colorable if it is possible to assign each of …
22. Suppose that $v_1, \ldots, v_n \in \mathbb{R}^m$ have Euclidean norms $\|v_i\|_2 \le 1$. Let $Y_1, \ldots, Y_n$ be independent random variables uniformly distributed on the two-point set $\{-1, 1\}$. If $Z = Y_1 v_1 + \cdots + Y_n v_n$, …
21. Let $Y_1, \ldots, Y_n$ be independent Bernoulli random variables with success probability $\mu$. Graphically compare the large deviation bound (10.23) to Chebyshev's bound $\Pr(|S_n - n\mu| \ge \lambda) \le \frac{n\mu(1-\mu)}{\lambda^2}$.
20. Continuing Problem 9, let $T_n$ be the time at which $N_t$ first equals the positive integer $n$. Assuming the optional stopping theorem holds for the stopping time $T_n$ and the martingales identified in Problem 9, show that $E(T_n) = \frac{n}{\lambda}$, $\operatorname{Var}(T_n) = \frac{n}{\lambda^2}$, and $E\!\left(e^{-\beta T_n}\right) = \left(\frac{\lambda}{\lambda + \beta}\right)^n$ for $\beta > 0$. These results agree with our earlier findings concerning the mean, variance, and Laplace transform of $T_n$. (Hints: Use $N_{T_n} = n$ …)
19. Continuing Problem 18, let $T$ be the time of absorption at $0$ or $1$ starting from $Y_0 = i$ copies of the $a_1$ allele. Demonstrate that $\Pr(T > n) \le i(2m - i)\left(1 - \frac{1}{2m}\right)^n \le \epsilon$ for $\epsilon \in (0, 1)$ and $n \ge 2m \ln\frac{i(2m-i)}{\epsilon}$.
18. In the Wright-Fisher model of Example 10.2.6, show that $Z_n = \frac{X_n(1 - X_n)}{\left(1 - \frac{1}{2m}\right)^n}$ is a martingale. Assuming that $\lim_{n\to\infty} Z_n = Z_\infty$ exists, we have $X_n(1 - X_n) \approx \left(1 - \frac{1}{2m}\right)^n Z_\infty$ for $n$ large.
17. Continuing Problem 16, assume that the random walk is asymmetric and moves to the right with probability $p$ and to the left with probability $q = 1 - p$. Show that the stopping time $T$ has mean …
16. Let $S_n = X_1 + \cdots + X_n$ be a symmetric random walk on the integers $\{-a, \ldots, b\}$ starting at $S_0 = 0$. For the stopping time $T = \min\{n : S_n = -a \text{ or } S_n = b\}$, prove that $\Pr(S_T = b) = \frac{a}{a+b}$ and $E(T) = ab$.
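A Monte Carlo sketch of the two classical gambler's-ruin identities for the symmetric walk, $\Pr(S_T = b) = a/(a+b)$ and $E(T) = ab$ (the values $a = 3$, $b = 5$, the seed, the trial count, and the tolerances are arbitrary test choices):

```python
import random

random.seed(11)
a, b, trials = 3, 5, 50_000
hits_b = 0
steps_total = 0
for _ in range(trials):
    pos, steps = 0, 0
    # run the walk until it exits the open interval (-a, b)
    while -a < pos < b:
        pos += random.choice((-1, 1))
        steps += 1
    hits_b += pos == b
    steps_total += steps
p_hat = hits_b / trials
t_hat = steps_total / trials
print(p_hat, t_hat)  # theory: a/(a+b) = 0.375 and a*b = 15
```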
15. Let $Y_1, Y_2, \ldots$ be a sequence of independent random variables. The tail $\sigma$-algebra $\mathcal{T}$ generated by the sequence can be expressed as $\mathcal{T} = \cap_n \mathcal{T}_n$, where $\mathcal{T}_n$ is the $\sigma$-algebra generated by $Y_n, Y_{n+1}, \ldots$ …
14. In Proposition 10.3.2, prove that $X_n = E(X_\infty \mid \mathcal{F}_n)$. If $X_n$ is defined by $X_n = E(X \mid \mathcal{F}_n)$ to begin with, then one can also show that $X_\infty = E(X \mid \mathcal{F}_\infty)$, where $\mathcal{F}_\infty$ is the smallest $\sigma$-algebra containing all of the $\mathcal{F}_n$.
13. Given $X_0 = \mu \in (0, 1)$, define $X_n$ inductively by $X_{n+1} = \begin{cases} \alpha + \beta X_n, & \text{with probability } X_n \\ \beta X_n, & \text{with probability } 1 - X_n \end{cases}$, where $\alpha, \beta > 0$ and $\alpha + \beta = 1$. Prove that $X_n$ is a martingale with …
12. In Example 10.2.2, suppose that each $Y_n$ is equally likely to assume the values $\frac{1}{2}$ and $\frac{3}{2}$. Show that $\prod_{i=1}^\infty Y_i \equiv 0$ but $\prod_{i=1}^\infty E(Y_i) = 1$ [24]. (Hint: Apply the strong law of large numbers …)
11. In Example 10.3.2, show that the fractional linear transformation $L_\infty(t) = \frac{pt - p + q}{qt - p + q}$ solves equation (10.12) when $Q(s) = \frac{p}{1 - qs}$ and $\mu = \frac{q}{p}$. Also verify equation (10.13).
10. In Example 10.3.2, show that $\operatorname{Var}(X_\infty) = \frac{\sigma^2}{\mu(\mu - 1)}$ by differentiating equation (10.12) twice. This result is consistent with the mean square convergence displayed in equation (10.9).
9. Let $N_t$ denote the number of random points that occur by time $t$ in a Poisson process on $[0, \infty)$ with intensity $\lambda$. Show that the following stochastic processes $X_t = N_t - \lambda t$, $X_t = (N_t - \lambda t)^2 - \lambda t$, and $X_t = \exp\!\left[-\beta N_t + \lambda t\left(1 - e^{-\beta}\right)\right]$ are martingales.
8. Suppose $Y_n$ is the number of particles at the $n$th generation of a branching process. If $s_\infty$ is the extinction probability, prove that $X_n = s_\infty^{Y_n}$ is a martingale. (Hint: If $Q(s)$ is the progeny generating function, then $Q(s_\infty) = s_\infty$.)
7. Let $\{X_n\}_{n \ge 0}$ be a family of random variables with finite expectations that satisfy $E(X_{n+1} \mid X_1, \ldots, X_n) = \alpha X_n + (1 - \alpha) X_{n-1}$ for $n \ge 1$ and some constant $\alpha \ne 1$. Find a second constant $\beta$ …
6. Suppose $Y_t$ is a continuous-time Markov chain with infinitesimal generator $\Omega$. Let $v$ be a column eigenvector of $\Omega$ with eigenvalue $\lambda$. Show that $X_t = e^{-\lambda t} v_{Y_t}$ is a martingale in the sense that …
5. Let $Y_n$ be a finite-state, discrete-time Markov chain with transition matrix $P = (p_{ij})$. If $v$ is a column eigenvector for $P$ with nonzero eigenvalue $\lambda$, then verify that $X_n = \lambda^{-n} v_{Y_n}$ is a martingale.
4. Let $Y_1, Y_2, \ldots$ be a sequence of i.i.d. random variables with common moment generating function $M(t) = E(e^{tY_1})$. Prove that $X_n = M(t)^{-n} e^{t(Y_1 + \cdots + Y_n)}$ is a martingale whenever $M(t) < \infty$.
3. Let $Y_1, Y_2, \ldots$ be a sequence of independent random variables with zero means and common variance $\sigma^2$. If $X_n = Y_1 + \cdots + Y_n$, then show that $X_n^2 - n\sigma^2$ is a martingale.
2. An urn contains $b$ black balls and $w$ white balls. Each time we randomly withdraw a ball, we replace it by $c + 1$ balls of the same color. Let $X_n$ be the fraction of white balls after $n$ draws. Show that $X_n$ is a martingale.
1. Define the random variables $Y_n$ inductively by taking $Y_0 = 1$ and $Y_{n+1}$ to be uniformly distributed on the interval $(0, Y_n)$. Show that the sequence $X_n = 2^n Y_n$ is a martingale.
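A quick simulation of the martingale's constant mean $E(X_n) = E(2^n Y_n) = 1$ (the depth $n = 8$, seed, sample size, and tolerance are arbitrary; the distribution of $X_n$ is heavily skewed, with variance $(4/3)^n - 1$, so the sample mean converges slowly):

```python
import random

random.seed(5)
trials, n = 400_000, 8
acc = 0.0
for _ in range(trials):
    y = 1.0
    for _ in range(n):
        # given Y_k, draw Y_{k+1} uniform on (0, Y_k):
        # multiply by an independent uniform on (0, 1)
        y *= random.random()
    acc += 2.0**n * y
mean_xn = acc / trials
print(mean_xn)  # should be close to 1
```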
26. Although null and deleterious mutations commonly occur in human cells, cancer initiation and progression are driven by mutations that increase cell fitness. These fitness advantages take the … A cell is labeled by the number of mutations it carries. Assume a type $k$ cell dies at rate $\delta_k$, splits into two daughter cells of type $k$ at rate $\beta_k$, and splits into one daughter …
(b) Subject to the initial conditions $P_1(0, z) = z_1$ and $P_2(0, z) = z_2$, demonstrate that these equations have solutions $P_1(t, z) = z_1 e^{-\lambda t}\left(e^{-\lambda t} z_2 + 1 - z_2\right)^{-\alpha_1} + z_1\left[\left(e^{-\lambda t} z_2 + 1 - \cdots\right)\right]$ …
(c) Derive the solution to the Riccati equation (9.22) by writing a linear differential equation for $h(t) = 1/f(t)$.
(d) Prove that the probability of no type 2 particles at time $t$ is $P_1[t, (1, 0)] = 1$ …
(e) Find the value of $t$ such that $P_1[t, (1, 0)] = 1/2$. Note that this time is relatively short. Thus, therapy should be as prompt and as radical as possible.