Questions and Answers of Statistical Techniques in Business
In Example 12.4.5, show the convergence (12.60). (Section 12.5)
Consider the setup of Example 12.4.3. (i) Find the joint limiting distribution of n^{-1} ∑_{i=1}^n (Yi,1, Yi,0), suitably normalized. (ii) Let R̂n = [n^{-1} ∑_{i=1}^n Yi,1] / [n^{-1} ∑_{i=1}^n Yi,0], which is the proportion of
Generalize Theorem 12.4.1 to the case where the Xi are vector-valued.
Suppose X is a stationary process with mean μ and covariance function R(k). Assume R(k) → 0 as k → ∞. Show X̄n P→ μ. (A sufficient condition for R(k) → 0 is that X is strongly mixing with
Assume X is stationary, E(|X1|^{2+δ}) < ∞ for some δ > 0, and (12.56) holds. Show that (12.55) holds, and hence R(k) → 0 as k → ∞.
Verify (12.54).
In Example 12.4.2 with the εj having finite variance, derive the formulae for the mean and covariance (12.52) of the process.
Verify (12.48).
Consider a U-statistic of degree 2, based on a kernel h. Let h1(x) = E[h(x, X2)] and ζ1 = Var[h1(X1)]. Assume ζ1 > 0, so that we know that √n[Un − θ(P)] converges in distribution to the normal
Let X1,..., Xn be i.i.d. P. Consider estimating θ(P) defined by θ(P) = E[h(X1,..., Xb)], where h is a symmetric kernel. Assume P is such that E[h^2(X1,..., Xb)] < ∞, so that θ(P) is also
Let X1,..., Xn be i.i.d. P. Consider estimating θ(P) defined by θ(P) = E[h(X1,..., Xb)], where h is a symmetric kernel. Assume P is such that E|h(X1,..., Xb)| < ∞, so that θ(P) is also
Consider testing the null hypothesis that a sample X1,..., Xn is i.i.d. against the alternative that the distributions of the Xi are stochastically increasing. Mann (1945) proposed the test which
In Example 12.3.7, find F and G so that ζ0,1 and ζ1,0 are not 1/12, even when P{X ≤ Y } = 1/2. Explore how large the rejection probability of the test with rejection region (12.46) can be under H
Show that Wn in Example 12.2.1 and Um,n in Example 12.3.7 are related by Wn = mnUm,n + n(n + 1)/2, at least in the case of no ties in the data.
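A quick numerical check of this identity, added here as an illustration (not part of the original exercise); it takes Um,n to be the average of the indicators 1{Xi < Yj} over all mn pairs, which is equivalent to using ≤ when there are no ties:

import numpy as np

rng = np.random.default_rng(0)
m, n = 7, 5
x = rng.normal(size=m)                                    # the m X-observations
y = rng.normal(loc=0.3, size=n)                           # the n Y-observations

ranks = np.concatenate([x, y]).argsort().argsort() + 1    # ranks 1..m+n (no ties, a.s.)
W = ranks[m:].sum()                                       # rank-sum of the Y's

U = np.mean([xi < yj for xi in x for yj in y])            # Mann-Whitney kernel average
print(W, m * n * U + n * (n + 1) / 2)                     # the two values coincide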
Verify (12.45).
Show that (12.44) holds if Uˆn is replaced by Un.
Show (12.43).
Show (12.42).
In Example 12.3.6, show (12.37). Verify the limiting distribution of Vn in (12.38).
Verify (12.35).
Suppose (X1, Y1),..., (Xn, Yn) are i.i.d. P, with E(Xi^2) < ∞ and E(Yi^2) < ∞. The parameter of interest is θ(P) = Cov(Xi, Yi). Find a kernel for which the corresponding U-statistic Un is
Verify (12.27).
Prove a Glivenko–Cantelli Theorem (Theorem 11.4.2) for sampling without replacement from a finite population. Specifically, assume X1,..., Xn are sampled at random without replacement from the
Prove an analogous result to Theorem 12.2.5 when sampling from an infinite population, where the asymptotic variance has the same form as (12.21) with f = 0. Assuming s^2(1) and s^2(0) are known, how
Provide the details to show (12.22). Hint: Use Theorem 12.2.4 and Problem 12.12.
Consider the estimator ŝ^2_N(1) defined in (12.20). Show that ŝ^2_N(1) P→ s^2(1). State your assumptions.
The limiting expression for N Var(θ̂N) is given in (12.19). Find an exact expression for N Var(θ̂N) that has a similar representation.
In the setting of Corollary 12.2.1(ii), find an exact formula for Cov(Ūn, V̄n) and then calculate the limit of n Cov(Ūn, V̄n).
Complete the proof of Corollary 12.2.1(ii) using the Cramér-Wold Device.
In the setting of Section 12.2, assume N = m + n and (xN,1,..., xN,N) = (y1,..., ym, z1,..., zn). Let ȳm = ∑_{i=1}^m yi/m and z̄n = ∑_{j=1}^n zj/n. Let x̄N = ∑_{j=1}^N xN,j/N. Also let s^2_{m,y} = ∑_{i=1}^m (yi −
In Example 12.2.1, rather than considering the sum of the ranks of the Yis, consider the statistic given by the sum of the squared ranks of the Yis. Find its limiting distribution, properly
In the context of Example 12.2.1, find the limiting distribution of the Wn using Theorem 12.2.3. Identify Gn and G.
Show that τN defined in the proof of Theorem 12.2.3 satisfies τN →∞ as min(n, N − n) → ∞.
Show why Theorem 12.2.1 is a special case of Theorem 12.2.2.
Use Theorem 12.2.1 to prove an asymptotic normal approximation to the hypergeometric distribution.
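A rough illustration of what such an approximation looks like, added for orientation (a sketch under my own parameter choices, not the book's derivation): viewing the hypergeometric count as a sum of 0/1 values sampled without replacement, a CLT of this type suggests a normal approximation whose variance carries the finite-population correction (N − n)/(N − 1).

from scipy.stats import hypergeom, norm

N, K, n = 500, 200, 60                        # population size, successes in population, draws
p = K / N
mean = n * p
var = n * p * (1 - p) * (N - n) / (N - 1)     # exact hypergeometric variance

for k in (18, 24, 30):
    exact = hypergeom(N, K, n).cdf(k)                   # scipy's hypergeom(M, n, N): M = population, n = successes, N = draws
    approx = norm.cdf((k + 0.5 - mean) / var ** 0.5)    # normal approximation with continuity correction
    print(k, round(exact, 4), round(approx, 4))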
Show (12.6) and (12.7).
Show (12.2) and (12.3).
(i) Suppose Xn d→ X and Var(Xn) → Var(X) < ∞. Show E(Xn) → E(X). (ii) Suppose (Xn, Yn) d→ (X, Y) in the plane, with Var(Xn) → Var(X) < ∞ and Var(Yn) → Var(Y) < ∞. Show that
Assume X1,..., Xn are i.i.d. with E(|Xi|^p) < ∞. Then, show that n^{-1/p} max_{1≤i≤n} |Xi| P→ 0.
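One standard route, sketched here for orientation (the full argument is the exercise): by the union bound and Markov's inequality restricted to the tail event,

\[
P\Big\{ n^{-1/p} \max_{1 \le i \le n} |X_i| > \epsilon \Big\}
\;\le\; n\, P\{ |X_1|^p > \epsilon^p n \}
\;\le\; \epsilon^{-p}\, E\big[\, |X_1|^p \, I\{ |X_1|^p > \epsilon^p n \} \,\big] \;\longrightarrow\; 0,
\]

since E|X1|^p < ∞ forces the truncated expectation to vanish as n → ∞.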
If Xn d→ X and {Xn} is asymptotically uniformly integrable, show that for any 0 < p < 1, E(Xn^p) → E(X^p).
(i) Show that {Xn} is uniformly integrable if and only if sup_n E|Xn| < ∞ and sup_n E[|Xn| I_A] = sup_n ∫_A |Xn(ω)| dP(ω) → 0 as P{A} → 0. (ii) Suppose X1,..., Xn are i.i.d. with finite mean μ. Show that
If Xn P→ 0 and sup_n E[|Xn|^{1+δ}] < ∞ for some δ > 0, (11.42) then show E[|Xn|] → 0. (More generally, if the Xn are uniformly integrable in the sense sup_n E[|Xn| I{|Xn| > t}] → 0 as t → ∞,
(i) Show that if {Xn} is uniformly integrable, then {Xn} is asymptotically uniformly integrable, but the converse is false. (ii) Show that a sufficient condition for {Xn} to be uniformly integrable
(i) Suppose random variables Xn, Yn and a random vector Wn are such that, given Wn, Xn and Yn are conditionally independent. Assume, for nonnegative constants σX and σY , and for all z, P{Xn ≤
Show that Xn → X in probability is equivalent to the statement that, for any subsequence X_{n_j}, there exists a further subsequence X_{n_{j_k}} such that X_{n_{j_k}} → X with probability one.
(i) If X1,..., Xn are i.i.d. with c.d.f. F and empirical distribution F̂n, use Theorem 11.4.3 to show that n^{1/2} sup_t |F̂n(t) − F(t)| is a tight sequence. (ii) Let Fn be any sequence of
Show how Theorem 11.4.3 implies Theorem 11.4.2. Hint: Use the Borel–Cantelli Lemma; see Billingsley (1995, Theorem 4.3).
Consider the uniform confidence band Rn,1−α for F given by (11.36). Let F be the set of all distributions on IR. Show that inf_{F∈F} P_F{F ∈ Rn,1−α} ≥ 1 − α.
Let U1,..., Un be i.i.d. with c.d.f. G(u) = u and let Ĝn denote the empirical c.d.f. of U1,..., Un. Define Bn(u) = n^{1/2}[Ĝn(u) − u]. (Note that Bn(·) is a random function, called the uniform
Assume Xn has c.d.f. Fn. Fix α ∈ (0, 1). (i) If Xn is tight, show that F_n^{-1}(1 − α) is uniformly bounded. (ii) If Xn P→ c, show that F_n^{-1}(1 − α) → c.
For a c.d.f. F, define the quantile transformation Q by Q(u) = inf{t : F(t) ≥ u}. (i) Show the event {F(t) ≥ u} is the same as {Q(u) ≤ t}. (ii) If U is uniformly distributed on (0, 1), show the
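A sketch of the argument for part (i), added for orientation (the right-continuity step is the heart of the exercise):

\[
F(t) \ge u \;\Longrightarrow\; t \in \{s : F(s) \ge u\} \;\Longrightarrow\; Q(u) \le t,
\qquad
Q(u) \le t \;\Longrightarrow\; F(t) \ge F(Q(u)) \ge u,
\]

where F(t) ≥ F(Q(u)) uses monotonicity of F, and F(Q(u)) ≥ u uses right-continuity of F at Q(u).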
Suppose Xn is a sequence of real-valued random variables. (i) Assume Xn is Cauchy in probability; that is, for all ε > 0, P{|Xn − Xm| > ε} → 0 as min(m, n) → ∞. Then, show there exists a random
Suppose Xn is a tight sequence and Yn P→ 0. Show that XnYn P→ 0. If it is assumed Yn → 0 almost surely, can you conclude XnYn → 0 almost surely?
Let X1,..., Xn be i.i.d. P on S. Suppose S is countable and let E be the collection of all subsets of S. Let P̂n be the empirical measure; that is, for any E in E, P̂n(E) is the proportion
Prove the Glivenko–Cantelli Theorem. Hint: Use the Strong Law of Large Numbers and the monotonicity of F.
Suppose X1,..., XI are independent and binomially distributed, with Xi ∼ b(ni, pi); that is, Xi is the number of successes in ni Bernoulli trials. Suppose that pi satisfies log[pi/(1 − pi)] = θ di
Assume X1,..., Xn are i.i.d. N(0, σ^2). Let σ̂^2_n be the maximum likelihood estimator of σ^2 given by σ̂^2_n = ∑_{i=1}^n Xi^2/n. (i) Find the limiting distribution of √n(σ̂n − σ). (ii) For a
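For part (i), the usual delta-method computation gives the following (a sketch under the stated N(0, σ^2) model, not a substitute for the exercise): since Var(Xi^2) = 2σ^4, the CLT gives √n(σ̂^2_n − σ^2) d→ N(0, 2σ^4), and applying g(v) = √v with g′(σ^2) = 1/(2σ) yields

\[
\sqrt{n}\,(\hat\sigma_n - \sigma) \;\xrightarrow{d}\; N\!\Big(0, \tfrac{2\sigma^4}{4\sigma^2}\Big) \;=\; N\!\Big(0, \tfrac{\sigma^2}{2}\Big).
\]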
Let Xi,j , 1 ≤ i ≤ I, 1 ≤ j ≤ n be independent with Xi,j Poisson with mean λi . The problem is to test the null hypothesis that the λi are all the same versus they are not all the same.
Let X1,..., Xn be a random sample from the Poisson distribution with unknown mean λ. The uniformly minimum variance unbiased estimator (UMVUE) of exp(−λ) is known to be [(n − 1)/n]^{Tn}, where Tn
Let X1,..., Xn be i.i.d. Poisson with mean λ. Consider estimating g(λ) = e^{−λ} by the estimator Tn = e^{−X̄n}. Find an approximation to the bias of Tn; specifically, find a function b(λ)
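One standard way to obtain such an approximation, sketched here (whether the book normalizes b(λ) the same way is an assumption on my part): using the Poisson moment generating function E[e^{tX}] = exp(λ(e^t − 1)),

\[
E[T_n] = E\big[e^{-\bar X_n}\big] = \prod_{i=1}^n E\big[e^{-X_i/n}\big]
= \exp\!\big( n\lambda (e^{-1/n} - 1) \big)
= e^{-\lambda} \exp\!\Big( \frac{\lambda}{2n} + O(n^{-2}) \Big),
\]

so the bias E[Tn] − e^{−λ} is approximately b(λ)/n with b(λ) = λ e^{−λ}/2.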
Suppose Xi,j are independently distributed as N(μi, σi^2); i = 1,..., s; j = 1,..., ni. Let S^2_{n,i} = ∑_j (Xi,j − X̄i)^2, where X̄i = ni^{-1} ∑_j Xi,j. Let Zn,i = log[S^2_{n,i}/(ni − 1)]. Show that, as … to suggest a test.
Suppose (X1,..., Xk) is multinomial based on n trials and cell probabilities (p1,..., pk). Show that √n[∑_{j=1}^k (Xj/n) log(Xj/n) − c] converges in distribution to F, for some constant c
(i) If X1,..., Xn is a sample from a Poisson distribution with mean E(Xi) = λ, then √n(√X̄ − √λ) tends in law to N(0, 1/4) as n → ∞. (ii) If X has the binomial distribution b(p, n),
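Part (i) is the classical variance-stabilizing transformation for the Poisson; a sketch of the delta-method step, added for orientation (not part of the exercise statement):

\[
\sqrt{n}(\bar X_n - \lambda) \xrightarrow{d} N(0, \lambda), \qquad
g(t) = \sqrt{t},\; g'(\lambda) = \frac{1}{2\sqrt{\lambda}}
\;\Longrightarrow\;
\sqrt{n}\big(\sqrt{\bar X_n} - \sqrt{\lambda}\big) \xrightarrow{d} N\!\Big(0, \lambda \cdot \frac{1}{4\lambda}\Big) = N\!\Big(0, \tfrac14\Big).
\]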
Consider the setting of Problem 6.21, where (Xi, Yi) are independent N(μi, σ^2) for i = 1,..., n. The parameters μ1,..., μn and σ^2 are all unknown. For testing σ = 1 against σ > 1, determine the
Assume (Ui, Vi) is bivariate normal with correlation ρ. Let ρ̂n denote the sample correlation given by (11.29). Verify the limit result (11.31).
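A Monte Carlo check can make the limit concrete. I do not have (11.31) in front of me, so the sketch below assumes it is the standard bivariate-normal result √n(ρ̂n − ρ) d→ N(0, (1 − ρ^2)^2); treat that variance as an assumption.

import numpy as np

rng = np.random.default_rng(1)
rho, n, reps = 0.6, 400, 2000
cov = np.array([[1.0, rho], [rho, 1.0]])

stats = []
for _ in range(reps):
    u, v = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    rho_hat = np.corrcoef(u, v)[0, 1]
    stats.append(np.sqrt(n) * (rho_hat - rho))

print(np.std(stats), 1 - rho ** 2)    # empirical sd vs. the assumed asymptotic sd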
Use … to prove (11.28).
Suppose R is a real-valued function on IR^k with R(y) = o(|y|^p) as |y| → 0, for some p > 0. If Yn is a sequence of random vectors satisfying |Yn| = oP(1), then show R(Yn) = oP(|Yn|^p). Hint: Let g(y)
Prove part (ii) of Theorem 11.3.4.
Let X1,..., Xn be i.i.d. normal with mean θ and variance 1. Suppose θ̂n is a location equivariant sequence of estimators such that, for every fixed θ, n^{1/2}(θ̂n − θ) converges in
Suppose Xn d→ N(μ, σ^2). (i) Show that, for any sequence of numbers cn, P(Xn = cn) → 0. (ii) If cn is any sequence such that P(Xn > cn) → α, then cn → μ + σ z_{1−α}, where z_{1−α} is the
Suppose Pn is a sequence of probabilities and Xn is a sequence of real-valued random variables; the distribution of Xn under Pn is denoted L(Xn|Pn). Prove that L(Xn|Pn) is tight if and only if Xn/an
Show that tightness of a sequence of random vectors in IR^k is equivalent to each of the component variables being tight.
Prove Lemma 11.3.1.
Show how the interval (11.25) is obtained from (11.24).
In Example 11.3.4, let În be the interval (11.23). Show that, for any n, inf_p Pp{p ∈ În} = 0. Hint: Consider p positive but small enough so that the chance that a sample of size n results in 0
In Example 11.3.2, show that βn(pn) → 1 if n^{1/2}(pn − 1/2) → ∞ and βn(pn) → α if n^{1/2}(pn − 1/2) → 0.
(i) Prove Corollary 11.3.1. (ii) Suppose Xn d→ X and Cn P→ ∞. Show P{Xn ≤ Cn} → 1.
If Xn is a sequence of real-valued random variables, prove that Xn → 0 in Pn-probability if and only if EPn [min(|Xn|, 1)] → 0.
As in Example 11.3.1, consider the problem of testing P = P0 versus P = P1 based on n i.i.d. observations. This problem gives an alternative way to show that a most powerful level α (0 < α < 1) test
(i) Let K(P0, P1) be the Kullback–Leibler Information, defined in (11.21). Show that K(P0, P1) ≥ 0 with equality iff P0 = P1. (ii) Show the convergence (11.20) holds even when K(P0, P1) = ∞.
Suppose X1,..., Xn are i.i.d. real-valued random variables. Write Xi = Xi^+ − Xi^−, where Xi^+ = max(Xi, 0). Suppose Xi^− has a finite mean, but Xi^+ does not. Let X̄n be the sample mean. Show
Generalize Slutsky’s Theorem (Theorem 11.3.2) to the case where Xn is a vector, An is a matrix, and Bn is a vector.
Assume Xn d→ X and Yn P→ c, where c is a constant. Show that (Xn, Yn) d→ (X, c).
Suppose Xn is a sequence of random vectors. (i) Show Xn P→ 0 if and only if |Xn| P→ 0 (where the first zero refers to the zero vector and the second to the real number zero). (ii) Show that
Suppose Xn and X are real-valued random variables (defined on a common probability space). Prove that, if Xn converges to X in probability, then Xn converges in distribution to X. Show by
Prove a result analogous to … if {F̂n} is a random sequence, similar to how … is a generalization of Lemma 11.2.1.
Prove the following generalization of Lemma 11.2.1. Suppose {F̂n} is a sequence of random distribution functions satisfying F̂n(x) P→ F(x) at all x which are continuity points of a fixed
Give an example of an i.i.d. sequence of real-valued random variables such that the sample mean converges in probability to a finite constant, yet the mean of the sequence does not exist.
(Chebyshev’s Inequality) (i) Show that, for any real-valued random variable X and any constants a > 0 and c, E(X − c)^2 ≥ a^2 P{|X − c| ≥ a}. (ii) Hence, if Xn is any sequence of random
(Markov’s Inequality) Let X be a real-valued random variable with X ≥ 0. Show that, for any t > 0, P{X ≥ t} ≤ E[X I{X ≥ t}]/t ≤ E(X)/t; here I{X ≥ t} is the indicator variable that is 1 if X
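A one-line route to Markov's inequality, sketched for orientation: for t > 0 the pointwise bound X ≥ X I{X ≥ t} ≥ t I{X ≥ t} holds because X ≥ 0, and taking expectations gives

\[
E(X) \;\ge\; E\big[ X\, I\{X \ge t\} \big] \;\ge\; t\, P\{X \ge t\}.
\]

Applying this with X replaced by (X − c)^2 and t = a^2 recovers the Chebyshev bound in the preceding exercise.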
(i) Construct a sequence of distribution functions {Fn} on the real line such that Fn d→ F, but the convergence F_n^{-1}(1 − α) → F^{-1}(1 − α) fails, even if F is assumed continuous. (ii)
For a c.d.f. F with quantile function defined by F^{-1}(u) = inf{x : F(x) ≥ u}, show that: (i) F(x) ≥ u is equivalent to F^{-1}(u) ≤ x. (ii) F^{-1}(·) is nondecreasing and left continuous with
Suppose F and G are two probability distributions on IR^k. Let L be the set of (measurable) functions f from IR^k to IR satisfying |f(x) − f(y)| ≤ |x − y| and sup_x |f(x)| ≤ 1, where |·| is
Let Fn and F be c.d.f.s on IR. Show that weak convergence of Fn to F is equivalent to ρL (Fn, F) → 0, where ρL is the Lévy metric.
For cumulative distribution functions F and G on the real line, define the Kolmogorov–Smirnov distance between F and G to be dK(F, G) = sup_x |F(x) − G(x)|. Show that dK(F, G) defines a metric
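A small numerical illustration of dK, added as a sketch (the cdfs and grid are my own choices): it approximates the sup-distance on a fine grid for three normal cdfs and checks symmetry and the triangle inequality numerically.

import numpy as np
from scipy.stats import norm

grid = np.linspace(-8, 8, 20001)
F = norm(0.0, 1.0).cdf(grid)
G = norm(0.5, 1.0).cdf(grid)
H = norm(0.25, 1.5).cdf(grid)

def d_K(A, B):
    return np.max(np.abs(A - B))              # sup_x |A(x) - B(x)| approximated on the grid

print(d_K(F, G), d_K(G, F))                   # symmetry
print(d_K(F, G) <= d_K(F, H) + d_K(H, G))     # triangle inequality holds in this example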
Show that ρL(F, G) defined in Definition 11.2.3 is a metric; that is, show ρL(F, G) = ρL(G, F), ρL(F, G) = 0 if and only if F = G, and ρL(F, G) ≤ ρL(F, H) + ρL(H, G).