Questions and Answers of Statistical Techniques in Business
Prove Pólya’s Theorem 11.2.9. Hint: First consider the case of distributions on the real line.
Suppose X1,..., Xn are i.i.d. real-valued random variables with c.d.f. F. Assume ∃θ1 < θ2 such that F(θ1) = 1/4, F(θ2) = 3/4, and F is differentiable, with density f taking positive values at
Let X1,..., Xn be i.i.d. normal with mean θ and variance 1. Let X̄n be the usual sample mean and let X̃n be the sample median. Let pn be the probability that X̄n is closer to θ than X̃n is.
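The probability pn in this problem can be explored numerically. The sketch below (sample size, replication count, and seed are arbitrary choices, not from the text) estimates the probability that the sample mean lands closer to θ than the sample median for normal data:

```python
import random
import statistics

# Monte Carlo sketch (not the textbook's asymptotic argument): estimate
# p_n = P(|mean - theta| < |median - theta|) for i.i.d. N(theta, 1) data.
random.seed(0)

theta = 0.0
n = 11          # odd, so the sample median is the middle order statistic
reps = 20000

wins = 0
for _ in range(reps):
    x = [random.gauss(theta, 1.0) for _ in range(n)]
    mean = sum(x) / n
    med = statistics.median(x)
    if abs(mean - theta) < abs(med - theta):
        wins += 1

p_hat = wins / reps
```

For normal data the mean is the more efficient estimator, so the estimate comes out well above 1/2; the problem asks for the precise behaviour of pn.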
Generalize Theorem 11.2.8 to the case of the pth sample quantile.
Complete the proof of Theorem 11.2.8 by considering n even.
Let X1,..., Xn be i.i.d. with density p0 or p1, and consider testing the null hypothesis H that p0 is true. The MP level-α test rejects when ∏_{i=1}^n r(Xi) ≥ Cn, where r(Xi) = p1(Xi)/p0(Xi), or
Suppose Xn,1,..., Xn,n are i.i.d. Bernoulli trials with success probability pn. If pn → p ∈ (0, 1), show that n^(1/2)[X̄n − pn] →d N(0, p(1 − p)). Is the result true even if p is 0 or 1?
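A quick simulation can make the claimed limit plausible (a numerical check only, not the proof; n, the replication count, and the drifting sequence pn = p + 1/n are illustrative choices):

```python
import math
import random

# Sanity check: with p_n -> p = 0.3, the standardized mean
# n^(1/2) * (mean - p_n) should look approximately N(0, p(1-p)).
random.seed(1)

p = 0.3
n = 400
reps = 5000
p_n = p + 1.0 / n   # a drifting success probability with p_n -> p

zs = []
for _ in range(reps):
    successes = sum(1 for _ in range(n) if random.random() < p_n)
    zs.append(math.sqrt(n) * (successes / n - p_n))

mean_z = sum(zs) / reps
var_z = sum((z - mean_z) ** 2 for z in zs) / (reps - 1)
sd_z = math.sqrt(var_z)
target_sd = math.sqrt(p * (1 - p))  # limiting standard deviation
```

For the final question, note that if pn = λ/n then nX̄n converges to a Poisson(λ) distribution rather than a normal one, which suggests what happens at p = 0.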
Suppose Xk is a noncentral Chi-squared variable with k degrees of freedom and noncentrality parameter δk². (i) Show that (Xk − k)/(2k)^(1/2) →d N(μ, 1) if δk²/(2k)^(1/2) → μ as k → ∞. (ii)
In Example 11.2.2, show that Lyapounov’s Condition holds.
Show that Lyapounov’s Central Limit Theorem (Corollary 11.2.1)follows from the Lindeberg Central Limit Theorem (Theorem 11.2.5).
Show that Theorem 11.2.3 follows from Theorem 11.2.2.
Let Xn have characteristic function ζn. Find a counterexample to show that it is not enough to assume ζn(t) converges (pointwise in t) to a function ζ(t) in order to conclude that Xn converges in
Verify (11.9).
Show that the characteristic function of a sum of independent real-valued random variables is the product of the individual characteristic functions. (The converse is false; counterexamples are given
Suppose Xn →d X. Show that E f(Xn) need not converge to E f(X) if f is unbounded and continuous, or if f is bounded but discontinuous.
Prove the equivalence of (i) and (vi) in the Portmanteau Theorem (Theorem 11.2.1).
Show that x = (x1,..., xk ) is a continuity point of the distribution FX of X if the boundary of the set of (y1,..., yk ) such that yi ≤ xi for all i has probability 0 under the distribution of X.
Let X be N(0, 1) and Y = X. Determine the set of continuity points of the bivariate distribution of (X, Y ).
For a univariate c.d.f. F, show that the set of points of discontinuity is countable.
For each θ in the parameter space, let fn(θ) be a real-valued sequence. We say fn(θ) converges uniformly (in θ) to f(θ) if supθ |fn(θ) − f(θ)| → 0 as n → ∞. If the parameter space is a finite set, show that the
The nonexistence of (i) semirelevant subsets in Example 10.4.1 and (ii) relevant subsets in Example 10.4.2 extends to randomized conditioning procedures.
Instead of conditioning the confidence sets θ ∈ S(X) on a set C, consider a randomized procedure which assigns to each point x a probability ψ(x) and makes the confidence statement θ ∈ S(x)
Suppose X1 and X2 are i.i.d. with P{Xi = θ − 1} = P{Xi = θ + 1} = 1/2. Let C be the confidence set consisting of the single point (X1 + X2)/2 if X1 ≠ X2 and X1 − 1 if X1 = X2. Show that, for all
(i) Under the assumptions of the preceding problem, the uniformly most accurate unbiased (or invariant) confidence intervals for θ at confidence level 1 − α are θ = max(X(1) + d, X(n)) − 1 < θ
Let X1,..., Xn be independently distributed according to the uniform distribution U(θ, θ + 1).(i) Uniformly most accurate lower confidence bounds θ for θ at confidence level 1 − α exist and
Let X have probability density f (x − θ), and suppose that E|X|
Let X be a random variable with cumulative distribution function F. If E|X| < ∞, then ∫_{−∞}^0 F(x) dx and ∫_0^∞ [1 − F(x)] dx are both finite. [Apply integration by parts to the two integrals.]
(i) Verify the posterior distribution of the parameter given x claimed in Example 10.4.1. (ii) Complete the proof of (10.32).
In Example 10.4.1, check directly that the set C = {x : x ≤ −k or x ≥ k} is not a negatively biased semirelevant subset for the confidence intervals (X − c, X + c).
In Example 10.3.3, (i) the problem remains invariant under G but not under Ḡ; (ii) the statistic D is ancillary.
Section 10.4
Let V1,..., Vn be independently distributed as N(0, 1), and given V1 = v1,..., Vn = vn, let Xi (i = 1,..., n) be independently distributed as N(θvi, 1). (i) There does not exist a UMP test of H : θ
Let X1,..., Xm and Y1,..., Yn be positive, independent random variables distributed with densities f (x/σ) and g(y/τ ), respectively. If f and g have monotone likelihood ratios in (x, σ) and (y,
Let the real-valued function f be defined on an open interval. (i) If f is log-convex, it is convex. (ii) If f is strongly unimodal, it is unimodal.
Verify the density (10.16) of Example 10.3.2.
Suppose X = (U, Z), the density of X factors into pθ,ϑ(x) = c(θ, ϑ)gθ(u; z)hϑ(z)k(u, z), and the parameters θ, ϑ are unrelated. To see that these assumptions are not enough to ensure that Z is
In the situation of Example 10.2.2, the statistic Z remains S-ancillary when the parameter space is {(λ, μ) : μ ≤ λ}.
In the situation of Example 10.2.3, X + Y is binomial if and only if the odds ratio of that example equals 1.
Assuming the distribution (4.22) of Section 4.9, show that Z is S-ancillary for p = p+/(p+ + p−).
A sample of size n is drawn with replacement from a population consisting of N distinct unknown values{a1,..., aN }. The number of distinct values in the sample is ancillary.
Let X, Y have joint density p(x, y) = 2f(x)f(y)F(θxy), where f is a known probability density symmetric about 0, and F its cumulative distribution function. Then (i) p(x, y) is a probability
Let X be uniformly distributed on (θ, θ + 1), 0 < θ < ∞, let [X] denote the largest integer ≤ X, and let V = X − [X]. (i) The statistic V(X) is uniformly distributed on (0, 1) and is therefore
In the preceding problem, suppose the probabilities of 1,..., 6 points are given by (1 − θ)/6, (1 − 2θ)/6, (1 − 3θ)/6, (1 + θ)/6, (1 + 2θ)/6, (1 + 3θ)/6, respectively. Exhibit two different maximal ancillaries.
Consider n tosses with a biased die, for which the probabilities of 1,..., 6 points are given by (1 − θ)/12, (2 − θ)/12, (3 − θ)/12, (1 + θ)/12, (2 + θ)/12, (3 + θ)/12, respectively, and let Xi be the number of tosses showing i
An experiment with n observations X1,..., Xn is planned, with each Xi distributed as N(θ, 1). However, some of the observations do not materialize (for example, some of the subjects die, move away,
Let X, Y be independently normally distributed as N(θ, 1), and let V = Y − X, and W = Y − X if X + Y > 0, X − Y if X + Y ≤ 0. (i) Both V and W are ancillary, but neither is a function of the
In the preceding problem, suppose that the densities of X under E and F are θe^(−θx) and (1/θ)e^(−x/θ) respectively. Compare the UMP conditional and unconditional tests of H : θ = 1 against K : θ
With known probabilities p and q perform either E or F, with X distributed as N(θ, 1) under E or N(−θ, 1) under F. For testing H : θ = 0 against θ > 0 there exist a UMP unconditional and a UMP
Let X1,..., Xn be independently distributed, each with probability p or q as N(ξ, σ0²) or N(ξ, σ1²). (i) If p is unknown, determine the UMP unbiased test of H : ξ = 0 against K : ξ > 0. (ii)
The test given by (10.3), (10.8), and (10.9) is most powerful under the stated assumptions.
Under the assumptions of Problem 10.1, determine the most accurate invariant (under the transformation X → −X) confidence sets S(X) with P(ξ ∈ S(X) | E) + P(ξ ∈ S(X) | F) = 2γ. Find examples
Let the experiments of E and F consist in observing X : N(ξ, σ0²) and X : N(ξ, σ1²) respectively (σ0 < σ1), and let one of the two experiments be performed, with P(E) = P(F) = 1/2. For
In the regression model of Problem 7.8, generalize the confidence bands of Example 9.7.3 to the regression surfaces 1. h1(e1,..., es) = Σ_{j=1}^s ej βj; 2. h2(e2,..., es) = β1 + Σ_{j=2}^s ej βj
to the set of all contrasts. [Use the fact that the event |yi − y0| ≤ Δ for i = 1,..., s is equivalent to the event |Σ_{i=0}^s ci yi| ≤ Δ Σ_{i=1}^s |ci| for all (c0,..., cs) satisfying Σ_{i=0}^s ci =
In generalization of Problem 9.41, show how to extend the Dunnett intervals of
Dunnett’s method. Let X0 j (j = 1,..., m) and Xik (i = 1,..., s; k = 1,..., n) represent measurements on a standard and s competing new treatments, and suppose the X’s are independently
Construct an example [i.e., choose values n1 = ··· = ns = n, a level α, and a particular contrast (c1,..., cs)] for which the Tukey confidence intervals (9.150) are shorter than the Scheffé intervals (9.137),
to the present situation.
1. Let Xij (j = 1,..., n; i = 1,..., s) be independent N(ξi, σ²), σ² unknown. Then the problem of obtaining simultaneous confidence intervals for all differences ξj − ξi is invariant under G0,
In the preceding problem consider arbitrary contrasts Σ ci ξi with Σ ci = 0. The event |Xj − Xi − (ξj − ξi)| ≤ Δ for all i ≠ j (9.149) is equivalent to the event |Σ ci Xi − Σ ci ξi| ≤
Tukey’s T-Method. Let Xi (i = 1,..., r) be independent N(ξi, 1), and consider simultaneous confidence intervals L[(i, j); x] ≤ ξj − ξi ≤ M[(i, j); x] for all i ≠ j. (9.145) The problem of
1. In Example 9.7.1, the simultaneous confidence intervals (9.133) reduce to (9.137). 2. What change is needed in the confidence intervals of Example 9.7.1 if the v’s are not required to satisfy
1. In Example 9.7.2, the set of linear functions Σ wi αi = Σ wi (ξi· − ξ··) for all w can also be represented as the set of functions Σ wi ξi· for all w satisfying Σ wi = 0. 2. The set of linear
1. The confidence intervals L(u; y, S) = Σ ui yi − c(S) are equivariant under G3 if and only if L(u; by, bS) = bL(u; y, S) for all b > 0. 2. The most general confidence sets (9.131) which are
Let Xi (i = 1,..., r) be independent N(ξi, 1). 1. The only simultaneous confidence intervals equivariant under G0 are those given by (9.124). 2. The inequalities (9.124) and (9.126) are equivalent. 3.
1. For the confidence sets (9.114), equivariance under G1 and G2 reduces to (9.115) and (9.116) respectively. 2. For fixed (y1,..., yr), the statements Σ ui yi ∈ A hold for all (u1,..., ur) with Σ ui²
1. A function L satisfies the first equation of (9.106) for all u, x, and orthogonal transformations Q if and only if it depends on u and x only through u′x, x′x, and u′u. 2. A function L is
The Tukey T-method leads to the simultaneous confidence intervals |Xj· − Xi· − (μj − μi)| ≤ Cσ̂/√(sn(n − 1)) for all i, j. (9.144) [The probability of (9.144) is independent of the μ’s
Show that the Tukey levels (vi) satisfy (9.95) when s is even but not when s is odd.
Prove Lemma 9.5.3 when s is odd.
In Lemma 9.5.2, show that αs−1 = α is necessary for admissibility.
1. For the validity of Lemma 9.5.1 it is only required that the probability of rejecting homogeneity of any set containing {μi1 ,..., μiv1} as a proper subset tends to 1 as the distance between the
In general, show Cs = C1*. In the case s = 2, show (9.67).
Section 9.5
Prove part (i) of Theorem 9.4.3.
In general, the optimality results of Section 9.4 require the procedures to be monotone. To see why this is required, consider Theorem 9.4.2(i). Show the procedure E to be inadmissible. Hint: One can
Under the assumptions of Theorem 9.4.1, suppose there exists another monotone rule E that strongly controls the FWER, and such that Pθ{d^c_{0,0}} ≤ Pθ{e^c_{0,0}} for all θ ∈ ω^c_{0,0}, (9.143) with
We have suppressed the dependence of the critical constants C1,...,Cs in the definition of the stepdown procedure D, and now more accurately call them Cs,1,...,Cs,s. Argue that, for fixed s, Cs,j is
Prove Lemma 9.4.2.
Suppose (X1,..., Xs) has a multivariate c.d.f. F(·). For θ ∈ R^s, let Fθ(x) = F(x − θ) define a multivariate location family. Show that (9.55) is satisfied for this family. (In particular, it
Suppose you apply the BH method based on p-values p̂1,..., p̂s. If each p-value is actually recorded twice (so that you now have 2s p-values), how would the two applications of the BH method
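One way to build intuition here is to run the Benjamini–Hochberg step-up rule on both lists and compare (a numerical experiment, not the requested general argument; the p-values and level below are arbitrary illustrative choices):

```python
# Implement the BH step-up method and compare the rejections from the
# original p-values with those from the same p-values recorded twice.

def bh_rejections(pvals, alpha):
    """Return the set of indices rejected by the BH method at level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_star = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k_star = rank   # largest rank whose p-value clears its threshold
    return {i for rank, i in enumerate(order, start=1) if rank <= k_star}

alpha = 0.05
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]

orig = bh_rejections(p, alpha)

# Record every p-value twice; index i in the doubled list corresponds
# to hypothesis i % len(p) in the original list.
doubled = p + p
dup = bh_rejections(doubled, alpha)
dup_hyps = {i % len(p) for i in dup}
```

In this example the duplicated list rejects exactly the same hypotheses as the original list, each one twice.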
Assume the joint distribution of p-values is PRDS on the set I of true null hypotheses. (i) Show that, for any increasing set D, P{(p̂1,..., p̂s) ∈ D | p̂i ≤ u} (9.142) is nondecreasing in u for
In Example 9.3.1, suppose the multiple testing problem specifies Hi : μi = 0 against the alternative μi > 0, with the covariance matrix known. As in the example, assume all components of the covariance matrix are nonnegative. Define p-values by p̂i =
This problem points to connections between methods that control the FDP in the sense of (9.49) and methods that control its expected value, the FDR. (i) Show, for any random variable X on [0, 1], we
If F is the number of false discoveries of some multiple testing procedure, then show the per-family error rate E(F) satisfies the crude inequalities P{F ≥ 1} ≤ E(F) ≤ s P{F ≥ 1}, where s is
The closure method starts with a family of tests of HK to produce a multiple decision rule. Conversely, given any multiple testing decision rule (not necessarily obtained by the closure method), one
For testing H1,..., Hs based on p-values p̂1,..., p̂s, suppose the closure method is applied and large values of Tk = Tk(p̂i1,..., p̂ik) are used to test the intersection hypothesis HK, where
Verify that Hommel’s method as stated in Example 9.2.5 can be obtained by the closure method when using Simes’ tests for the intersection hypotheses.
As in Procedure 9.1.1, suppose that a test of the individual hypothesis Hj is based on a test statistic Tn,j, with large values indicating evidence against Hj. Assume ∩_{j=1}^s ωj is not empty.
Show that the Holm method is a special case of the closure method by using the Bonferroni method to test intersection hypotheses.
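The claimed equivalence can be checked by brute force for small s (a sketch, not the requested proof; the p-value vectors and level are arbitrary test cases, and closure_bonferroni enumerates all intersection hypotheses explicitly):

```python
from itertools import combinations

def holm(pvals, alpha):
    """Holm stepdown: reject in order of increasing p-value while
    p_(i) <= alpha / (s - i + 1); stop at the first failure."""
    s = len(pvals)
    order = sorted(range(s), key=lambda i: pvals[i])
    rejected = set()
    for step, i in enumerate(order):       # step = 0, 1, ...
        if pvals[i] <= alpha / (s - step):
            rejected.add(i)
        else:
            break
    return rejected

def closure_bonferroni(pvals, alpha):
    """Closure method, testing each intersection H_K by Bonferroni:
    H_K is rejected when min_{j in K} p_j <= alpha / |K|.
    H_i is rejected iff every H_K with i in K is rejected."""
    s = len(pvals)
    rejected = set()
    for i in range(s):
        ok = True
        for size in range(1, s + 1):
            for K in combinations(range(s), size):
                if i in K and min(pvals[j] for j in K) > alpha / size:
                    ok = False
        if ok:
            rejected.add(i)
    return rejected

alpha = 0.05
cases = [
    [0.001, 0.02, 0.03, 0.2],
    [0.005, 0.01, 0.02, 0.04],
    [0.3, 0.2, 0.6, 0.9],
    [0.012, 0.013, 0.014, 0.015],
]
results = [(holm(p, alpha), closure_bonferroni(p, alpha)) for p in cases]
```

On each test case the two procedures reject the same set of hypotheses, as the exercise asserts in general.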
Consider testing H1,..., Hs, with Hi specifying θi = 0 against two-sided alternatives. In order to control the mixed directional familywise error rate in (9.15), a simple device is to consider the 2s
Show that a stepdown version of Tukey’s method and Duncan’s method controls the FWER.
In Example 9.1.7, verify that the stepdown procedure based on the maximum of Xj/√σj,j improves upon the Holm procedure. By Theorem 9.1.3, the procedure has FWER ≤ α. Compare the two
Under the assumptions of Theorem 9.1.2 and independence of the p-values, the critical values α/(s − i + 1) can be increased to 1 − (1 − α)^(1/(s−i+1)). For any i, calculate the limiting value of
Show that, under the assumptions of Theorem 9.1.2, it is not possible to increase any of the critical values αi = α/(s − i + 1) in the Holm procedure(9.18) without violating the FWER.
Show that Duncan’s method controls the FWER and the mixed directional familywise error rate at level α. Find an expression for the adjusted p-values for Duncan’s method.
Show that (9.14) implies (9.12). Investigate under what conditions the probability of a Type 3 error can be bounded by α/2.
(i) Under the assumptions of Theorem 9.1.1, suppose also that the p-values are mutually independent. Show that the Sidák procedure which rejects any Hi for which p̂i < c(α, s) = 1 − (1 − α)^(1/s)
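The Sidák and Bonferroni cutoffs are easy to compare numerically (the values of α and s below are illustrative; under mutually independent uniform p-values with all nulls true, the quantity fwer below is the exact familywise error rate of the Sidák procedure):

```python
# Compare the Sidak cutoff 1 - (1 - alpha)^(1/s) with the Bonferroni
# cutoff alpha/s, and compute the exact FWER under independence when
# every null hypothesis is true and each p-value is U(0, 1).
alpha = 0.05
results = {}
for s in (2, 5, 10, 100):
    sidak = 1 - (1 - alpha) ** (1 / s)
    bonferroni = alpha / s
    fwer = 1 - (1 - sidak) ** s   # exact FWER in the independent case
    results[s] = (sidak, bonferroni, fwer)
```

The Sidák cutoff always exceeds the Bonferroni cutoff for s > 1, so the procedure rejects at least as often while attaining FWER exactly α in this independent case.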
(i) Generalize Theorem 9.1.1 to the weighted Bonferroni method. Hint: Part (i) directly generalizes. To show (ii), let J = i with probability αwi and J = 0 with probability 1 − α. Let U ∼ U(0, 1)
Show that the Bonferroni procedure, while generally conservative, can have FWER = α by exhibiting a joint distribution for (p̂1,..., p̂s) satisfying (9.5) such that P{mini p̂i ≤ α/s} = α.
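One construction that appears to achieve exact level (a sketch, not necessarily the intended answer): derive all s p-values from a single uniform variable by cyclic shifts, so that each p-value is marginally U(0, 1) while the rejection events are disjoint. The Monte Carlo below checks the resulting FWER; s, α, seed, and replication count are arbitrary:

```python
import random

# With U ~ U(0,1) and p_i = (U + (i-1)/s) mod 1, each p_i is marginally
# uniform, and the events {p_i <= alpha/s} correspond to s disjoint
# intervals for U of length alpha/s each, so
# P{min_i p_i <= alpha/s} = s * (alpha/s) = alpha.
random.seed(2)

alpha = 0.05
s = 5
reps = 200000

hits = 0
for _ in range(reps):
    u = random.random()
    pvals = [(u + i / s) % 1.0 for i in range(s)]
    if min(pvals) <= alpha / s:
        hits += 1

fwer_hat = hits / reps   # Monte Carlo estimate of the FWER
```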
Provide the missing details in Example 8.7.4. What happens in the case a > 2c_{1−α}?
Find the maximin monotone level α test in Example 8.7.3 for general Δ. Also allow the region ω(Δ) to be generalized and have the form {θ : θi ≥ Δi for some i}, where the Δi may vary with i.
Showing 300–400 of 5757