Questions and Answers: Statistical Techniques in Business
In Example 8.7.3, assume the covariance matrix is the identity. Calculate the minimum power of the likelihood ratio test over the region ω and compare it to that of the maximin monotone test.
In Example 8.7.3, find the most powerful test for testing the null hypothesis against a fixed alternative θ = a and compute the power of this test. [The least favorable distribution puts mass one at the point …]
Suppose X1,..., Xn are i.i.d. N(μ, σ2) with both parameters unknown. Show that, for testing μ ≤ 0 against μ > 0, the one-sided t-test is not monotone increasing. Does the assumption (8.39) hold
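A region ω is monotone increasing if x ∈ ω and x′ ≥ x coordinatewise imply x′ ∈ ω. The following numerical sketch (an illustration only, not the requested proof, with made-up data) shows how the one-sided t-test's rejection region can fail this: raising a single observation inflates the sample standard deviation enough to pull the t-statistic back below the cutoff.

```python
import numpy as np
from scipy.stats import t

# Counterexample sketch: increasing one coordinate moves the sample
# out of the one-sided t-test's rejection region, so the region is
# not monotone increasing.
alpha, n = 0.05, 5
crit = t.ppf(1 - alpha, df=n - 1)

def t_stat(x):
    x = np.asarray(x, dtype=float)
    return np.sqrt(len(x)) * x.mean() / x.std(ddof=1)

x = np.array([1.0, 1.0, 1.0, 1.0, 1.1])     # nearly constant, tiny s: reject
x2 = x.copy()
x2[-1] = 50.0                               # raise one coordinate: s explodes
print(t_stat(x) > crit, t_stat(x2) > crit)  # True, False
```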
In Example 8.7.1, show how one may add a region F to the rejection region E of the likelihood ratio test and still maintain the size of the test.
Suppose X = (X1,..., Xs) ∼ Pθ, where the Pθ form a multivariate location model. So Pθ is the distribution of Z + θ, where Z has a fixed (known) distribution on IR^s. For testing superiority as in …
In Example 8.7.1, determine the likelihood ratio test for a general covariance matrix, and show that it reduces to the test that rejects for large values of min(X1,..., Xs) when the covariance matrix is the identity. How do you …
Show that a region ω is monotone increasing if and only if its complement is monotone decreasing. In the plane, how would you characterize the class of all monotone increasing regions?
… is most stringent.
Section 8.7
… with each of the sets consisting of two points (ξ1, η1, σ), (ξ2, η2, σ) such that ξ1 = ζ − nδ/(m + n), η1 = ζ + mδ/(m + n); ξ2 = ζ + nδ/(m + n), η2 = ζ − mδ/(m + n) for some ζ and δ.]
Let (Z1,..., ZN) = (X1,..., Xm, Y1,..., Yn) be distributed according to the joint density (5.55), and consider the problem of testing H : η = ξ against the alternatives that the X's and Y's …
Let {ωi} be a class of mutually exclusive sets of alternatives such that the envelope power function is constant over each ωi and that ∪ ωi = Ω − ΩH, and let ϕi maximize the minimum power over …
Existence of most stringent tests. Under the assumptions of Problem 8.1 there exists a most stringent test for testing θ ∈ ΩH against θ ∈ Ω − ΩH.
Suppose X1,..., Xk are independent, with Xi ∼ N(θi, 1). Consider testing the null hypothesis θ1 = ··· = θk = 0 against max |θi| ≥ δ, for some δ > 0. Find a maximin level α test as …
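One natural candidate here (whether it is the maximin test is exactly what the problem asks you to determine) rejects when max |Xi| is large; a small sketch, with made-up k, α, and δ, of its level-α cutoff and its power at an alternative with a single coordinate equal to δ:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical illustration: the test that rejects when max_i |X_i| is large.
k, alpha, delta = 5, 0.05, 2.0

# Under H, max|X_i| has cdf (2*Phi(c) - 1)^k, so solve for the level-alpha cutoff.
c = norm.ppf(0.5 * (1 + (1 - alpha) ** (1 / k)))

# Power at one boundary alternative, theta = (delta, 0, ..., 0).
p_accept = (2 * norm.cdf(c) - 1) ** (k - 1) * (norm.cdf(c - delta) - norm.cdf(-c - delta))
print(f"cutoff c = {c:.3f}, power at (delta, 0, ..., 0) = {1 - p_accept:.3f}")
```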
Suppose X has the multivariate normal distribution in R^k with unknown mean vector h and known positive definite covariance matrix C^{-1}. Consider testing h = 0 versus |C^{1/2} h| ≥ b for some b > 0, …
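For orientation only: since X ∼ N(h, C^{-1}) implies that the quadratic form X'CX has a noncentral χ²_k distribution with noncentrality |C^{1/2}h|², a test that rejects for large X'CX has an easily computed cutoff and boundary power (a numerical sketch with made-up k, α, b; whether this test is the maximin solution is the content of the problem):

```python
from scipy.stats import chi2, ncx2

# Numerical sketch: cutoff for Q = X'CX at level alpha and its power on the
# boundary |C^{1/2} h| = b, where Q ~ noncentral chi^2_k with noncentrality b^2.
k, alpha, b = 4, 0.05, 2.5
cutoff = chi2.ppf(1 - alpha, df=k)
boundary_power = ncx2.sf(cutoff, df=k, nc=b ** 2)
print(f"cutoff = {cutoff:.3f}, power at the boundary = {boundary_power:.3f}")
```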
Suppose that the problem of testing θ ∈ H against θ ∈ K remains invariant under G, that there exists a UMP almost invariant test ϕ0 with respect to G, and that the assumptions of Theorem
Let X = (X1,..., Xp) and Y = (Y1,..., Yp) be independently distributed according to p-variate normal distributions with zero means and covariance matrices E(Xi X j) = σi j and E(YiYj) = σi j .(i)
Suppose in Problem 8.30(i) the variance σ2 is unknown and that the data consist of X1,..., Xn together with an independent random variable S2 for which S2/σ2 has a χ2-distribution. If K is
Let X1, …, Xn be independent normal with means θ1, …, θn and variance 1. (i) Apply the results of the preceding problem to the testing of H : θ1 = ··· = θn = 0 against K : Σ θi² = r², for any …
To generalize the results of the preceding problem to the testing of H : f versus K : {fθ, θ ∈ ω}, assume: (i) there exists a group G that leaves H and K invariant; (ii) Ḡ is transitive over …
For testing H : f0 against K : { f1,..., fs}, suppose there exists a finite group G = {g1,..., gN } which leaves H and K invariant and which is transitive in the sense that given f j , f j(1 ≤ j,
Suppose X1,..., Xn are independent normal variables with Xi ∼ N(ξi, 1). The null hypothesis specifies all ξi = 0. Fix an integer k ≥ 1. Suppose ω specifies that at least k of the Xi have mean …
Suppose Problems 8.24–8.25 are modified so that the one nonzero mean may be ξ or −ξ. How do the results change?
… against the alternatives K = {K1,..., Kn, K′1,..., K′n}, where under K′i : ξj = 0 for all j ≠ i, ξi = −ξ; determine the UMP test under a suitable group G, and show that it is both maximin and …
The UMP invariant test φ0 of Problem 8.24: (i) maximizes the minimum power over K; (ii) is admissible. (iii) For testing the hypothesis H of …
Let X1, …, Xn be independent normal variables with variance 1 and means ξ1, …, ξn, and consider the problem of testing H : ξ1 =···= ξn = 0 against the alternatives K = {K1,..., Kn}, where
(i) In the preceding problem determine the maximin test if ω is replaced by Σ aiμi ≥ d, where the a's are given positive constants. (ii) Solve part (i) with Var(Xi) = 1 replaced by Var(Xi) = σ² …
Let X1, …, Xn be independent and normally distributed with means E(Xi) = μi and variance 1. The test of H : μ1 = ··· = μn = 0 that maximizes the minimum power over ω : Σ μi ≥ d rejects when …
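The statement above is cut off mid-sentence; assuming the test in question rejects for large Σ Xi (the standard form of answer for a half-space alternative of this kind), a sketch of its cutoff and minimum power with made-up n, α, d:

```python
import numpy as np
from scipy.stats import norm

# Assumed form of the test: reject when sum(X_i) >= C.  Under H the sum is
# N(0, n); the minimum power over sum(mu_i) >= d is attained at sum(mu_i) = d.
n, alpha, d = 10, 0.05, 4.0
C = np.sqrt(n) * norm.ppf(1 - alpha)
min_power = 1 - norm.cdf((C - d) / np.sqrt(n))
print(f"C = {C:.3f}, minimum power over omega = {min_power:.3f}")
```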
Write out a formal proof of the maximin property outlined in the last paragraph of Section 8.3.
Section 8.4
Evaluate the test (8.21) explicitly for the case that Pi is the normal distribution with mean ξi and known variance σ², and when ε0 = ε1.
Prove the formula (8.15).
Show that there exists a unique constant b for which q0 defined by (8.11) is a probability density with respect to μ, that the resulting q0 belongs to P0, and that b → ∞ as ε0 → 0.
Double-exponential distribution. Let X1, …, Xn be a sample from the double-exponential distribution with density (1/2)e^{−|x−θ|}. The LMP test for testing θ ≤ 0 against θ > 0 is the sign test, …
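A small simulation sketch of the sign test's power under a Laplace (double-exponential) shift; the cutoff uses a normal approximation to the null binomial distribution, and the sample size, shift, and level are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sign test: reject theta <= 0 when the number of positive observations is
# large.  The cutoff below is the normal approximation to Bin(n, 1/2).
n, theta, alpha, reps = 25, 0.5, 0.05, 20000
cutoff = n / 2 + 1.645 * np.sqrt(n) / 2
x = rng.laplace(loc=theta, scale=1.0, size=(reps, n))
power = np.mean((x > 0).sum(axis=1) > cutoff)
print(f"approximate power of the sign test at theta = {theta}: {power:.3f}")
```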
(i) Let X have the binomial distribution b(p, n), and consider testing H : p = p0 at level α against the alternatives K : p/q ≤ (1/2)p0/q0 or p/q ≥ 2p0/q0. For α = .05 determine the smallest sample size …
Let x = (x1,..., xn), and let gθ (x,ξ) be a family of probability densities depending on θ = (θ1,...,θr) and the real parameter ξ , and jointly measurable in x and ξ . For each θ, let hθ (ξ
Let fθ(x) = θg(x) + (1 − θ)h(x) with 0 ≤ θ ≤ 1. Then fθ(x) satisfies the assumptions of Lemma 8.2.1 provided g(x)/h(x) is a nondecreasing function of x.
Let Z1,..., Zn be independently and identically distributed according to a continuous distribution D, of which it is assumed only that it is symmetric about some (unknown) point. For testing the …
Let the distribution of X depend on the parameters (θ, ϑ) = (θ1,...,θr, ϑ1,...,ϑs). A test of H : θ = θ0 is locally strictly unbiased if for each ϑ, (a) βϕ(θ0, ϑ) = α, (b) there exists …
The following two examples show that the assumption of a finite sample space is needed in Problem 8.7. (i) Let X1, …, Xn be i.i.d. according to a normal distribution N(σ, σ²) and test H : σ = σ0 …
Locally uniformly most powerful tests. If the sample space is finite and independent of θ, the test ϕ0 of Problem 8.4(i) is not only LMP but also locally uniformly most powerful (LUMP) in the sense
A level-α test ϕ0 is locally unbiased (loc. unb.) if there exists Δ0 > 0 such that βϕ0(θ) ≥ α for all θ with 0 < d(θ) < Δ0; it is LMP loc. unb. if it is loc. unb. and if, given any other …
Under the setting of Problem 3.35, determine the locally most powerful test.
Locally most powerful tests. Let d be a measure of the distance of an alternative θ from a given hypothesis H. A level-α test ϕ0 is said to be locally most powerful (LMP) if, given any other …
In Example 8.1.3, complete the argument using Corollary 8.1.1 to find the maximin test without assuming you already know the UMPI test. What if the alternative specifies Σ ξi² ≥ δ²?
In Example 8.1.1, explain why the maximin test is not UMPU for the alternatives considered.
Existence of maximin tests. Let (X, A) be a Euclidean sample space, and let the distributions Pθ, θ ∈ Ω, be dominated by a σ-finite measure over (X, A). For any mutually exclusive subsets ΩH …
Suppose (X1,..., Xp) have the multivariate normal density (7.51), so that E(Xi) = ξi and A−1 is the known positive definite covariance matrix. The vector of means ξ = (ξ1,...,ξp) is known to
Bayes character and admissibility of Hotelling's T². (i) Let (Xα1,..., Xαp), α = 1, …, n, be a sample from a p-variate normal distribution with unknown mean ξ = (ξ1,...,ξp) and covariance …
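As a computational aside (not the Bayes and admissibility argument the problem asks for), a minimal sketch of the one-sample Hotelling T² statistic and its null F distribution, on simulated data:

```python
import numpy as np
from scipy.stats import f

# One-sample Hotelling T^2 for H: xi = 0, on simulated p-variate normal data.
rng = np.random.default_rng(1)
n, p = 30, 3
X = rng.multivariate_normal(mean=np.zeros(p), cov=np.eye(p), size=n)

xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)                 # unbiased sample covariance
T2 = n * xbar @ np.linalg.solve(S, xbar)
F_stat = (n - p) / (p * (n - 1)) * T2       # ~ F(p, n - p) under H
print(f"T^2 = {T2:.3f}, p-value = {f.sf(F_stat, p, n - p):.3f}")
```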
Extend the one-sample problem to the two-sample problem for testing whether two multivariate normal distributions with common unknown covariance matrix have the same mean vectors.
For testing that a multivariate mean vector ξ is zero in the case where the covariance matrix is known, derive a UMP invariant test.
The confidence ellipsoids (7.59) for (ξ1,...,ξp) are equivariant under the group of Section 7.9.
Verify that the density of W is given by (7.55).
Show that the statistic W given in (7.55) is maximal invariant. [Hint: If (X̄, S) and (Ȳ, T) are such that X̄ S^{-1} X̄ = Ȳ T^{-1} Ȳ, then a transformation C that transforms one to the other …
If n ≤ p, the matrix S with (i, j) component Si,j defined in (7.53) is singular. If n > p, it is nonsingular with probability 1. If n ≤ p, the test φ ≡ α is the only test that is invariant …
… for suitable values of ρ1 and ρ2.
Section 7.9
Let (X1 j1,..., X1 jn; X2 j1,..., X2 jn;...; Xaj1,..., Xajn), j =1,...,b, be a sample from an an-variate normal distribution. Let E(Xijk ) = ξi , and denote by ii the matrix of covariances of (Xi
Among all tests that are both unbiased and invariant under suitable groups under the assumptions of Problem 7.35, there exist UMP tests of (i) H1 : α1 = ··· = αa = 0; (ii) H2 : σ²_B/(nσ²_C + σ²) …
… suggests the mixed model Xijk = μ + αi + Bj + Cij + Uijk with the B's, C's, and U's as in Problem 7.34. Reduce this model to a canonical form involving X··· and the sums of …
Formal analogy with the model of …
… leads to the model Xijk = μ + Ai + Bj + Cij + Uijk (i = 1,..., a; j = 1,..., b; k = 1,..., n), where the A's, B's, C's, and U's are independent normal with mean zero and variances σ²_A, σ²_B, …
Permitting interactions in the model of …
… if ρ = σ²_b/(σ²_b + σ²), except that instead of being positive, ρ now only needs to satisfy ρ > −1/(p − 1).]
Under the assumptions of the preceding problem, determine the UMP invariant test (with respect to a suitable G) of H : ξ1 = ··· = ξp. [Show that this model agrees with that of …
Let (X1j,..., Xpj), j = 1,..., n, be a sample from a p-variate normal distribution with mean (ξ1,...,ξp) and covariance matrix Σ = (σij), where σij = σ² when j = i, and σij = ρσ² when …
… and the α's are constants adding to zero, determine (with respect to a suitable group leaving the problem invariant) (i) a UMP invariant test of H : α1 = ··· = αa; (ii) a UMP invariant test of H …
For the mixed model Xi j = μ + αi + Bj + Ui j (i = 1,..., a; j = 1,..., n), where the B’s and U’s are as in
Consider the additive random-effects model Xijk = μ + Ai + Bj + Uijk (i = 1,..., a; j = 1,..., b; k = 1,..., n), where the A’s, B’s, and U’s are independent normal with zero means and
Under the assumptions of the preceding problem, the null distribution of W∗ is independent of q and hence the same as in the normal case, namely, F with r and n − s degrees of freedom. [See
Consider the following generalization of the univariate linear model of Section 7.1. The variables Xi (i = 1,..., n) are given by Xi = ξi + Ui , where(U1,..., Un) have a joint density which is
Consider the mixed model obtained from (7.68) by replacing the random variables Ai by unknown constants αi satisfying Σ αi = 0. With (ii) replaced by Σ αi²/(nσ²_C + σ²), there again exist …
Consider the model II analogue of the two-way layout of Section 7.5, according to which Xijk = μ + Ai + Bj + Ci j + Eijk (7.68)(i = 1,..., a; j = 1,..., b; k = 1,..., n), where the Ai , Bj , Ci j ,
The general nested classification with a constant number of observations per cell, under model II, has the structure Xijk··· = μ + Ai + Bi j + Cijk +···+ Uijk···, i = 1,..., a; j = 1,...,
If Xi j is given by (7.39) but the number ni of observations per batch is not constant, obtain a canonical form corresponding to (7.40) by letting Yi1 = √ni Xi·. Note that the set of sufficient
The tests (7.46) and (7.47) are UMP unbiased.
In the model (7.39), the correlation coefficient ρ between two observations Xij, Xik belonging to the same class, the so-called intraclass correlation coefficient, is given by ρ = σ²_A/(σ²_A + …
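For intuition, a simulation sketch (with made-up variance components) that recovers ρ from the usual between- and within-class mean squares of a one-way random-effects layout; the moment estimator used here is standard but is offered only as an illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# One-way random-effects data: X_ij = A_i + e_ij with a classes of size n.
a, n = 50, 8
sigma_A, sigma = 2.0, 1.0                   # made-up variance components
X = rng.normal(0, sigma_A, size=(a, 1)) + rng.normal(0, sigma, size=(a, n))

msb = n * X.mean(axis=1).var(ddof=1)        # between-class mean square
msw = ((X - X.mean(axis=1, keepdims=True)) ** 2).sum() / (a * (n - 1))
rho_hat = (msb - msw) / (msb + (n - 1) * msw)
print(f"true rho = {sigma_A**2 / (sigma_A**2 + sigma**2):.3f}, estimate = {rho_hat:.3f}")
```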
(i) The test (7.41) of H : ≤ 0 is UMP unbiased.(ii) Determine the UMP unbiased test of H : = 0 and the associated uniformly most accurate unbiased confidence sets for .
Let X1,..., Xn be independently normally distributed with common variance σ² and means ξi = α + βti + γti², where the ti are known. If the coefficient vectors (t1^k,..., tn^k), k = 0, 1, …
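A minimal computational sketch of this model with hypothetical ti and coefficients: a least-squares fit of ξi = α + βti + γti² and the usual t-statistic for γ = 0 (the problem itself concerns the existence and form of optimal tests for such coefficients):

```python
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(3)

# Least-squares fit of xi_i = alpha + beta*t_i + gamma*t_i^2 and the usual
# t-statistic for gamma = 0 (hypothetical design points and coefficients).
t = np.linspace(-1, 1, 20)
X = np.column_stack([np.ones_like(t), t, t ** 2])
y = 1.0 + 0.5 * t - 0.3 * t ** 2 + rng.normal(0, 0.2, size=t.size)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
s2 = resid @ resid / (len(y) - 3)           # error-variance estimate
cov = s2 * np.linalg.inv(X.T @ X)
t_gamma = coef[2] / np.sqrt(cov[2, 2])
print(f"gamma_hat = {coef[2]:.3f}, two-sided p = {2 * t_dist.sf(abs(t_gamma), len(y) - 3):.3f}")
```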
Let X1,..., Xm; Y1,..., Yn be independently normally distributed with common variance σ² and means E(Xi) = α + β(ui − ū), E(Yj) = γ + δ(vj − v̄), where the u's and v's are known …
In a regression situation, suppose that the observed values Xj and Yj of the independent and dependent variable differ from certain true values X′j and Y′j by errors Uj, Vj which are independently …
In the three-factor situation of the preceding problem, suppose that a = b = m. The hypothesis H can then be tested on the basis of m2 observations as follows. At each pair of levels (i, j) of the
Let Xijk (i = 1,..., a; j = 1,..., b; k = 1,..., m) be independently normally distributed with common variance σ² and mean E(Xijk) = μ + αi + βj + γk, where Σ αi = Σ βj = Σ γk = 0. Determine the …
… with Lemma 3.4.2.]
Let Xλ denote a random variable distributed as noncentral χ² with f degrees of freedom and noncentrality parameter λ². Then Xλ is stochastically larger than Xλ′ if λ′ < λ. [If λ′ < λ and z > 0, then P{|Y + λ| ≤ z} ≤ …
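A quick numerical check of the claimed stochastic ordering, using SciPy's noncentral χ² survival function (parameterized by the noncentrality λ²) with made-up values of f and λ:

```python
import numpy as np
from scipy.stats import ncx2

# For every z, the survival function of a noncentral chi^2 with f d.f. and
# noncentrality lambda^2 should be nondecreasing in lambda.
f_df = 5
z = np.linspace(0.1, 30.0, 300)
sf_small = ncx2.sf(z, df=f_df, nc=1.0 ** 2)   # lambda' = 1
sf_large = ncx2.sf(z, df=f_df, nc=2.0 ** 2)   # lambda  = 2
print(bool(np.all(sf_large >= sf_small)))     # True: X_2 stochastically larger
```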
In the two-way layout of Section 7.5 with a = b = 2, denote the first three terms in the partition of Σ (Xijk − X̄ij·)² by S²_A, S²_B, and S²_AB, corresponding to the A, B, and AB effects (i.e., …
The linear-hypothesis test of the hypothesis of no interaction in a two-way layout with m observations per cell is given by (7.28).
Let Z1,..., Zs be independently distributed as N(ζi, ai²), i = 1,...,s, where the ai are known constants. (i) With respect to a suitable group of linear transformations there exists a UMP invariant …
If the variables Xij (j = 1,..., ni; i = 1,...,s) are independently distributed as N(μi, σ²), then E[Σ ni(X̄i· − X̄··)²] = (s − 1)σ² + Σ ni(μi − μ·)² and E[Σ Σ (Xij − X̄i·)²] = (n − s)σ².
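A Monte Carlo check of these two identities, with made-up group sizes, means, and σ:

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo check of the two expected-sums-of-squares identities above.
ni = np.array([3, 5, 7])                      # made-up group sizes
mu = np.array([0.0, 1.0, 2.0])                # made-up group means
sigma = 1.5
s, n = len(ni), ni.sum()
mu_dot = (ni * mu).sum() / n

reps = 20000
ss_between = np.empty(reps)
ss_within = np.empty(reps)
for r in range(reps):
    groups = [rng.normal(m, sigma, size=k) for m, k in zip(mu, ni)]
    means = np.array([g.mean() for g in groups])
    grand = np.concatenate(groups).mean()
    ss_between[r] = (ni * (means - grand) ** 2).sum()
    ss_within[r] = sum(((g - g.mean()) ** 2).sum() for g in groups)

print(ss_between.mean(), (s - 1) * sigma ** 2 + (ni * (mu - mu_dot) ** 2).sum())
print(ss_within.mean(), (n - s) * sigma ** 2)
```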
Let X1,..., Xn be independently normally distributed with known variance σ²_0 and means E(Xi) = ξi, and consider any linear hypothesis with s ≤ n (instead of s < n which is required when the …
Let Xij (j = 1,..., mi) and Yik (k = 1,..., ni) be independently normally distributed with common variance σ² and means E(Xij) = ξi and E(Yik) = ξi + Δ. Then the UMP invariant test of H : Δ = 0 …
Under the assumptions of Section 7.1 suppose that the means ξi are given by ξi = Σ_{j=1}^s aij βj, where the constants aij are known and the matrix A = (aij) has full rank, and where the βj are …
Given any ψ2 > 0, apply Theorem 6.7.2 and Lemma 6.7.1 to obtain the F-test (7.7) as a Bayes test against a set of alternatives contained in the set 0 < ψ ≤ ψ2.
Section 7.2
Use Theorem 6.7.1 to show that the F-test (7.7) is α-admissible against the alternatives ψ ≥ ψ1, for any ψ1 > 0.
Best average power. (i) Consider the general linear hypothesis H in the canonical form given by (7.2) and (7.3) of Section 7.1, and for any ηr+1,...,ηs, σ, and ρ let S = S(ηr+1,...,ηs, σ : ρ) …
(i) The noncentral χ² and F distributions have strictly monotone likelihood ratio. (ii) Under the assumptions of Section 7.1, the hypothesis H : ψ² ≤ ψ²_0 (ψ0 > 0 given) remains invariant under …
Noncentral F- and beta-distribution. Let Y1,..., Yr; Ys+1,..., Yn be independently normally distributed with common variance σ² and means E(Yi) = ηi (i = 1,...,r); E(Yi) = 0 (i = s + 1,..., n). (i) …
Noncentral χ²-distribution. (i) If X is distributed as N(ψ, 1), the probability density of V = X² is p^ψ_V(v) = Σ_{k=0}^∞ Pk(ψ) f_{2k+1}(v), where Pk(ψ) = (ψ²/2)^k e^{−ψ²/2}/k! and where f_{2k+1} …
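A numerical check of this series representation, comparing the Poisson-weighted mixture of central χ² densities with SciPy's noncentral χ² density (ψ and the grid of v values are made up):

```python
import numpy as np
from scipy.stats import chi2, ncx2, poisson

# The density of V = X^2, X ~ N(psi, 1), as a Poisson(psi^2/2) mixture of
# central chi^2 densities with 2k + 1 degrees of freedom, versus scipy's ncx2.
psi = 1.7
v = np.linspace(0.05, 10.0, 50)
k = np.arange(0, 80)
weights = poisson.pmf(k, psi ** 2 / 2)                    # P_k(psi)
series = (weights[:, None] * chi2.pdf(v, 2 * k[:, None] + 1)).sum(axis=0)
print(np.max(np.abs(series - ncx2.pdf(v, df=1, nc=psi ** 2))))   # ~ 0
```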
Expected sums of squares. The expected values of the numerator and denominator of the statistic W∗ defined by (7.7) are E[Σ_{i=1}^r Yi²/r] = σ² + (1/r) Σ_{i=1}^r ηi² and E[Σ_{i=s+1}^n Yi²/(n − s)] = σ².
Consider the problem of obtaining a (two-sided) confidence band for an unknown continuous cumulative distribution function F.(i) Show that this problem is invariant both under strictly increasing and
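For concreteness, one familiar equivariant solution is the Kolmogorov–Smirnov band around the empirical cdf; a rough sketch using the asymptotic critical value (an approximation, and only one of the bands the problem admits):

```python
import numpy as np
from scipy.stats import kstwobign

# Asymptotic two-sided Kolmogorov-Smirnov confidence band for an unknown
# continuous cdf F, centered at the empirical cdf of a simulated sample.
rng = np.random.default_rng(5)
x = np.sort(rng.normal(size=100))
n, alpha = x.size, 0.05
d = kstwobign.ppf(1 - alpha) / np.sqrt(n)   # asymptotic critical value

ecdf = np.arange(1, n + 1) / n
lower = np.clip(ecdf - d, 0, 1)
upper = np.clip(ecdf + d, 0, 1)
print(f"half-width d = {d:.3f}; band at x[{n // 2}]: [{lower[n // 2]:.3f}, {upper[n // 2]:.3f}]")
```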
If the confidence sets S(x) are equivariant under the group G, then the probability Pθ{θ ∈ S(X)} of their covering the true value is invariant under the induced group G¯ .
Let Xij (j = 1,..., ni; i = 1,...,s) be samples from the exponential distribution E(ξi, σ). Determine the smallest equivariant confidence sets for (ξ1,..., ξr) with respect to the group X′ij = …
Let X1,..., Xn be a sample from the exponential distribution E(ξ, σ). With respect to the transformations X′i = bXi + a, determine the smallest equivariant confidence sets (i) for σ, both when size …