Questions and Answers of Statistical Sampling To Auditing
29. Instead of conditioning the confidence sets θ ∈ S(X) on a set C, consider a randomized procedure which assigns to each point x a probability ψ(x) and makes the confidence statement θ ∈ S(x) …
28. (i) Under the assumptions of the preceding problem, the uniformly most accurate unbiased (or invariant) confidence intervals for θ at confidence level 1 − α are θ̲ = max(X₍₁₎ + d, X₍ₙ₎) − 1 < θ …
27. Let X₁, …, Xₙ be independently distributed according to the uniform distribution U(θ, θ + 1). (i) Uniformly most accurate lower confidence bounds θ̲ for θ at confidence level 1 − α exist and …
26. Let X have probability density f(x − θ), and suppose that E|X| < ∞. For the confidence intervals X − c < θ there exist semirelevant but no relevant subsets. [Buehler (1959).]
25. Let X be a random variable with cumulative distribution function F. If E|X| < ∞, then ∫₋∞⁰ F(x) dx and ∫₀^∞ [1 − F(x)] dx are both finite. [Apply integration by parts to the two integrals.]
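The claim in Problem 25 can be sanity-checked numerically. A minimal sketch assuming the standard normal distribution as the example (for which both tail integrals equal E[X⁺] = 1/√(2π) ≈ 0.3989); the trapezoid integrator and the truncation point 10 are illustrative choices, not from the text:

```python
import math

def Phi(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def trapezoid(f, a, b, n=20000):
    # Simple composite trapezoid rule on [a, b]
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# Integral of F(x) over (-inf, 0], truncated at -10 (the tail is negligible)
lower_tail = trapezoid(Phi, -10.0, 0.0)
# Integral of 1 - F(x) over [0, inf), truncated at 10
upper_tail = trapezoid(lambda x: 1.0 - Phi(x), 0.0, 10.0)

# Both are finite and, for the standard normal, equal 1/sqrt(2*pi)
print(lower_tail, upper_tail, 1.0 / math.sqrt(2.0 * math.pi))
```

The two integrals are E[X⁻] and E[X⁺] respectively, which is exactly what the suggested integration by parts produces.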
24. (i) Verify the posterior distribution of θ given x claimed in Example 9. (ii) Complete the proof of (32).
23. In Example 9, check directly that the set C = {x : x ≤ −k or x ≥ k} is not a negatively biased semirelevant subset for the confidence intervals (X − c, X + c).
22. In Example 8, (i) the problem remains invariant under G′ but not under G₃; (ii) the statistic D is ancillary …
21. Let V₁, …, Vₙ be independently distributed as N(0, 1), and given V₁ = v₁, …, Vₙ = vₙ, let Xᵢ (i = 1, …, n) be independently distributed as N(θvᵢ, 1). (i) There does not exist a UMP test …
20. Let X₁, …, Xₘ and Y₁, …, Yₙ be positive, independent random variables distributed with densities f(x/σ) and g(y/τ) respectively. If f and g have monotone likelihood ratios in (x, σ) and …
19. Let the real-valued function f be defined on an open interval. (i) If f is log convex, it is convex. (ii) If f is strongly unimodal, it is unimodal.
18. Verify the density (16) of Example 7.
17. Suppose X = (U, Z), the density of X factors into p_{θ,ϑ}(x) = c(θ, ϑ) g_θ(u; z) h_ϑ(z) k(u, z), and the parameters θ, ϑ are unrelated. To see that these assumptions are not enough to ensure that …
16. In the situation of Example 2, the statistic Z remains S-ancillary when the parameter space is Ω = {(λ, μ) : μ ≤ λ}.
15. In the situation of Example 3, X + Y is binomial if and only if Δ = 1.
14. Assuming the distribution (22) of Chapter 4, Section 9, show that Z is S-ancillary for p = p₊/(p₊ + p₋).
13. A sample of size n is drawn with replacement from a population consisting of N distinct unknown values {a₁, …, a_N}. The number of distinct values in the sample is ancillary.
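The ancillarity claim of Problem 13 is easy to see by simulation: sampling with replacement amounts to drawing indices uniformly, so the distribution of the number of distinct values drawn depends only on n and N, not on the actual values a₁, …, a_N. A rough sketch (population size 10, sample size 5, and the trial count are arbitrary illustrative choices):

```python
import random
from collections import Counter

def distinct_count_dist(population, n, trials=20000, seed=0):
    # Empirical distribution of the number of distinct values in a
    # with-replacement sample of size n from the given population
    rng = random.Random(seed)
    counts = Counter(len(set(rng.choices(population, k=n))) for _ in range(trials))
    return {k: v / trials for k, v in sorted(counts.items())}

# Two populations of the same size N = 10 but with entirely different values:
# the two empirical distributions agree up to simulation noise, since the
# statistic depends only on (n, N) and carries no information about the a's.
d1 = distinct_count_dist(list(range(10)), n=5, seed=1)
d2 = distinct_count_dist([100.0 * x - 7.5 for x in range(10)], n=5, seed=2)
print(d1)
print(d2)
```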
12. Let X, Y have joint density p(x, y) = 2f(x)f(y)F(θxy), where f is a known probability density symmetric about 0, and F is its cumulative distribution function. Then (i) p(x, y) is a probability …
11. Let X be uniformly distributed on (θ, θ + 1), 0 < θ < ∞, let [X] denote the largest integer ≤ X, and let V = X − [X]. (i) The statistic V(X) is uniformly distributed on (0, 1) and is therefore …
10. In the preceding problem, suppose the probabilities of 1, …, 6 points are (1 − θ)/6, (1 − 2θ)/6, (1 − 3θ)/6, (1 + θ)/6, (1 + 2θ)/6, (1 + 3θ)/6 respectively. Exhibit two different maximal ancillaries.
9. Consider n tosses with a biased die, for which the probabilities of 1, …, 6 points are (1 − θ)/12, (2 − θ)/12, (3 − θ)/12, (1 + θ)/12, (2 + θ)/12, (3 + θ)/12 respectively, and let Xᵢ be the number of tosses …
8. An experiment with n observations X₁, …, Xₙ is planned, with each Xᵢ distributed as N(θ, 1). However, some of the observations do not materialize (for example, some of the subjects die, move away, …
7. Let X, Y be independently normally distributed as N(θ, 1), and let V = Y − X and W = Y − X if X + Y > 0, W = X − Y if X + Y ≤ 0. (i) Both V and W are ancillary, but neither is a function of the other.
6. In the preceding problem, suppose that the densities of X under ℰ and ℱ are θe^{−θx} and (1/θ)e^{−x/θ} respectively. Compare the UMP conditional and unconditional tests of H : θ = 1 against K : θ > …
5. With known probabilities p and q perform either experiment ℰ or ℱ, with X distributed as N(θ, 1) under ℰ or N(−θ, 1) under ℱ. For testing H : θ = 0 against θ > 0 there exist a UMP unconditional and a UMP …
4. Let X₁, …, Xₙ be independently distributed, each with probability p or q as N(ξ, σ₀²) or N(ξ, σ₁²). (i) If p is unknown, determine the UMP unbiased test of H : ξ = 0 against K : ξ > 0. (ii) Determine …
3. The test given by (3), (8), and (9) is most powerful under the stated assumptions.
2. Under the assumptions of Problem 1, determine the most accurate invariant (under the transformation X′ = −X) confidence sets S(X) with P(ξ ∈ S(X) | ℰ) + P(ξ ∈ S(X) | ℱ) = 2γ. Find examples in …
1. Let the experiments ℰ and ℱ consist in observing X : N(ξ, σ₀²) and X : N(ξ, σ₁²) respectively (σ₀ < σ₁), and let one of the two experiments be performed, with P(ℰ) = P(ℱ) = ½. For testing H : ξ = 0 …
32. The following two examples show that the assumption of a finite sample space is needed in Problem 31. (i) Let X₁, …, Xₙ be i.i.d. according to a normal distribution N(ξ, σ²) and test H : σ = …
31. Locally uniformly most powerful tests. If the sample space is finite and independent of θ, the test φ₀ of Problem 2(i) is not only LMP but also locally uniformly most powerful (LUMP) in the …
30. Suppose in Problem 29(i) the variance σ² is unknown and that the data consist of X₁, …, Xₙ together with an independent random variable S² for which S²/σ² has a χ²-distribution. If K is replaced …
29. Let X₁, …, Xₙ be independent normal with means θ₁, …, θₙ and variance 1. (i) Apply the results of the preceding problem to the testing of H : θ₁ = ⋯ = θₙ = 0 against K : Σθᵢ² = r², for any …
28. To generalize the results of the preceding problem to the testing of H : f₀ against K : {f_θ, θ ∈ ω}, assume: (i) There exists a group G that leaves H and K invariant. (ii) G is transitive over ω. (iii) …
27. For testing H : f₀ against K : {f₁, …, f_s}, suppose there exists a finite group G = {g₁, …, g_N} which leaves H and K invariant and which is transitive in the sense that given fⱼ, fⱼ′ (1 ≤ j, j′ …
26. Show that the UMP invariant test of Problem 24 is most stringent.
25. The UMP invariant test φ₀ of Problem 24 (i) maximizes the minimum power over K; (ii) is admissible. (iii) For testing the hypothesis H of Problem 24 against the alternatives K′ = {K₁, …, Kₙ …
24. Let X₁, …, Xₙ be independent normal variables with variance 1 and means ξ₁, …, ξₙ, and consider the problem of testing H : ξ₁ = ⋯ = ξₙ = 0 against the alternatives K = {K₁, …
22. Let {Ω_Δ} be a class of mutually exclusive sets of alternatives such that the envelope power function is constant over each Ω_Δ and that ∪Ω_Δ = Ω − Ω_H, and let φ_Δ maximize the minimum power over …
21. Existence of most stringent tests. Under the assumptions of Problem 1 there exists a most stringent test for testing θ ∈ Ω_H against θ ∈ Ω − Ω_H.
20. Suppose that the problem of testing θ ∈ Ω_H against θ ∈ Ω_K remains invariant under G, that there exists a UMP almost invariant test φ₀ with respect to G, and that the assumptions of Theorem 3 …
19. Let X = (X₁, …, X_p) and Y = (Y₁, …, Y_p) be independently distributed according to p-variate normal distributions with zero means and covariance matrices E(XᵢXⱼ) = σᵢⱼ and E(YᵢYⱼ) = …
18. (i) In the preceding problem determine the maximin test if ω′ is replaced by Σaᵢξᵢ² ≥ d, where the a's are given positive constants. (ii) Solve part (i) with Var(Xᵢ) = 1 replaced by Var(Xᵢ) = σᵢ² …
17. Let X₁, …, Xₙ be independently normally distributed with means E(Xᵢ) = ξᵢ and variance 1. The test of H : ξ₁ = ⋯ = ξₙ = 0 that maximizes the minimum power over ω′ : Σξᵢ² ≥ d rejects when ΣXᵢ² ≥ C.
16. Write out a formal proof of the maximin property outlined in the last paragraph of Section 3.
15. Determine whether (21) remains the maximin test if in the model (20) Gᵢ is replaced by Gᵢⱼ.
14. Evaluate the test (21) explicitly for the case that Pᵢ is the normal distribution with mean ξᵢ and known variance σ², and when ε₀ = ε₁.
13. Show that if 𝒫₀ ≠ 𝒫₁ and ε₀, ε₁ are sufficiently small, then Q₀ ≠ Q₁.
12. Prove the formula (15).
11. Show that there exists a unique constant b for which q₀ defined by (11) is a probability density with respect to μ, that the resulting q₀ belongs to 𝒫₀, and that b → ∞ as ε₀ → 0.
10. If (13) holds, show that q₁ defined by (11) belongs to 𝒫₁.
9. Double-exponential distribution. Let X₁, …, Xₙ be a sample from the double-exponential distribution with density ½e^{−|x−θ|}. The LMP test for testing θ ≤ 0 against θ > 0 is the sign test, …
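The sign test of Problem 9 can be sketched directly: at θ = 0 the score ∂/∂θ log(½e^{−|x−θ|}) is sign(x), so the locally most powerful test rejects for a large count of positive observations. A minimal illustration with an exact binomial p-value; the sample size 50 and the shift 1.5 are arbitrary choices, not from the text:

```python
import math
import random

def sign_test_pvalue(xs):
    # One-sided sign test of H: theta <= 0 against K: theta > 0.
    # S = #{i : x_i > 0} is Bin(n, 1/2) under theta = 0, so the
    # p-value is P(Bin(n, 1/2) >= S).
    n = len(xs)
    s = sum(1 for x in xs if x > 0)
    return sum(math.comb(n, k) for k in range(s, n + 1)) / 2.0 ** n

rng = random.Random(0)
# Double-exponential (Laplace) sample centred at theta = 0
null_sample = [rng.expovariate(1.0) * rng.choice([-1.0, 1.0]) for _ in range(50)]
p_null = sign_test_pvalue(null_sample)
# The same sample shifted to theta = 1.5: the p-value collapses
p_alt = sign_test_pvalue([x + 1.5 for x in null_sample])
print(p_null, p_alt)
```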
8. (i) Let X have binomial distribution b(p, n), and consider testing H : p = p₀ at level α against the alternatives Ω_K : p/q ≤ ½p₀/q₀ or p/q ≥ 2p₀/q₀. For α = .05 determine the smallest sample size for …
7. Let x = (x₁, …, xₙ), and let g_θ(x, ξ) be a family of probability densities depending on θ = (θ₁, …, θ_r) and the real parameter ξ, and jointly measurable in x and ξ. For each θ, let h_θ(ξ) be a …
6. Let f_θ(x) = θg(x) + (1 − θ)h(x) with 0 ≤ θ ≤ 1. Then f_θ(x) satisfies the assumptions of Lemma 1 provided g(x)/h(x) is a nondecreasing function of x.
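The monotone-likelihood-ratio behaviour behind Problem 6 can be verified numerically for a concrete pair of densities. A small sketch, assuming g(x) = 2x and h(x) = 1 on (0, 1) (so g/h = 2x is nondecreasing); these densities and the parameter values 0.3 < 0.8 are illustrative choices only:

```python
def g(x):
    # Density 2x on (0, 1); g(x)/h(x) = 2x is nondecreasing
    return 2.0 * x

def h(x):
    # Uniform density on (0, 1)
    return 1.0

def f(theta, x):
    # Mixture density f_theta = theta*g + (1 - theta)*h
    return theta * g(x) + (1.0 - theta) * h(x)

# Check that f_theta1 / f_theta0 is nondecreasing in x for theta1 > theta0,
# i.e. the mixture family has monotone likelihood ratio in x
grid = [i / 1000.0 for i in range(1, 1000)]
ratios = [f(0.8, x) / f(0.3, x) for x in grid]
is_mlr = all(r2 >= r1 - 1e-12 for r1, r2 in zip(ratios, ratios[1:]))
print(is_mlr)
```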
5. Let Z₁, …, Zₙ be identically independently distributed according to a continuous distribution D, of which it is assumed only that it is symmetric about some (unknown) point. For testing the …
4. Let the distribution of X depend on the parameters (θ, ϑ) = (θ₁, …, θ_r, ϑ₁, …, ϑ_s). A test of H : θ = θ⁰ is locally strictly unbiased if for each ϑ, (a) β_φ(θ⁰, ϑ) = α, (b) …
3. A level-α test φ₀ is locally unbiased (loc. unb.) if there exists Δ₀ > 0 such that …, and if, given any other loc. unb. level-α test …
2. Locally most powerful tests. Let d be a measure of the distance of an alternative θ from a given hypothesis H. A level-α test φ₀ is said to be locally most powerful (LMP) if, given any other …
1. Existence of maximin tests. Let (𝒳, 𝒜) be a Euclidean sample space, and let the distributions P_θ, θ ∈ Ω, be dominated by a σ-finite measure over (𝒳, 𝒜). For any mutually exclusive subsets …
47. Bayes character and admissibility of Hotelling's T². (i) Let (X_{α1}, …, X_{αp}), α = 1, …, n, be a sample from a p-variate normal distribution with unknown mean ξ = (ξ₁, …, ξ_p) and covariance matrix …
46. The UMP invariant test of independence in part (ii) of the preceding problem is asymptotically robust against nonnormality.
45. Testing for independence. Let X = (X_{αi}), i = 1, …, p, α = 1, …, N, be a sample from a p-variate normal distribution; let q < p, max(q, p − q) ≤ N; and consider the hypothesis H that (X₁, …
44. In generalization of Problem 8 of Chapter 7, let (X_{ν1}, …, X_{νp}), ν = 1, …, n, be independent normal p-vectors with common covariance matrix and with means …
43. Consider the third of the three sampling schemes for a 2 × 2 × K table discussed in Chapter 4, Section 8, and the two hypotheses H₁ : Δ₁ = ⋯ = Δ_K = 1 and H₂ : Δ₁ = ⋯ = Δ_K. (i) …
42. In the situation of the preceding problem, consider the hypothesis of marginal homogeneity H′ : p_{i+} = p_{+i} for all i, where p_{i+} = Σⱼ p_{ij} and p_{+j} = Σᵢ p_{ij}. (i) The maximum-likelihood estimates of …
41. The hypothesis of symmetry in a square two-way contingency table arises when one of the responses A₁, …, A_a is observed for each of N subjects on two occasions (e.g. before and after some …
40. In the situation of Example 7, consider the following model in which the row margins are fixed and which therefore generalizes model (iii) of Chapter 4, Section 7. A sample of nᵢ subjects is …
39. In Example 7, show that the maximum-likelihood estimators p̂ᵢⱼ, p̂ᵢ, and p̂ⱼ are as stated.
38. In the multinomial model (38), the maximum-likelihood estimators p̂ᵢ of the pᵢ's are p̂ᵢ = xᵢ/n. [The following are two methods for proving this result: (i) Maximize log P(x₁, …, xₘ) subject …
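The closed form p̂ᵢ = xᵢ/n of Problem 38 can be spot-checked by comparing the log-likelihood at p̂ with many random points of the probability simplex; p̂ should never be beaten. The category counts below are hypothetical:

```python
import math
import random

def log_lik(p, x):
    # Multinomial log-likelihood, dropping the constant multinomial coefficient
    return sum(xi * math.log(pi) for xi, pi in zip(x, p) if xi > 0)

x = [12, 7, 25, 6]            # hypothetical observed category counts
n = sum(x)
p_hat = [xi / n for xi in x]  # claimed MLE: p_i = x_i / n

# Random points of the simplex via normalized exponential draws
rng = random.Random(42)
best_rival = -float("inf")
for _ in range(10000):
    gs = [rng.expovariate(1.0) for _ in x]
    s = sum(gs)
    best_rival = max(best_rival, log_lik([gv / s for gv in gs], x))

print(log_lik(p_hat, x), best_rival)
```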
37. Let the equation of the tangent Π̇ at π be pᵢ = πᵢ(1 + a_{i1}ξ₁ + ⋯ + a_{is}ξ_s), and suppose that the vectors (a_{i1}, …, a_{is}) are orthogonal in the sense that Σᵢ a_{ik}a_{il}πᵢ = 0 for all k ≠ l.
36. Let X₁, …, Xₙ be i.i.d. with cumulative distribution function F, let a₁ < ⋯ < a_{m−1} be any given real numbers, and let a₀ = −∞, a_m = ∞. If nᵢ is the number of X's in (a_{i−1}, a_i), the …
35. The problem of testing the hypothesis H : η ∈ Π_ω against η ∈ Π_Ω − Π_ω, when the distribution of Y is given by (34), remains invariant under a suitable group of linear transformations, and with …
34. Consider the s-sample situation in which (X_{ν1}^{(k)}, …, X_{νp}^{(k)}), ν = 1, …, n_k, k = 1, …, s, are independent normal p-vectors with common covariance matrix Σ and with means (ξ₁^{(k)}, …
33. Write the simultaneous confidence sets (23) as explicitly as possible for the following cases: (i) The one-sample problem of Section 3 with ψᵢ = … (i = 1, …, p). (ii) The two-sample problem …
32. Under the assumptions made at the beginning of Section 6, show that the confidence intervals (33) (i) are uniformly most accurate unbiased, (ii) are uniformly most accurate equivariant, and (iii) …
31. Prove that each of the sets of simultaneous confidence intervals (29) and (31) is smallest among all families that are equivariant under a suitable group of transformations.
30. The only simultaneous confidence sets for all u′ηv, u ∈ U, v ∈ V, that are equivariant under the groups G₁–G₃ of the text are those given by (28).
29. Consider the special case of the preceding problem in which a = b = 1, and let u′ = (u₁, …, u_p) and v′ = (v₁, …, v_p). Then for testing H₀ : u′η*v = 0 there exists a UMP invariant test …
28. Let X be an n × p data matrix satisfying the model assumptions made at the beginning of Sections 1 and 5, and let X* = CX, where C is an orthogonal matrix, the first s rows of which span Π_Ω. If …
27. As a different generalization, let (X_{λν1}, …, X_{λνp}) be independent vectors, each having a p-variate normal distribution with common covariance matrix and with expectation E(X_{λνi}) = μ^{(i)} + …
26. Generalize both parts of the preceding problem to the two-group case in which X_λ (λ = 1, …, n₁) and X_μ (μ = 1, …, n₂) are n₁ + n₂ independent vectors, each having an ab-variate normal …
25. Let X_{νij} (i = 1, …, a; j = 1, …, b), ν = 1, …, n, be n independent vectors, each having an ab-variate normal distribution with covariance matrix Σ and with means given by E(X_{νij}) = μ …
24. The assumptions of Theorem 6 of Chapter 6 are satisfied for the group (19) applied to the hypothesis H : η = 0 of Section 5.
23. The probability of a type-I error for each of the tests of the preceding problem is robust against nonnormality: in case (i) as b → ∞; in case (ii) as mb → ∞; in case (iii) as m → ∞ …
22. Give explicit expressions for the elements of V and S in the multivariate analogues of the following situations: (i) The hypothesis (34) in the two-way layout (32) of Chapter 7. (ii) The …
21. Let (X_{α1}^{(k)}, …, X_{αp}^{(k)}), α = 1, …, n_k, k = 1, …, s, be samples from p-variate distributions F(x₁ − ξ₁^{(k)}, …, x_p − ξ_p^{(k)}) with finite covariance …
20. Let (X_{α1}, …, X_{αp}), α = 1, …, n, be independently distributed according to p-variate distributions F(x_{α1} − ξ_{α1}, …, x_{αp} − ξ_{αp}) with finite covariance matrix Σ, and suppose the ξ's satisfy …
19. (i) If (13) has only one nonzero root, then B is of rank 1. In canonical form B = ηη′, and there then exists a vector (a₁, …, a_p) and constants c₁, …, cₙ such that (65) …
18. Under the assumptions of Problem 17, show that ∏ᵢ 1/(1 + λᵢ) = |S| / |V + S|. [The determinant of a matrix is equal to the product of its characteristic roots.]
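The determinant identity in Problem 18 (Wilks' Λ expressed through the roots λᵢ of |V − λS| = 0 from Problem 17) can be checked with a small pure-Python example. The 2 × 2 matrices below are arbitrary illustrative choices; the roots come from expanding the determinant into a quadratic in λ:

```python
import math

def det2(m):
    # Determinant of a 2 x 2 matrix
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

V = [[2.0, 1.0], [1.0, 3.0]]  # hypothetical "hypothesis" matrix
S = [[4.0, 1.0], [1.0, 2.0]]  # hypothetical nonsingular "error" matrix

# |V - lam*S| expands to a*lam^2 + b*lam + c
a = det2(S)
b = -(V[0][0] * S[1][1] + V[1][1] * S[0][0]
      - V[0][1] * S[1][0] - V[1][0] * S[0][1])
c = det2(V)
disc = math.sqrt(b * b - 4.0 * a * c)
lams = [(-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a)]

# Product of 1/(1 + lam_i) versus |S| / |V + S|
prod = 1.0
for lam in lams:
    prod *= 1.0 / (1.0 + lam)
VplusS = [[V[i][j] + S[i][j] for j in range(2)] for i in range(2)]
ratio = det2(S) / det2(VplusS)
print(prod, ratio)
```

The agreement follows from |V + S| = |S| · |S⁻¹V + I| = |S| · ∏(1 + λᵢ), which is the hint's product-of-roots argument.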
17. Let V and S be p × p matrices, V of rank a ≤ p and S nonsingular, and let λ₁, …, λ_a denote the nonzero roots of |V − λS| = 0. Then (i) μᵢ = 1/(1 + λᵢ), i = 1, …, a, are the a smallest …
16. Verify the elements of V and S given by (14) and (15).
15. Suppose X_{νi} = ξ_{νi} + U_{νi}, where the ξ_{νi} are given by (62) and where (U_{ν1}, …, U_{νp}), ν = 1, …, n, is a sample from a p-variate distribution with mean 0 and covariance matrix Σ. The size …
14. Let (Y_{ν1}, …, Y_{νp}), ν = 1, …, n, be a sample from a p-variate distribution F with mean zero and covariance matrix Σ, and let Z_i^{(ν)} = …
13. Simple multivariate regression. In the model of Section 1 with (62) ξ_{νi} = αᵢ + βᵢtᵥ (ν = 1, …, n; i = 1, …, p), the UMP invariant test of H : β₁ = ⋯ = β_p = 0 is given by (6) and (9), …
12. Inversion of the two-sample test based on (12) leads to confidence ellipsoids for the vector (ξ₁⁽²⁾ − ξ₁⁽¹⁾, …, ξ_p⁽²⁾ − ξ_p⁽¹⁾) which are uniformly most accurate equivariant under the groups G₁–G₃ …
11. The two-sample test based on (12) is robust against heterogeneity of covariances as n₁ and n₂ → ∞ when n₁/n₂ → 1, but not in general.
10. The two-sample test based on (12) is robust against nonnormality as n₁ and n₂ → ∞.
9. The confidence ellipsoids (11) for (ξ₁, …, ξ_p) are equivariant under the groups G₁–G₃ of Section 2.
Showing 1–100 of 3033