Questions and Answers of Nonparametric Statistical Inference
5.17. (Sec. 5.6.2) Show that $z'B^{-1}z$ is a convex function of $(z, B)$, where $B$ is a positive definite matrix. [Hint: Use Problem 5.16.]
5.16. (Sec. 5.6.2) Let $g(t) = f[t y_1 + (1-t) y_2]$, where $f(y)$ is a real-valued function of the vector $y$. Prove that if $g(t)$ is convex, then $f(y)$ is convex.
5.15. (Sec. 5.6.2) $T^2$-test as a Bayes procedure [Kiefer and Schwartz (1965)]. Let $x_1, \ldots, x_N$ be independently distributed, each according to $N(\mu, \Sigma)$. Let $\Pi_0$ be defined by $[\mu, \Sigma] = [0, \ldots]$ …
5.14. (Sec. 5.5) Use the data of Problem 4.41 to test the hypothesis that the mean head length and breadth of first sons are equal to those of second sons at significance level 0.01.
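This is a paired-comparison Hotelling $T^2$ test on the differences between first-son and second-son measurements. A minimal Python sketch, assuming the Problem 4.41 data have been loaded into two hypothetical $N \times 2$ arrays `first` and `second` (columns: head length, head breadth):

```python
import numpy as np
from scipy import stats

def paired_hotelling_t2(first, second, alpha=0.01):
    """Test H0: E[first - second] = 0 via Hotelling's T^2 on the differences."""
    d = first - second                     # N x p matrix of paired differences
    N, p = d.shape
    dbar = d.mean(axis=0)                  # mean difference vector
    S = np.cov(d, rowvar=False)            # unbiased covariance of differences
    t2 = N * dbar @ np.linalg.solve(S, dbar)
    f_stat = t2 * (N - p) / (p * (N - 1))  # T^2 rescaled to an F_{p, N-p} statistic
    f_crit = stats.f.ppf(1 - alpha, p, N - p)
    return t2, f_stat, f_stat > f_crit
```

Rejection at level 0.01 occurs when the rescaled statistic exceeds the upper $F_{p, N-p}$ point.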
5.13. (Sec. 5.3) Prove the statement in Section 5.3.6 that the $T^2$-statistic is independent of the choice of $C$.
5.12. (Sec. 5.3) Using the data in Section 3.2, give a confidence region for $\mu$ with confidence coefficient 0.95.
5.11. (Sec. 5.3) Use the data in Section 3.2 to test the hypothesis that neither drug has a soporific effect at significance level 0.01.
5.10. (Sec. 5.2.2) From Problems 5.5-5.9, verify Corollary 5.2.1.
5.9. (Sec. 5.2.2) Verify that $r = s/(1-s)$ multiplied by $(N-1)/1$ has the noncentral $F$-distribution with 1 and $N-1$ degrees of freedom and noncentrality parameter $N\tau^2$.
5.8. (Sec. 5.2.2) Prove that $w$ has the distribution of the square of a multiple correlation between one vector and $p-1$ vectors in $(N-1)$-space without subtracting means; that is, it has …
5.7. (Sec. 5.2.2) Let … Prove that $U = s + (1-s)w$, where … [Hint: $\mathscr{E}V = V^*$, where …]
5.6. (Sec. 5.2.2) Let $U = [T^2/(N-1)]/[1 + T^2/(N-1)]$. Show that $U = \gamma V'(VV')^{-1}V\gamma'$, where $\gamma = (1/\sqrt{N}, \ldots, 1/\sqrt{N})$ and $V = \ldots$
5.5. (Sec. 5.2.2) Let $T^2 = N\bar{x}'S^{-1}\bar{x}$, where $\bar{x}$ and $S$ are the mean vector and covariance matrix of a sample of $N$ from $N(\mu, \Sigma)$. Show that $T^2$ is distributed the same when $\mu$ is replaced by …
5.4. (Sec. 5.2.2) Use Problems 5.2 and 5.3 to show that $[T^2/(N-1)][(N-p)/p]$ has the $F_{p,\,N-p}$-distribution (under the null hypothesis). [Note: This is the analysis that corresponds to Hotelling's …]
5.3. (Sec. 5.2.2) Let …, where $u_1, \ldots, u_N$ are $N$ numbers and $x_1, \ldots, x_N$ are independent, each with the distribution $N(0, \Sigma)$. Prove that the distribution of $R^2/(1-R^2)$ is independent of …
5.2. (Sec. 5.2.2) Show that $T^2/(N-1)$ can be written as $R^2/(1-R^2)$ with the correspondences given in Table 5.1 (quantities of Section 5.2 matched to those of Section 4.4).
5.1. (Sec. 5.2) Let $x_\alpha$ be distributed according to $N(\mu + \beta(z_\alpha - \bar z), \Sigma)$, $\alpha = 1, \ldots, N$, where $\bar z = (1/N)\sum_\alpha z_\alpha$. Let $b = [1/\sum_\alpha (z_\alpha - \bar z)^2]\sum_\alpha x_\alpha(z_\alpha - \bar z)$, $(N-2)S = \sum_\alpha [x_\alpha - \bar x - b(z_\alpha - \bar z)][x_\alpha - \bar x - b(z_\alpha - \bar z)]'$, and …
4.49. Suppose $X$ is distributed according to $N(0, \Sigma)$, where … Show that on the basis of one observation, $x' = (x_1, x_2, x_3)$, we can obtain a confidence interval for $\rho$ (with confidence coefficient $1 - \alpha$) …
4.48. Missing observations. Let $X = (Y', Z')'$, where $Y$ has $p$ components and $Z$ has $q$ components, be distributed according to $N(\mu, \Sigma)$, where … Let $M$ observations be made on $X$, and $N - M$ additional …
4.47. (Sec. 4.3) Using the results in Problems 4.43-4.46, prove that the test for $\rho_{12\cdot 3,\ldots,p} = 0$ is equivalent to the usual $t$-test for $\gamma_2 = 0$.
4.46. (Sec. 4.3) Prove that $1/a_{22\cdot 3,\ldots,p}$ is the element in the upper left-hand corner of $\begin{pmatrix} a_{22} & a_{(2)}' \\ a_{(2)} & A_{22} \end{pmatrix}^{-1}$.
4.45. (Sec. 4.3) In the notation of Problem 4.44, prove … [Hint: Use $a_{11\cdot 2,\ldots,p} = \ldots$]
4.44. (Sec. 4.3) Let $X' = (X_1, X_2, X^{(2)\prime})$ have the distribution $N(\mu, \Sigma)$. The conditional distribution of $X_1$ given $X_2 = x_2$ and $X^{(2)} = x^{(2)}$ is … Show $c_2 = \sigma_{12\cdot 3,\ldots,p}/\sigma_{22\cdot 3,\ldots,p}$. [Hint: Solve for …]
4.43. (Sec. 4.3) Prove that if $\rho_{ij\cdot q+1,\ldots,p} = 0$, then $\sqrt{N-2-(p-q)}\;r_{ij\cdot q+1,\ldots,p}\big/\sqrt{1 - r_{ij\cdot q+1,\ldots,p}^2}$ is distributed according to the $t$-distribution with $N-2-(p-q)$ degrees of freedom.
4.42. Let the components of $X$ correspond to scores on tests in arithmetic speed ($X_1$), arithmetic power ($X_2$), memory for words ($X_3$), memory for meaningful symbols ($X_4$), and memory for meaningless symbols ($X_5$) …
4.41. The estimates of $\mu$ and $\Sigma$ in Problem 3.1 are … (a) Find the estimates of the parameters of the conditional distribution of $(x_3, x_4)$ given $(x_1, x_2)$; that is, find $S_{21}S_{11}^{-1}$ and $S_{22\cdot 1} = S_{22} - \ldots$
4.40. (Sec. 4.4) Prove that (47) is the unique unbiased estimator of $\bar R^2$ based on $R^2$.
4.39. (Sec. 4.4) Prove that (30) is the uniformly most powerful test of $\bar R = 0$ based on $R$. [Hint: Use the Neyman-Pearson fundamental lemma.]
4.38. (Sec. 4.4) Show that the density of $r^2$ derived from (38) of Section 4.2 is identical with (42) in Section 4.4 for $p = 2$. [Hint: Use the duplication formula for the gamma function.]
4.37. (Sec. 4.4) Find the distribution of $R^2/(1-R^2)$ by multiplying the density of Problem 4.35 by the density of $a_{11}$ and integrating with respect to $a_{11}$.
4.36. (Sec. 4.4) Prove that the noncentrality parameter in the distribution in Problem 4.35 is $(a_{11}/\sigma_{11})\bar R^2/(1 - \bar R^2)$.
4.35. (Sec. 4.4) Prove that conditional on $Z_{1\alpha} = z_{1\alpha}$, $\alpha = 1, \ldots, n$, $R^2/(1-R^2)$ is distributed like $T^2/(N^*-1)$, where $T^2 = N^*\bar{x}'S^{-1}\bar{x}$ is based on $N^* = n$ observations on a vector $X$ with $p^* = p - 1$ …
4.34. (Sec. 4.4) Invariance of the sample multiple correlation coefficient. Prove that $R$ is a function of the sufficient statistics $\bar x$ and $S$ that is invariant under changes of location and scale of $x$ …
4.33. (Sec. 4.3) Invariance of the sample partial correlation coefficient. Prove that $r_{12\cdot 3,\ldots,p}$ is invariant under the transformations $x_{i\alpha}^* = a_i x_{i\alpha} + b_i' x_\alpha^{(2)} + c_i$, $a_i > 0$, $i = 1, 2$, $x_\alpha^{(2)*} = C x_\alpha^{(2)} \ldots$
4.32. (Sec. 4.3) Show that the inequality $r_{12\cdot 3}^2 \le 1$ is the same as the inequality $|r_{ij}| \ge 0$, where $|r_{ij}|$ denotes the determinant of the $3 \times 3$ correlation matrix.
4.31. (Sec. 4.3.2) Use Fisher's $z$ to test the hypothesis $\rho_{12\cdot 34} = 0$ against alternatives $\rho_{12\cdot 34} \ne 0$ at significance level 0.01 with $r_{12\cdot 34} = 0.14$ and $N = 40$.
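For a partial correlation conditioned on $k$ variables, Fisher's $z = \tanh^{-1} r$ is approximately $N(0, 1/(N-k-3))$ under the null hypothesis. A sketch of the computation for this problem ($k = 2$ conditioning variables):

```python
import numpy as np
from scipy.stats import norm

r, N, k, alpha = 0.14, 40, 2, 0.01          # r_{12.34}, sample size, conditioned vars
stat = np.arctanh(r) * np.sqrt(N - k - 3)   # approx N(0,1) under rho_{12.34} = 0
p_value = 2 * norm.sf(abs(stat))            # two-sided alternative
print(stat, p_value, p_value < alpha)       # ~0.83, ~0.40: do not reject at 0.01
```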
4.30. (Sec. 4.3.2) Find a confidence interval for $\rho_{13\cdot 2}$ with confidence 0.95 based on $r_{13\cdot 2} = 0.097$ and $N = 20$.
4.29. (Sec. 4.2) Show that $\sqrt{n}\,(r_{ij} - \rho_{ij})$, $(i, j) = (1,2), (1,3), (2,3)$, have a joint limiting distribution with variances $(1 - \rho_{ij}^2)^2$ and covariances of $r_{ij}$ and $r_{ik}$, $j \ne k$, being $\tfrac{1}{2}(2\rho_{jk} - \rho_{ij}\rho_{ik})\ldots$
4.28. (Sec. 4.2) Prove … [Hint: Use Problem 4.26 and the duplication formula for the gamma function.]
4.27. (Sec. 4.2) The $t$-distribution. Prove that if $X$ and $Y$ are independently distributed, $X$ having the distribution $N(0, 1)$ and $Y$ having the $\chi^2$-distribution with $m$ degrees of freedom, then $W = X/\sqrt{Y/m}$ …
4.26. (Sec. 4.2) Prove for integer $h$: …
4.25. (Sec. 4.2) Prove that (40) is the density of $r$. [Hint: In (31) let $a_{11} = ue^{-v}$ and $a_{22} = ue^{v}$; show that the density of $v$ $(0 \le v < \infty)$ is … Show that the integral is (40).]
4.24. (Sec. 4.2) Prove that (39) is the density of $r$. [Hint: From Problem 2.12 show … Finally show that the integral of (31) with respect to $a_{11}$ $(= y^2)$ and $a_{22}$ $(= z^2)$ is (39).]
4.23. (Sec. 4.2.2) Prove that the density of the sample correlation $r$ [given by (38)] is … [Hint: Expand $(1 - \rho r x)^{-n}$ in a power series, integrate, and use the duplication formula for the gamma function.]
4.22. (Sec. 4.2.2) Prove $f_1(\rho)$ and $f_2(\rho)$ are monotonically increasing functions of $\rho$.
4.21. (Sec. 4.2.1) Prove that if $\rho = 0$, then $\mathscr{E}\,r^{2m} = \Gamma[\tfrac{1}{2}(N-1)]\,\Gamma(m + \tfrac{1}{2})\big/\{\Gamma(\tfrac{1}{2})\,\Gamma[\tfrac{1}{2}(N-1) + m]\}$.
4.20. (Sec. 4.2) Prove that if $\Sigma$ is diagonal, then the sets $\{r_{ij}\}$ and $\{a_{ii}\}$ are independently distributed. [Hint: Use the facts that $r_{ij}$ is invariant under scale transformations and that the density of …]
4.19. (Sec. 4.2) Prove $r$ has a monotone likelihood ratio for $r > 0$, $\rho > 0$ by proving $h(r) = k_N(r, \rho_1)/k_N(r, \rho_2)$ is monotonically increasing for $\rho_1 > \rho_2$. Here $h(r)$ is a constant times …
4.18. (Sec. 4.2) Show that of all tests of $\rho = \rho_0$ against $\rho > \rho_0$ based on $r$, a procedure for which $r > c$ implies rejection is uniformly most powerful.
4.17. (Sec. 4.2) Show that of all tests of $\rho_0$ against a specific $\rho_1$ $(> \rho_0)$ based on $r$, the procedures for which $r > c$ implies rejection are the best. [Hint: This follows from Problem 4.16.]
4.16. (Sec. 4.2) Let $k_N(r, \rho)$ be the density of the sample correlation coefficient $r$ for a given value of $\rho$ and $N$. Prove that $r$ has a monotone likelihood ratio; that is, show that if $\rho_1 > \rho_2$, then …
4.15. (Sec. 4.2.2) Prove that when $N = 2$ and $\rho = 0$, $\Pr\{r = 1\} = \Pr\{r = -1\} = \tfrac{1}{2}$.
4.14. (Sec. 4.2.3) Use Fisher's $z$ to obtain a confidence interval for $\rho$ with confidence 0.95 based on a sample correlation of 0.65 and a sample size of 25.
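A sketch of the standard Fisher-$z$ interval: transform $r$, build a normal interval with standard error $1/\sqrt{N-3}$, and map the endpoints back with $\tanh$:

```python
import numpy as np
from scipy.stats import norm

r, N, conf = 0.65, 25, 0.95
z = np.arctanh(r)                          # Fisher's z-transform of r
half = norm.ppf((1 + conf) / 2) / np.sqrt(N - 3)
lo, hi = np.tanh(z - half), np.tanh(z + half)
print(f"{conf:.0%} CI for rho: ({lo:.3f}, {hi:.3f})")   # about (0.34, 0.83)
```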
4.13. (Sec. 4.2.3) Use Fisher's $z$ to estimate $\rho$ based on sample correlations of $-0.7$ ($N = 30$) and of $-0.6$ ($N = 40$).
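A common way to pool independent sample correlations (a sketch, not necessarily the book's exact recipe) is an inverse-variance weighted average on the $z$ scale, with weights $N_i - 3$, transformed back at the end:

```python
import numpy as np

rs = np.array([-0.7, -0.6])                # sample correlations
Ns = np.array([30, 40])                    # sample sizes
w = Ns - 3                                 # inverse-variance weights for Fisher z
z_pooled = np.sum(w * np.arctanh(rs)) / np.sum(w)
print(np.tanh(z_pooled))                   # pooled estimate, about -0.65
```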
4.12. (Sec. 4.2.3) Use Fisher's $z$ to test the hypothesis $\rho_1 = \rho_2$ against the alternatives $\rho_1 \ne \rho_2$ at the 0.01 level with $r_1 = 0.5$, $N_1 = 40$, $r_2 = 0.6$, $N_2 = 40$.
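Equality of two correlations is tested by comparing the difference of the Fisher $z$ values with its approximate standard error $\sqrt{1/(N_1-3) + 1/(N_2-3)}$. A sketch:

```python
import numpy as np
from scipy.stats import norm

r1, N1, r2, N2, alpha = 0.5, 40, 0.6, 40, 0.01
se = np.sqrt(1 / (N1 - 3) + 1 / (N2 - 3))
stat = (np.arctanh(r1) - np.arctanh(r2)) / se
p_value = 2 * norm.sf(abs(stat))
print(stat, p_value, p_value < alpha)      # ~-0.62, ~0.54: do not reject
```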
4.11. (Sec. 4.2.3) Use Fisher's $z$ to test the hypothesis $\rho = 0.7$ against alternatives $\rho \ne 0.7$ at the 0.05 level with $r = 0.5$ and $N = 50$.
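The one-sample version centers the observed $z$ at $\tanh^{-1}\rho_0$ and scales by $\sqrt{N-3}$; a sketch:

```python
import numpy as np
from scipy.stats import norm

r, rho0, N, alpha = 0.5, 0.7, 50, 0.05
stat = (np.arctanh(r) - np.arctanh(rho0)) * np.sqrt(N - 3)
p_value = 2 * norm.sf(abs(stat))           # two-sided alternative rho != 0.7
print(stat, p_value, p_value < alpha)      # ~-2.18, ~0.029: reject at 0.05
```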
4.10. (Sec. 4.2.2) Suppose $N = 10$, $r = 0.795$. Find a one-sided confidence interval for $\rho$ [of the form $(r_0, 1)$] with confidence coefficient 0.95.
4.9. (Sec. 4.2.2) Using the data of Problem 3.1, find a (two-sided) confidence interval for $\rho_{12}$ with confidence coefficient 0.99.
4.8. (Sec. 4.2.2) Tabulate the power function at $\rho = -1(0.2)1$ for the tests in Problem 4.6. Sketch the graph of each power function.
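The power functions can be approximated through Fisher's $z$: under an alternative $\rho$, the statistic $(\tanh^{-1} r - \tanh^{-1}\rho_0)\sqrt{N-3}$ is roughly normal with unit variance and mean $(\tanh^{-1}\rho - \tanh^{-1}\rho_0)\sqrt{N-3}$. A sketch tabulating the two-sided power for the test of Problem 4.6 at the interior grid points (the endpoints $\rho = \pm 1$ give power 1):

```python
import numpy as np
from scipy.stats import norm

rho0, N, alpha = 0.6, 20, 0.01             # two-sided test from Problem 4.6
crit = norm.ppf(1 - alpha / 2)
for rho in np.arange(-0.8, 0.81, 0.2):     # interior of the grid -1(0.2)1
    shift = (np.arctanh(rho) - np.arctanh(rho0)) * np.sqrt(N - 3)
    power = norm.sf(crit - shift) + norm.cdf(-crit - shift)
    print(f"rho = {rho:+.1f}: power ~ {power:.3f}")
```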
4.7. (Sec. 4.2.2) Tabulate the power function at $\rho = -1(0.2)1$ for the tests in Problem 4.5. Sketch the graph of each power function.
4.6. (Sec. 4.2.2) Find significance points for testing $\rho = 0.6$ at the 0.01 level with $N = 20$ observations against alternatives (a) $\rho \ne 0.6$, (b) $\rho > 0.6$, and (c) $\rho < 0.6$.
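Approximate significance points can be read off the Fisher-$z$ scale and mapped back through $\tanh$; a sketch using the normal approximation rather than exact tables:

```python
import numpy as np
from scipy.stats import norm

rho0, N, alpha = 0.6, 20, 0.01
z0, se = np.arctanh(rho0), 1 / np.sqrt(N - 3)
# (a) two-sided: reject if r falls outside this interval
print(np.tanh(z0 - norm.ppf(1 - alpha / 2) * se),
      np.tanh(z0 + norm.ppf(1 - alpha / 2) * se))
# (b) reject rho = 0.6 in favor of rho > 0.6 if r exceeds:
print(np.tanh(z0 + norm.ppf(1 - alpha) * se))
# (c) reject in favor of rho < 0.6 if r is below:
print(np.tanh(z0 - norm.ppf(1 - alpha) * se))
```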
4.5. (Sec. 4.2.1) Find the significance points for testing $\rho = 0$ at the 0.01 level with $N = 15$ observations against alternatives (a) $\rho \ne 0$, (b) $\rho > 0$, and (c) $\rho < 0$.
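For $\rho = 0$ the points are exact, since $r\sqrt{N-2}/\sqrt{1-r^2}$ has the $t_{N-2}$-distribution; inverting that relation gives the critical $r$. A sketch:

```python
import numpy as np
from scipy.stats import t

N, alpha = 15, 0.01

def r_crit(q):
    """Critical r: invert t = r*sqrt(N-2)/sqrt(1-r^2) at t-quantile q."""
    tq = t.ppf(q, N - 2)
    return tq / np.sqrt(N - 2 + tq**2)

print("(a) two-sided: |r| >", r_crit(1 - alpha / 2))
print("(b) one-sided: r >", r_crit(1 - alpha))
print("(c) one-sided: r <", r_crit(alpha))
```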
4.4. (Sec. 4.2.2) Suppose a sample correlation of 0.65 is observed in a sample of 20. Test the hypothesis that the population correlation is 0.4 against the alternatives that the population …
4.3. (Sec. 4.2.1) Suppose a sample correlation of 0.65 is observed in a sample of 10. Test the hypothesis of independence against the alternatives of positive correlation at significance level 0.05.
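The same $t$-statistic applies directly under independence; a sketch of the one-sided test:

```python
import numpy as np
from scipy.stats import t

r, N, alpha = 0.65, 10, 0.05
stat = r * np.sqrt(N - 2) / np.sqrt(1 - r**2)
p_value = t.sf(stat, N - 2)                # one-sided: positive correlation
print(stat, p_value, p_value < alpha)      # ~2.42, ~0.021: reject independence
```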
4.2. (Sec. 4.2.1) Using the data of Problem 3.1, test the hypothesis that $X_1$ and $X_2$ are independent against all alternatives of dependence at significance level 0.01.
4.1. (Sec. 4.2.1) Sketch $k_N(r) = \Gamma[\tfrac{1}{2}(N-1)]\,(1-r^2)^{(N-4)/2}\big/\{\Gamma(\tfrac{1}{2})\,\Gamma[\tfrac{1}{2}(N-2)]\}$ for (a) $N = 3$, (b) $N = 4$, (c) $N = 5$, and (d) $N = 10$.
3.24. (Sec. 3.2) Covariance matrices with linear structure [Anderson (1969)]. Let …, where $G_0, \ldots, G_q$ are given symmetric matrices such that there exists at least one $(q+1)$-tuplet $\sigma_0, \sigma_1, \ldots, \sigma_q$ such …
3.23. Let $Z(k) = (Z_{ij}(k))$, where $i = 1, \ldots, p$, $j = 1, \ldots, q$, and $k = 1, 2, \ldots$, be a sequence of random matrices. Let one norm of a matrix $A$ be $N_1(A) = \max_{i,j} |a_{ij}|$, and another be $N_2(A) = \sum_{i,j} \ldots$
3.22. (Sec. 3.5) Show that …
3.21. (Sec. 3.5) Demonstrate Lemma 3.5.1 using integration by parts.
3.20. (Sec. 3.4) Show that …
3.19. (Sec. 3.4) Prove $(1/N)\sum_{\alpha=1}^{N}(x_\alpha - \mu)(x_\alpha - \mu)'$ is an unbiased estimator of $\Sigma$ when $\mu$ is known.
3.18. (Sec. 3.4) Prove $I - \Phi(\Phi + \Sigma)^{-1} = \Sigma(\Phi + \Sigma)^{-1}$ and $\Phi - \Phi(\Phi + \Sigma)^{-1}\Phi = (\Phi^{-1} + \Sigma^{-1})^{-1}$.
3.17. (Sec. 3.2) Prove that $\Pr\{|A| = 0\} = 0$ for $A$ defined by (4) when $N > p$. [Hint: Argue that if $Z^* = (z_1, \ldots, z_p)$, then $|Z^*| \ne 0$ implies $A = Z^*Z^{*\prime} + \sum_{\alpha=p+1}^{N-1} z_\alpha z_\alpha'$ is positive definite. Prove $\Pr\{|Z^*| = 0\} = 0 \ldots$]
3.16. (Sec. 3.3) Prove that $\bar x$ and $S$ have efficiency $[(N-1)/N]^{p(p+1)/2}$ for estimating $\mu$ and $\Sigma$.
3.15. (Sec. 3.3) Efficiency of the mean. Prove that $\bar x$ is efficient for estimating $\mu$.
3.14. (Sec. 3.3) Prove that the power of the test in (19) is a function only of $p$ and $[N_1 N_2/(N_1 + N_2)](\mu^{(1)} - \mu^{(2)})'\Sigma^{-1}(\mu^{(1)} - \mu^{(2)})$, given …
3.13. (Sec. 3.3) Let $x_\alpha$ be distributed according to $N(\gamma c_\alpha, \Sigma)$, $\alpha = 1, \ldots, N$, where $\sum c_\alpha^2 > 0$. Show that the distribution of $g = (1/\sum c_\alpha^2)\sum c_\alpha x_\alpha$ is $N[\gamma, (1/\sum c_\alpha^2)\Sigma]$. Show that $E = \sum_\alpha (x_\alpha - g c_\alpha)(x_\alpha - g c_\alpha)' \ldots$
3.12. (Sec. 3.2) Prove Lemma 3.2.2 by using Lemma 3.2.3 and showing $N \log|C| - \operatorname{tr} CD$ has a maximum at $C = ND^{-1}$ by setting the derivatives of this function with respect to the elements of $C = \Sigma^{-1}$ equal to 0.
3.11. (Sec. 3.2) Estimation of parameters of a complex normal distribution. Let $z_1, \ldots, z_N$ be $N$ observations from the complex normal distribution with mean $\theta$ and covariance matrix $P$. (See Problem 2.64.)
3.10. (Sec. 3.2) Estimation of $\Sigma$ when $\mu$ is known. Show that if $x_1, \ldots, x_N$ constitute a sample from $N(\mu, \Sigma)$ and $\mu$ is known, then $(1/N)\sum_{\alpha=1}^{N}(x_\alpha - \mu)(x_\alpha - \mu)'$ is the maximum likelihood estimator of $\Sigma$.
3.9. (Sec. 7.2) Show that … (Note: When $p = 1$, the left-hand side is the average squared difference of the observations.)
3.8. (Sec. 3.2) Prove Lemma 3.2.2 by induction. [Hint: Let $H_1 = h_{11}$, … and use Problem 2.36.]
3.7. (Sec. 3.2) Invariance of the sample correlation coefficient. Prove that $r_{12}$ is an invariant characteristic of the sufficient statistics $\bar x$ and $S$ of a bivariate sample under location and scale transformations …
3.6. Find $\hat\mu$, $\hat\Sigma$, and $(\hat\rho_{ij})$ for Iris setosa from Table 3.4, taken from Edgar Anderson's famous iris data [Fisher (1936)].
3.5. (Sec. 3.2) Let $x_1$ be the body weight (in kilograms) of a cat and $x_2$ the heart weight (in grams). [Data from Fisher (1947b).] (a) In a sample of 47 female cats the relevant data are … Find $\hat\mu$, $\hat\Sigma$, $S$, …
3.4. (Sec. 3.2) Use the facts that $|C^*| = \prod \lambda_i$, $\operatorname{tr} C^* = \sum \lambda_i$, and $C^* = I$ if $\lambda_1 = \cdots = \lambda_p = 1$, where $\lambda_1, \ldots, \lambda_p$ are the characteristic roots of $C^*$, to prove Lemma 3.2.2. [Hint: Use $f$ as given in …]
3.3. (Sec. 3.2) Compute $\hat\mu$, $\hat\Sigma$, $S$, and $P$ for the following pairs of observations: (34, 55), (12, 29), (33, 75), (44, 89), (89, 62), (59, 69), (50, 41), (88, 67). Plot the observations.
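The four quantities are the sample mean $\hat\mu$, the maximum likelihood covariance estimate $\hat\Sigma$ (divisor $N$), the unbiased covariance $S$ (divisor $N-1$), and the sample correlation matrix $P$. A sketch:

```python
import numpy as np

x = np.array([(34, 55), (12, 29), (33, 75), (44, 89),
              (89, 62), (59, 69), (50, 41), (88, 67)], dtype=float)
N = len(x)
mu_hat = x.mean(axis=0)                    # sample mean vector
S = np.cov(x, rowvar=False)                # unbiased covariance (divisor N-1)
sigma_hat = S * (N - 1) / N                # maximum likelihood estimate
P = np.corrcoef(x, rowvar=False)           # sample correlation matrix
print(mu_hat, sigma_hat, S, P, sep="\n")
```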
3.2. (Sec. 3.2) Verify the numerical results of (21).
3.1. (Sec. 3.2) Find $\hat\mu$, $\hat\Sigma$, and $(\hat\rho_{ij})$ for the data given in Table 3.3, taken from Frets (1921).
2.68. (Sec. 2.7) For the multivariate $t$-distribution with density (41), show that $\mathscr{E}X = \nu$ and $\mathscr{C}(X) = [m/(m-2)]\,\ldots$
2.67. (Sec. 2.2) Show that $\int_{-a}^{a} e^{-x^2/2}\,dx/\sqrt{2\pi}$ is approximately $(1 - e^{-2a^2/\pi})^{1/2}$. [Hint: The probability that $(X, Y)$ falls in a square is approximately the probability that $(X, Y)$ falls in a …]
2.66. Show that the characteristic function of $Z$ defined in Problem 2.64 is …, where $\Re(x + iy) = x$.
2.65. Complex normal (continued). If $Z$ has the complex normal distribution of Problem 2.64, show that $W = AZ$, where $A$ is a nonsingular complex matrix, has the complex normal distribution with mean $A\theta$ and covariance matrix $APA^*$.
2.64. Complex normal distribution. Let $(X', Y')'$ have a normal distribution with mean vector $(\mu_X', \mu_Y')'$ and covariance matrix …, where $\Gamma$ is positive definite and $\Psi = -\Psi'$ (skew symmetric). Then $Z = X + iY$ …
2.63. (Sec. 2.6) Suppose $X$ is distributed according to $N(0, \Sigma)$. Let $\Sigma = (\sigma_1, \ldots, \sigma_p)$. Prove …, where $\varepsilon_i$ is a column vector with 1 in the $i$th position and 0's elsewhere.
2.62. (Sec. 2.6) Let the density of $(X, Y)$ be $2\,n(x \mid 0, 1)\,n(y \mid 0, 1)$ for $0 \le y \le x < \infty$, $0 \le -x \le y < \infty$, $0 \le -y \le -x \ldots$
2.61. (Sec. 2.6) Verify (25) and (26) by using the transformation $X - \mu = CY$, where $\Sigma = CC'$, and integrating the density of $Y$.
2.60. (Sec. 2.6) Let $Y$ be distributed according to $N(0, I)$. Differentiating the characteristic function, verify (25) and (26).
2.59. (Sec. 2.6) Prove Lemma 2.6.2 in detail.