Questions and Answers of Nonparametric Statistical Inference
2.58. (Sec. 2.5) Suppose X^{(1)} and X^{(2)}, of q and p − q components respectively, have the density proportional to e^{−Q/2}, where Q = (x^{(1)} − μ^{(1)})′A_{11}(x^{(1)} − μ^{(1)}) + (x^{(1)} − μ^{(1)})′A_{12}(x^{(2)} − μ^{(2)}) + (x^{(2)} − μ^{(2)})′A_{21}(x^{(1)} − μ^{(1)}) + (x^{(2)} − μ^{(2)})′A_{22}(x^{(2)} − μ^{(2)}).
2.57. (Sec. 2.5) Invariance of the partial correlation coefficient. Prove that ρ_{12·3,…,p} is invariant under the transformations X*_i = a_i X_i + b_i′X^{(3)} + c_i, a_i > 0, i = 1, 2, X^{(3)*} = CX^{(3)} + d.
2.56. (Sec. 2.5) Prove by matrix algebra that Σ_{11} − (Σ_{12} Σ_{13}) [Σ_{22} Σ_{23}; Σ_{32} Σ_{33}]^{-1} (Σ_{21}; Σ_{31}) = Σ_{11} − Σ_{13}Σ_{33}^{-1}Σ_{31} − (Σ_{12} − Σ_{13}Σ_{33}^{-1}Σ_{32})(Σ_{22} − Σ_{23}Σ_{33}^{-1}Σ_{32})^{-1}(Σ_{21} − Σ_{23}Σ_{33}^{-1}Σ_{31}).
2.55. (Sec. 2.5) Show E(X^{(1)} | x^{(2)}, x^{(3)}) = μ^{(1)} + Σ_{13}Σ_{33}^{-1}(x^{(3)} − μ^{(3)}) + (Σ_{12} − Σ_{13}Σ_{33}^{-1}Σ_{32})(Σ_{22} − Σ_{23}Σ_{33}^{-1}Σ_{32})^{-1}[x^{(2)} − μ^{(2)} − Σ_{23}Σ_{33}^{-1}(x^{(3)} − μ^{(3)})].
2.54. (Sec. 2.5) Use Problem 2.53 to show that x′Σ^{-1}x = (x^{(1)} − Σ_{12}Σ_{22}^{-1}x^{(2)})′ Σ_{11·2}^{-1} (x^{(1)} − Σ_{12}Σ_{22}^{-1}x^{(2)}) + x^{(2)}′Σ_{22}^{-1}x^{(2)}.
2.53. (Sec. 2.5) Show that Σ^{-1} = [Σ_{11·2}^{-1}, −Σ_{11·2}^{-1}B; −B′Σ_{11·2}^{-1}, Σ_{22}^{-1} + B′Σ_{11·2}^{-1}B], where B = Σ_{12}Σ_{22}^{-1} and Σ_{11·2} = Σ_{11} − Σ_{12}Σ_{22}^{-1}Σ_{21}. [Hint: Use Theorem A.3.3 of the Appendix and the fact that Σ^{-1} is symmetric.]
2.52. (Sec. 2.5) Verify that Σ_{12}Σ_{22}^{-1} = −Ψ_{11}^{-1}Ψ_{12}, where Ψ = Σ^{-1} is partitioned similarly to Σ.
2.51. (Sec. 2.5) Show that for any vector function h(x^{(2)}), E[X^{(1)} − h(X^{(2)})][X^{(1)} − h(X^{(2)})]′ − E[X^{(1)} − E(X^{(1)}|X^{(2)})][X^{(1)} − E(X^{(1)}|X^{(2)})]′ is positive semidefinite. Note this generalizes Theorem 2.5.3 and Problem 2.49.
2.50. (Sec. 2.5) Show that for any function h(X^{(2)}) and any joint distribution of X_i and X^{(2)} for which the relevant expectations exist, the correlation between X_i and h(X^{(2)}) is not greater than the multiple correlation between X_i and X^{(2)}.
2.49. (Sec. 2.5) Show that for any function h(X^{(2)}) and any joint distribution of X_i and X^{(2)} for which the relevant expectations exist, E[X_i − h(X^{(2)})]² = E[X_i − g(X^{(2)})]² + E[g(X^{(2)}) − h(X^{(2)})]², where g(X^{(2)}) = E[X_i | X^{(2)}].
2.48. (Sec. 2.5) Show that for any joint distribution for which the expectations exist and any function h(X^{(2)}), E[X_i − h(X^{(2)})]² ≥ E[X_i − E(X_i|X^{(2)})]². [Hint: In the above take the expectation first with respect to X_i conditional on X^{(2)} = x^{(2)}.]
2.47. (Sec. 2.5) Prove ρ_{12·3,…,p} = −σ^{12}/√(σ^{11}σ^{22}), where σ^{ij} are the elements of Σ^{-1}. [Hint: Apply Theorem A.3.2 of the Appendix to the cofactors used to calculate σ^{ij}.]
2.46. (Sec. 2.5) Show ρ²_{ij·q+1,…,p} = β_{ij·q+1,…,p} β_{ji·q+1,…,p}.
2.45. (Sec. 2.5) Show 1 − R²_{i·q+1,…,p} = (1 − ρ²_{ip})(1 − ρ²_{i,p−1·p}) ⋯ (1 − ρ²_{i,q+1·q+2,…,p}). [Hint: Use (19) and (27) successively.]
2.44. (Sec. 2.5) Give a necessary and sufficient condition for R_{i·q+1,…,p} = 0 in terms of σ_{i,q+1}, …, σ_{ip}.
2.43. (Sec. 2.5) Prove β_{ik·q+1,…,k−1,k+1,…,p} = σ_{ik·q+1,…,k−1,k+1,…,p}/σ_{kk·q+1,…,k−1,k+1,…,p}, i = 1, …, q, k = q + 1, …, p, where σ_{jj·q+1,…,k−1,k+1,…,p} is defined for j = i, k. [Hint: Prove this for the special case k = q + 1 by using Problem
2.42. (Sec. 2.5) If p = 2, can there be a difference between the simple correlation between X_1 and X_2 and the multiple correlation between X_1 and X^{(2)} = X_2? Explain.
2.41. (Sec. 2.5) Prove 1 − R²_{1·23} = (1 − ρ²_{13})(1 − ρ²_{12·3}). [Hint: Use the fact that the variance of X_1 in the conditional distribution given X_2 and X_3 is (1 − R²_{1·23})σ_{11}.]
2.40. (Sec. 2.5) Let (X_1, X_2) have the density n(x | 0, Σ) = f(x_1, x_2). Let the density of X_2 given X_1 = x_1 be f(x_2|x_1). Let the joint density of X_1, X_2, X_3 be f(x_1, x_2)f(x_3|x_1). Find the
2.39. (Sec. 2.5) Prove β_{12·3} = σ_{12·3}/σ_{22·3} = ρ_{12·3}σ_{1·3}/σ_{2·3} and β_{13·2} = σ_{13·2}/σ_{33·2} = ρ_{13·2}σ_{1·2}/σ_{3·2}, where σ_{j·k} = √σ_{jj·k}.
2.38. (Sec. 2.5) Prove equality holds in Problem 2.37 if and only if Σ is diagonal.
2.37. (Sec. 2.5) Prove Hadamard's inequality |Σ| ≤ ∏_i σ_{ii}. [Hint: Using Problem 2.36, prove |Σ| ≤ σ_{11}|Σ_{22}|, where Σ_{22} is (p − 1) × (p − 1), and apply induction.]
2.36. (Sec. 2.5) Prove explicitly that if Σ is positive definite, |Σ| = |Σ_{22}|(σ_{11} − σ_{(1)}′Σ_{22}^{-1}σ_{(1)}), where σ_{(1)} = (σ_{12}, …, σ_{1p})′.
2.35. (Sec. 2.5) Find the multiple correlation coefficient between X_1 and (X_2, X_3) in Problem 2.29.
2.34. (Sec. 2.5) Prove that ρ_{ij·k} = (ρ_{ij} − ρ_{ik}ρ_{kj})/√((1 − ρ²_{ki})(1 − ρ²_{kj})), k, j = q + 1, …, p.
2.33. (Sec. 2.5) Invariance of the multiple correlation coefficient. Prove that R_{i·q+1,…,p} is an invariant characteristic of the multivariate normal distribution of X_i and X^{(2)} under the
2.32. (Sec. 2.5) (a) Show that finding α to maximize the absolute value of the correlation between X_i and α′X^{(2)} is equivalent to maximizing (σ_{(i)}′α)² subject to α′Σ_{22}α constant. (b) Find α by
2.31. (Sec. 2.5) Verify (20) directly from Theorem 2.5.1.
2.30. (Sec. 2.5) In Problem 2.9, find the conditional distribution of X_1 and X_2 given X_3 = x_3.
2.29. (Sec. 2.5) Let μ = 0 and Σ = [1.00, 0.80, −0.40; 0.80, 1.00, −0.56; −0.40, −0.56, 1.00]. (a) Find the conditional distribution of X_1 and X_3, given X_2 = x_2. (b) What is the partial correlation between X_1 and X_3 given X_2?
2.28. (Sec. 2.5) In each part of Problem 2.6, find the conditional distribution of X given Y = y, find the conditional distribution of Y given X = x, and plot each regression line on the appropriate
2.27. (Sec. 2.4) Prove that if the joint (marginal) distribution of X_1 and X_2 is singular (that is, degenerate), then the joint distribution of X_1, X_2, and X_3 is singular.
2.26. (Sec. 2.4) Let Σ = … (a) Find a vector u ≠ 0 so that Σu = 0. [Hint: Take cofactors of any column.] (b) Show that any matrix of the form G = (H u), where H is 3 × 2, has the property … (c) Using (a) and
2.25. (Sec. 2.4) Let X have a (singular) normal distribution with mean 0 and covariance matrix Σ. (a) Prove Σ is of rank 1. (b) Find a so that X = aY, where Y has a nonsingular (univariate) normal distribution, and give the
2.24. (Sec. 2.4) Let (X_1, Y_1)′, (X_2, Y_2)′, (X_3, Y_3)′ be independently distributed, (X_i, Y_i)′ according to …, i = 1, 2, 3. (a) Find the distribution of the six variables. (b) Find the distribution of (X̄, Ȳ)′.
2.23. (Sec. 2.4) Let X_1, …, X_N be independently distributed with X_i having distribution N(β + γz_i, σ²), where z_i is a given number, i = 1, …, N, and Σ_i z_i = 0. (a) Find the distribution
2.22. (Sec. 2.4) Let X_1, …, X_N be independently distributed, each according to N(μ, σ²). (a) What is the distribution of X = (X_1, …, X_N)′? Find the vector of means and the covariance matrix. (b)
2.21. (Sec. 2.4) Let X = (X_1, X_2)′, where X_1 = X and X_2 = aX + b and X has the distribution N(0, 1). Find the cdf of X.
2.20. (Sec. 2.4) What is the distribution of X_1 + 2X_2 − 3X_3 when X_1, X_2, X_3 have the distribution defined in Problem 2.9?
2.19. (Sec. 2.4) What is the distribution of Z = X - Y when X and Y have each of the densities in Problem 2.6?
2.18. (Sec. 2.4) (a) Write the marginal density of X for each case in Problem 2.6. (b) Indicate the marginal distribution of X for each case in Problem 2.7 by the notation N(a, b). (c) Write the marginal
2.17. (Sec. 2.4) Which densities in Problem 2.7 define distributions in which X and Y are independent?
2.16. (Sec. 2.4) Find necessary and sufficient conditions on A so that AY + X has a continuous cdf.
2.15. (Sec. 2.4) Show that when X is normally distributed the components are mutually independent if and only if the covariance matrix is diagonal.
2.14. (Sec. 2.3) Concentration ellipsoid. Let the density of the p-component Y be f(y) = Γ(½p + 1)/[(p + 2)π]^{p/2} for y′y ≤ p + 2 and 0 elsewhere. Then E Y = 0 and E YY′ = I (Problem 7.4). From this result prove
2.13. (Sec. 2.3) Prove that if ρ_{ij} = ρ, i ≠ j, i, j = 1, …, p, then ρ ≥ −1/(p − 1).
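As a quick numerical illustration of the bound in 2.13 (a sketch, not part of the original exercise): the p × p equicorrelation matrix (1 − ρ)I + ρJ has eigenvalues 1 − ρ (multiplicity p − 1) and 1 + (p − 1)ρ, so it is positive semidefinite exactly when ρ ≥ −1/(p − 1).

```python
import numpy as np

def equicorr(p, rho):
    """p x p matrix with unit diagonal and constant off-diagonal entry rho."""
    return (1 - rho) * np.eye(p) + rho * np.ones((p, p))

p = 4
# At the boundary rho = -1/(p-1), the smallest eigenvalue 1 + (p-1)*rho hits zero;
# just below it, the matrix fails to be a valid correlation matrix.
lam_at = np.linalg.eigvalsh(equicorr(p, -1 / (p - 1))).min()
lam_below = np.linalg.eigvalsh(equicorr(p, -1 / (p - 1) - 0.05)).min()
```

With p = 4 the boundary is ρ = −1/3: `lam_at` is zero up to rounding, while `lam_below` is negative.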
2.12. (Sec. 2.3) Show that if Pr{X ≥ 0, Y ≥ 0} = α for the standardized bivariate normal distribution with correlation ρ, then ρ = cos(1 − 2α)π. [Hint: Let X = U, Y = ρU + √(1 − ρ²)V and verify ρ = cos 2π(½ − α) geometrically.]
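The identity in 2.12 can be checked against the classical orthant-probability formula α = ¼ + arcsin(ρ)/(2π) (a sketch added here; the value of ρ is arbitrary):

```python
import math

rho = 0.6
# Orthant probability for a standardized bivariate normal with correlation rho
alpha = 0.25 + math.asin(rho) / (2 * math.pi)
# Inverting via the identity in the problem:
# cos((1 - 2*alpha)*pi) = cos(pi/2 - asin(rho)) = sin(asin(rho)) = rho
back = math.cos((1 - 2 * alpha) * math.pi)
print(round(back, 12))  # prints 0.6
```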
2.11. (Sec. 2.3) Suppose the scalar random variables X_1, …, X_n are independent and have a density which is a function only of x_1² + ⋯ + x_n². Prove that the X_i are normally distributed with mean 0 and
2.10. (Sec. 2.3) Prove that the principal axes of (55) of Section 2.3 are along the 45° and 135° lines with lengths 2√(c(1 + ρ)) and 2√(c(1 − ρ)), respectively, by transforming according to y_1 = (z_1 +
2.9. (Sec. 2.3) Let b = 0 and A = [7, 3, 2; 3, 4, 1; 2, 1, 2]. (a) Write the density (23). (b) Find Σ.
2.8. (Sec. 2.3) For each matrix A in Problem 2.7, find C so that C′AC = I.
2.7. (Sec. 2.3) Find b and A so that the following densities can be written in the form of (23). Also find μ_x, μ_y, σ_x, σ_y, and ρ_{xy}. (a) (1/2π) exp{−½[(x − 1)² + (y − 2)²]}. (b) …
2.6. (Sec. 2.3) Sketch the ellipses f(x, y) = 0.06, where f(x, y) is the bivariate normal density with (a) μ_x = 1, μ_y = 2, σ_x² = 1, σ_y² = 1, ρ_{xy} = 0. (b) μ_x = 0, μ_y = 0, σ_x² = 1, σ_y² = 1, ρ_{xy}
2.5. (Sec. 2.2) Show that if the set X_1, …, X_r is independent of the set X_{r+1}, …, X_p, then E[g(X_1, …, X_r)h(X_{r+1}, …, X_p)] = E[g(X_1, …, X_r)] · E[h(X_{r+1}, …, X_p)].
2.4. (Sec. 2.2) Let F(x_1, x_2) be the joint cdf of X_1, X_2, and let F_i(x) be the marginal cdf of X_i, i = 1, 2. Prove that if F_i(x) is continuous, i = 1, 2, then F(x_1, x_2) is continuous.
2.3. (Sec. 2.2) Let f(x, y) = C for x² + y² ≤ k² and 0 elsewhere. Prove C = 1/(πk²), E X = E Y = 0, E X² = E Y² = k²/4, and E XY = 0. Are X and Y independent?
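The moments claimed in 2.3 can be checked by Monte Carlo (a rejection-sampling sketch added here, not part of the original exercise): sample uniformly on the disk of radius k and compare sample moments with k²/4 and 0.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 2.0
# Rejection sampling: uniform points on the square, kept if inside the disk
pts = rng.uniform(-k, k, size=(200_000, 2))
pts = pts[(pts ** 2).sum(axis=1) <= k ** 2]
ex2 = (pts[:, 0] ** 2).mean()        # should be close to k^2 / 4 = 1.0
exy = (pts[:, 0] * pts[:, 1]).mean()  # should be close to 0
```

Note that E XY = 0 yet X and Y are not independent: the support is a disk, not a product set.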
2.2. (Sec. 2.2) Let f(x, y) = 2, 0 ≤ y ≤ x ≤ 1; = 0, otherwise. Find: (a) F(x, y). (b) F(x). (c) f(x). (d) G(y). (e) g(y). (f) f(x|y). (g) f(y|x). (h) E XᵐYⁿ. (i) Are X and Y independent?
2.1. (Sec. 2.2) Let f(x, y) = 1, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1; = 0, otherwise. Find: (a) F(x, y). (b) F(x). (c) f(x). (d) f(x|y). [Note: f(x₀|y₀) = 0 if f(x₀, y₀) = 0.] (e) E XᵐYⁿ. (f) Prove X and Y are independent.
6. Following Kozumi (2002), simulate data under an endogenous switching model with y_i ∼ Po(x_i + T_i + u_{i1}), T*_i = 1 + 2z_i + u_{i2}, x_i ∼ N(0, 1), z_i ∼ N(0, 1), u_{i,1:2} ∼ N(0, Σ_u), with (see
5. Suppose a binary response has true prevalence Pr(Y = 1) = π but that observed responses are subject to misclassification with probabilities α₀ = Pr(y = 1|Y = 0) and α₁ = Pr(y = 0|Y = 1).
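A direct consequence of this setup is that the observed prevalence is Pr(y = 1) = π(1 − α₁) + (1 − π)α₀; a one-line helper (added here as a sketch) makes the relationship concrete:

```python
def observed_prev(pi, alpha0, alpha1):
    """Pr(y = 1) when true prevalence is pi, with false-positive rate alpha0
    (Pr(y=1 | Y=0)) and false-negative rate alpha1 (Pr(y=0 | Y=1))."""
    return pi * (1 - alpha1) + (1 - pi) * alpha0

print(observed_prev(0.3, 0.05, 0.1))  # 0.3*0.9 + 0.7*0.05 = 0.305
```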
4. Consider the normal linear non-differential measurement error model for i = 1, …, n: x_i ∼ N(X_i, 1/τ_δ), y_i ∼ N(β₀ + β₁X_i + β₂Z_i, 1/τ_ε), X_i ∼ N(α₀ + α₁Z_i, 1/τ_η). Assume flat
3. Generate data following the scheme used by Zellner (1971, p. 137) for i = 1, …, 20 points, namely y_i = α + βX_i + ε_i, X_i ∼ N(μ_X, σ²_η), x_i = X_i + δ_i, with α = 2, β = 1, μ_X = 5,
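The generating scheme in exercise 3 can be sketched as follows; the error scales below are placeholders, since the excerpt truncates before giving Zellner's actual values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
alpha, beta, mu_X = 2.0, 1.0, 5.0
sigma_eta, sigma_delta, sigma_eps = 2.0, 1.0, 1.0  # placeholder scales (source truncated)

X = rng.normal(mu_X, sigma_eta, n)         # true but unobserved regressor
x = X + rng.normal(0.0, sigma_delta, n)    # observed regressor with measurement error
y = alpha + beta * X + rng.normal(0.0, sigma_eps, n)

# Naive OLS of y on the error-prone x is attenuated towards 0 relative to beta
naive_slope = np.polyfit(x, y, 1)[0]
```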
2. Data on corn yield y and nitrogen x are analysed by Fuller (1987, p. 18), who applies the identifiability restriction σ²_δ = 57 in a normal linear measurement error model y_i = β₀ + β₁X_i + ε_i,
1. Consider the normal measurement error model for (y, X, x|Z) with y_i|X_i, Z_i ∼ N(α + βX_i + γZ_i, σ²_ε), x_i|X_i ∼ N(X_i, σ²_δ), X_i|Z_i ∼ N(μ_X + κZ_i, σ²_η), where Z is error free. Show
9. In Example 14.10, apply a model with two sets of spatial effects and a constraint on the overall means, where s_{i1} and s_{i2} follow ICAR(1) priors and are centred at each iteration.
8. Modify the analysis in Example 14.6 to allow for non-ignorable missingness – namely the probability of response varying over the eight complete cells.
7. Consider 2001 census data on religious adherence in the 33 London boroughs with K = 5 categories (Christian, Hindu and Sikh, Muslim, Other religion, No religion). The totals Sik by borough i , and
6. In Example 14.4 consider the following variant on (14.12), namely log(φ_{ijk}) = M + γ_i + δ_j + η_k + α_{ij} + (ω_{1i} + ω_{2j})ξ_k, where ξ₂ > ξ₁ for unique labelling and each set of ω parameters
5. In Example 14.3, use an MNAR model to generate missing values in y2, namely Ri ∼ Bern(πi ), Probit(πi ) = η0 + η1Yi2, where η0 = 0, η1 = 1. At the imputation stage generate five complete
4. In Example 14.3, use the approximate Bayesian bootstrap to generate K = 5 imputed datasets and compare inferences on the pooled slope β.
3. In Example 14.2, try a trivariate factor model in which λ₃₂ is unknown and the factors have an unknown covariance matrix. Does this modification affect model conclusions regarding correlations
2. In Example 14.1, consider the generalisation to taking the residual variance specific to dropout status and assess changes in inference regarding drug efficacy.
1. In Example 14.1, adapt the procedure suggested by Hedeker and Gibbons (1997) to obtain population-wide estimates of the fixed effects (Intercept, Time, Drug, and Drug × Time), averaging over the
8. In Example 13.11 (math dropout), try a linear trend model for the effect of female gender and compare its fit to the general time-varying regression effect model h(T = j |Xi ) =F(αj + Xiβj ).
7. In Example 13.10 (head and neck cancer), retaining the existing time partition, fit a random walk intercept model with the prior differentiated by treatment group. Second, redefine the partition
6. In Example 13.9 (small cell lung cancer), find the median survival times under each group in the two-group discrete mixture model (i.e. four possible median survival times, one for each group and
5. In Example 13.8 include a two-component discrete model varying on the Weibull slope as well as the regression intercept. Sample replicate times from this model to ascertain whether the 95%
4. Fit the gastric cancer data in Example 13.5 using a grid (J = 78 intervals) defined using every distinct failure time.
3. In Example 13.2 compare a 5-point discrete mixture on the log-logistic shape parameter with the variable scale model to downweight aberrant cases, namely ui ∼ L(ηi , 1/(κθi ))where θi are
2. In Example 13.1, assess the health status score effect for nonlinearity using one of the techniques from Chapter 10, for example a quadratic spline with knots at 25, 35, 45, 55, 65, 75 and 85. How
1. In Example 13.1, fit a non-proportional model where the Weibull shape parameter differs between squamous (α1) and the other cell types (a common parameter α2 for all three other types) (Aitkin
12.7 In Example 12.7 try a cubic structural model y_i = β₁ + β₂F_{i1} + β₃F_{i1}² + β₄F_{i1}³ + u_i and assess its predictive performance against the quadratic model. Try other values of k apart from 1000
12.6 Generate data according to the logit–logit latent trait model of Bartholomew(1987). There are 100 subjects and P = 5 binary items and Q = 1 factor. The generating program is model{ for (h in
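Since the excerpt truncates the generating program for exercise 12.6, here is a hedged Python sketch of a one-factor latent trait data generator with a logit link, N = 100 subjects and P = 5 binary items; the loadings and intercepts are illustrative values, not Bartholomew's.

```python
import numpy as np

rng = np.random.default_rng(42)
N, P = 100, 5
# Hypothetical item slopes and intercepts (the source values are not in the excerpt)
a = np.array([1.0, 0.8, 1.2, 0.6, 1.0])   # slopes on the single factor (Q = 1)
b = np.zeros(P)                            # item intercepts
f = rng.normal(size=(N, 1))                # latent factor score per subject
p = 1.0 / (1.0 + np.exp(-(b + f * a)))     # item response probabilities, logit link
y = rng.binomial(1, p)                     # N x P matrix of binary item responses
```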
12.5 In Example 12.6 (attitudes to science and technology) try a one factor model and assess its fit against the two factor model fitted above.
12.4 Repeat the latent trait analysis in Example 12.5 but apply the posterior predictive check procedure proposed by Ansari and Jedidi (2000). This involves 45 correlations based on odds ratios ω =
12.3 Consider a latent class analysis of the sexual attitudes data in Example 12.5 and compare the options C = 2 and C = 3 using a posterior predictive p test based on a simple chisquare criterion.
12.2 Consider the infant temperament study data of Rubin and Stern (1994), in the form of counts ni jk relating to three behaviour measures of N = 93 infants. These are motor activity at age 4 months
12.1 In Example 12.1 estimate the six equations of the measurement model with scale mixing(equivalent to Student t sampling) and degrees of freedom in each equation as additional unknowns in the
13. In Example 11.11 (scram rates) consider a model with ω varying over time, taking {logit(ω_t), b_t} to follow a bivariate normal random walk. Omit the 10th year's observations (namely replace
12. Consider three-wave data on a skin treatment trial (Saei and McGilchrist, 1998), with the responses yi t being on a 5-point ordinal scale and a categorical predictor namely clinic Ci(1–6) –
11. In Example 11.10 (second model) adopt a reduced model with autocorrelated ei jt excluded, but with multivariate normal and multivariate t (via scale mixing with unknown degrees of freedom) priors
10. In Example 11.9 extend the varying slope model to all research inputs (lags 1 to 5 as well as the contemporary effect), as in (11.11). Following the McNab et al. (2004) strategy, it may be
9. In Example 11.8 apply the augmented data method with λi constant over periods and assess fit as compared to using the subject- and time-specific scale parameters λi t . Also consider both models
8. In Example 11.8 apply the serial odds ratio model of Fitzmaurice and Lipsitz (1995). A possible partial code, in which omega is a positive parameter, loops over subjects i in 1:N and defines z[i, s, t].
7. In Example 11.7 (firm investments), does the conclusion that a non-stationary AR1 model is preferred still hold when permanent random subject effects are added to the model? Thus with u_it ∼
6. In Example 11.6 (Indonesian rice farm data) assess gain from introducing AR1 errors (in addition to unstructured errors) in both random and fixed effects bi models. Also find the posterior
5. In the random intercept model y_it = α + X_it β + b_i + u_it, with b_i ∼ N(0, σ_b²) and u_it ∼ N(0, σ²), obtain the full conditionals, taking τ = 1/σ² ∼ Ga(e, f) and τ_b = 1/σ_b² ∼ Ga(e_b, f_b).
4. Analyse panel data on respiratory infections (Zeger and Karim, 1991), which involves a binary response, using a variable intercept and variable slope on time – see Exercise 11.4.odc. There are
3. In Example 11.3 consider a model introducing a nonlinear IQ effect, with y_ij ∼ N(μ_ij, V_ij). What impact does this have on the level 2 variance of IQ slopes (i.e. the parameter b22)?