Study Help > Business > Linear State Space Systems
Questions and Answers: Linear State Space Systems
Consider the constrained cell means model defined by y_ijr = μ_ij + e_ijr for i = 1, ..., 3, j = 1, ..., 4, and r = 1, ..., n, subject to the stated constraints on the cell means. Use both the model reduction and the Lagrange multiplier methods to
Referring to Example 17.2, with the design matrix X = (J | X₂), let H = X(X'X)⁻¹X' denote the hat matrix, and define H₁ = (1/N)U and H₂ = H − H₁, where U is the N × N matrix of ones. Use Cochran's theorem to determine the joint distribution of the quadratic
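The decomposition behind this exercise (hat matrix split into a mean part and a regression part, plus the residual projector) can be checked numerically. A minimal sketch, assuming an illustrative random design; the names H1, H2, H3 are this sketch's own:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
X2 = rng.normal(size=(N, 2))
X = np.column_stack([np.ones(N), X2])        # X = (J | X2)

H = X @ np.linalg.solve(X.T @ X, X.T)        # hat matrix X(X'X)^{-1}X'
H1 = np.full((N, N), 1.0 / N)                # (1/N)U, U the matrix of ones
H2 = H - H1                                  # regression component
H3 = np.eye(N) - H                           # residual component

# Cochran-style conditions: each projector idempotent, mutually orthogonal,
# and the ranks (traces) sum to N
for A in (H1, H2, H3):
    assert np.allclose(A @ A, A)
assert np.allclose(H1 @ H2, 0) and np.allclose(H2 @ H3, 0)
ranks = [round(np.trace(A)) for A in (H1, H2, H3)]
assert sum(ranks) == N                       # 1 + 2 + (N - 3)
```

Under normality, the three quadratic forms y'Hᵢy are then independent chi-squared variables, which is the structure the exercise asks you to derive.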
Referring to Example 17.3, use the results of Section 17.1.5 to express the model in terms of the sum and difference parameters (e.g., β₁ + β₂ and β₁ − β₂). In particular, specify the transformation matrix M and the new
Let N_H be the numerator sum of squares for testing the hypothesis H₀: Hβ = h in the unconstrained linear model. Show that an equivalent statement of this hypothesis yields the same N_H = (Hβ̂ − h)'[H(X'X)⁻¹H']⁻¹(Hβ̂ − h). Hint: See Appendix
Consider the cell means model y_ij = μ_i + e_ij, i = 1, ..., p, j = 1, ..., n. a. Determine the test statistic for the hypothesis H₀: μ₁ = ... = μ_p using both the model reduction and the Lagrange multiplier methods. b. Specify
Referring to Example 17.3, assume that the response vector is written in partitioned form as y = (y₁', y₂')'. a. Confirm the expression for the inverse of X'X, and determine the estimates of the parameter vector. b.
Refer to Example 17.2, where X = (J | X₂). a. Show that the square of the sample correlation coefficient based on the vector of observations y and the vector of predicted values ŷ is equal to R². b. Show that
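The identity in part a (R² equals the squared correlation between y and ŷ when the model contains an intercept) is easy to confirm numerically. A sketch with an illustrative simulated design; coefficients and sizes are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept included
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
yhat = X @ beta

ss_res = np.sum((y - yhat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
R2 = 1.0 - ss_res / ss_tot                 # coefficient of determination
r = np.corrcoef(y, yhat)[0, 1]             # sample correlation of y and yhat
assert np.isclose(r ** 2, R2)
```

The intercept matters: without J in the column space, ȳ is not the fitted mean and the identity can fail.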
Determine the distribution of Q(β̃). Show that this quadratic form is not independent of Q(β̂); hence (17.66) is not proportional to an F-statistic.
Establish the independence of Q(β̂) and N_H as follows: a. Consider the special case h = 0, noting that both the numerator and the denominator can be written as quadratic forms in y. b. In general, with h ≠ 0, the
Verify: a. The expression for the likelihood ratio statistic in (17.66). b. The alternative ratio in (17.68). c. The distribution of the numerator sum of squares in (17.69). d. The independence of N_H and
Consider the partitioned form of the linear model in Section 17.1.4, where β̂₁ and β̃₁ denote the estimates of β₁ based on the full model and the model reduced by the constraint β₂ = 0. a. Determine the
Generalize the results of Exercise 17.8 to the model y ~ N(Xβ, σ²I). In particular, consider the distribution of z = Ay, where A satisfies the conditions AA' = I_{N−k} and A'A = I − X(X'X)⁻¹X', and show that the maximum
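A matrix A with the two properties this exercise requires can be constructed explicitly from the eigenvectors of the residual projector. A sketch under illustrative dimensions (N, k and the random X are assumptions of this example):

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 12, 3
X = rng.normal(size=(N, k))
M = np.eye(N) - X @ np.linalg.solve(X.T @ X, X.T)   # I - X(X'X)^{-1}X'

# Rows of A: orthonormal eigenvectors of M belonging to eigenvalue 1
vals, vecs = np.linalg.eigh(M)
A = vecs[:, np.isclose(vals, 1.0)].T                # shape (N-k, N)

assert A.shape == (N - k, N)
assert np.allclose(A @ A.T, np.eye(N - k))          # AA' = I_{N-k}
assert np.allclose(A.T @ A, M)                      # A'A = residual projector
```

Because M is idempotent with rank N − k, its eigenvalue-1 eigenvectors span exactly the residual space, so z = Ay collects the N − k "error contrasts" used in the exercise.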
Let y ~ N(μJ, σ²I) and let H denote the Helmert matrix defined in Exercise 16.10, written in partitioned form. a. Determine the distribution of z = Hy. b. Using the marginal distribution of z₁, determine
Develop the model reduction transformations in Section 17.1.5 for the general case g ≠ 0, and compare with the results of Section 17.1.2.
Equation (17.48) gives the difference in the residual sums of squares for the original model and the model under the constraint β₂ = 0. Show that this result is a special case of equation (17.39).
Show that the constrained least squares estimator is BLUE. Hint: Consider a linear function of the form c'y + d, determine the condition for unbiasedness, and then determine the values of c and d to
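The estimator this exercise concerns has the standard Lagrange-multiplier closed form β̃ = β̂ − (X'X)⁻¹G'[G(X'X)⁻¹G']⁻¹(Gβ̂ − g). A numerical sketch under an assumed design and a single assumed constraint Gβ = g:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 40, 3
X = rng.normal(size=(n, p))
G = np.array([[1.0, -1.0, 0.0]])            # illustrative constraint G beta = g
g = np.array([0.0])
beta_true = np.array([2.0, 2.0, 1.0])       # chosen to satisfy the constraint
y = X @ beta_true + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                       # unconstrained least squares
# Lagrange-multiplier adjustment pulling b onto the constraint set
adj = XtX_inv @ G.T @ np.linalg.solve(G @ XtX_inv @ G.T, G @ b - g)
b_c = b - adj

assert np.allclose(G @ b_c, g)              # constraint satisfied exactly
rss = lambda est: np.sum((y - X @ est) ** 2)
assert rss(b) <= rss(b_c) + 1e-9            # constrained fit never beats unconstrained
```

The BLUE argument the hint outlines then shows no other linear unbiased estimator satisfying the constraint has smaller variance.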
a. Show that β̂ and β̃ are both unbiased estimates of β in the constrained model. b. Show that the indicated matrix difference involving G and (X'X)⁻¹ is positive definite. c. Establish the relation between Q(β̃) and Q(β̂) shown in (17.37) and
Verify the properties of the matrix A defined in Lemma 3.1.
a. Show that the expressions for the constrained estimator given in (17.27) and (17.33) are identical. b. Show that the covariance matrices for β̃ in Theorems 17.2 and 17.3 are identical. c. Show that the
Verify that the solution of the likelihood equations given in (17.9) maximizes the likelihood function. (Hint: See Appendix A.II.3.)
Consider the linear model with E[y] and Var[y] = σ₁²V₁ + σ₂²V₂ + σₑ²I as specified in the text. Define the quadratic forms qᵢ = y'Aᵢy by the stated matrices A₁ and A₂. a. Determine the distribution of
Consider the linear model defined in Example 16.3. a. Letting A = A₁ + A₂ + A₃, show that AV = I and r(A) = Σᵢ r(Aᵢ). b. Verify the stated properties of the Aᵢ. c. Determine the expected values of these quadratic forms, and
Consider the linear model y = Wμ + e, where W is as specified, and e ~ N(0, σ²I). Define the quadratic forms q₁ and q₂ by the stated matrices A₁ and A₂. a. Determine the distributions of q₁ and q₂, and show that they are
Suppose that A and B are symmetric matrices. a. Show that there always exists a non-singular matrix M such that M'AM and M'BM are diagonal. Hint: Let C be a non-singular matrix such that C'AC = I, and
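The hint's construction can be carried out numerically. A sketch assuming A is positive definite (which the hint's C'AC = I step requires); B is an arbitrary symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
p = 4
R = rng.normal(size=(p, p))
A = R @ R.T + p * np.eye(p)        # assumed positive definite
S = rng.normal(size=(p, p))
B = S + S.T                        # arbitrary symmetric

L = np.linalg.cholesky(A)          # A = LL'
C = np.linalg.inv(L).T             # then C'AC = L^{-1} A L^{-T} = I
vals, Q = np.linalg.eigh(C.T @ B @ C)
M = C @ Q                          # non-singular

assert np.allclose(M.T @ A @ M, np.eye(p))       # diagonal (identity)
assert np.allclose(M.T @ B @ M, np.diag(vals))   # diagonal
```

The key step is that once A is reduced to I, any orthogonal rotation preserves it, so the rotation diagonalizing C'BC finishes the job.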
Let y ~ N(μ, V), and let A be a symmetric matrix. Show that the distribution of the quadratic form q = y'Ay can be represented as a linear combination of independent, non-central chi-squared variables. In
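The representation rests on the spectral decomposition A = QΛQ': with z = Q'y the components are independent normals and y'Ay = Σ λᵢzᵢ². A sketch for the V = I case (the general case rotates through a square root of V, which this example omits):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 5
S = rng.normal(size=(N, N))
A = S + S.T                         # symmetric, indefinite in general
mu = rng.normal(size=N)
y = mu + rng.normal(size=N)         # one draw from N(mu, I)

lam, Q = np.linalg.eigh(A)          # A = Q diag(lam) Q'
z = Q.T @ y                         # z ~ N(Q'mu, I): independent components
q_direct = y @ A @ y
q_spectral = np.sum(lam * z ** 2)   # weighted sum of (non-central chi^2_1) terms
assert np.isclose(q_direct, q_spectral)
```

Each zᵢ² is a non-central χ²₁ variable, so q is the claimed linear combination, with negative weights allowed when A is indefinite.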
Let y₁ and y₂ be vectors of lengths p₁ and p₂, and let A be a p₁ × p₂ matrix. a. Determine the expected value of the bilinear form q = y₁'Ay₂. Hint: Let y = (y₁', y₂')' and consider the quadratic form y'By for an appropriate
Suppose that y ~ N(μ, V), and define the quadratic forms qᵢ = y'Aᵢy, i = 1, 2, 3, for the matrices Aᵢ given in the text. a. Use Theorem 16.4 to determine the distribution of each quadratic form. b. Use Theorem 16.5 to show the pairwise independence of
Assume that the N-vector y ~ N(μJ, σ²I), and let H be the Helmert matrix defined as follows: the first row of H is (1/√N)J'; the rth row, for r = 2, ..., N, has its first r−1 entries equal to 1/√(r(r−1)), its rth entry equal to −(r−1)/√(r(r−1)), and zeros elsewhere. a. Show that H is an
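The Helmert construction described here is concrete enough to code directly, and orthogonality (the point of part a) can be checked numerically. A sketch following the row description above:

```python
import numpy as np

def helmert(N):
    """Helmert matrix: first row (1/sqrt(N))J'; row r (r >= 2) has r-1
    entries equal to 1/sqrt(r(r-1)), then -(r-1)/sqrt(r(r-1)), then zeros."""
    H = np.zeros((N, N))
    H[0, :] = 1.0 / np.sqrt(N)
    for r in range(2, N + 1):
        c = 1.0 / np.sqrt(r * (r - 1))
        H[r - 1, :r - 1] = c
        H[r - 1, r - 1] = -(r - 1) * c
    return H

H = helmert(6)
assert np.allclose(H @ H.T, np.eye(6))   # H is orthogonal
```

With y ~ N(μJ, σ²I), z = Hy then has independent components, the first carrying the mean and the rest forming the usual sum-of-squares decomposition.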
Prove Corollary 16.4.
a. Spell out the details of the development in the proof of Theorem 16.3.b. Verify the expression for the variance of q in Corollary 16.3, part 1, by differentiating the moment-generating function.c.
We have assumed that V is non-singular since, if not, we can always use the linear relations in y to remove the redundancies and apply our results to the resulting quadratic form. a. Show how this can
The proof of the necessity half of Corollary 16.2, part 2 is tedious. To see the idea, consider the special case k = 2 and assume that 0,0 = 0. Equate the first two moments of both sides of the
Verify the first two moments of χ²(N, δ) given in Corollary 16.2 as follows: a. Use the density function from Theorem 16.2 to evaluate E[q] and E[q²]. b. Determine the first two derivatives of the
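The moments this exercise verifies are, in the convention where δ = μ'μ, E[q] = N + δ and Var[q] = 2N + 4δ (an assumption of this sketch; some texts use δ/2). A Monte Carlo check with an illustrative mean vector:

```python
import numpy as np

rng = np.random.default_rng(6)
N, reps = 4, 200_000
mu = np.array([1.0, -2.0, 0.5, 0.0])
delta = mu @ mu                          # noncentrality parameter mu'mu = 5.25

y = mu + rng.normal(size=(reps, N))      # each row ~ N(mu, I)
q = np.sum(y ** 2, axis=1)               # q ~ chi^2(N, delta)

# Assumed moment formulas: E[q] = N + delta, Var[q] = 2N + 4*delta
assert abs(q.mean() - (N + delta)) < 0.1
assert abs(q.var() - (2 * N + 4 * delta)) < 1.0
```

The exact verification the exercise asks for proceeds from the density or the moment-generating function rather than simulation.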
Derive the distribution of the non-central chi-squared statistic as follows: a. Assume that y ~ N(μ, I), and let z = Ay, where A is an orthogonal matrix whose first row is (1/λ)μ' with λ² = μ'μ. (This rotation
For the special case N = 2, determine the probability content of the sphere described by Equation (16.17). Use this result to confirm that q ~ χ²(2). Hint: Transform to polar coordinates.
Provide the details for the proof of Corollary 16.1. Section 16.3
Verify the results in Equations (16.5), (16.6), and (16.7). Section 16.2
Ostle and Malone (1988) describe an experiment designed to compare the yields of three varieties of a crop that was run as a randomized complete block design with four blocks. A covariate z, is the
For models with covariates we have recommended centering the covariate about some constant prior to introducing the artificial values. Show that the sum of squares of the augmented covariates is
Use a standard linear model program to fit the fixed effects analog of the model in Example 15.10.
Data for Exercise 15.12 (observations by population):
Obs.   Pop. 1   Pop. 2   Pop. 3
1      1.40     1.61     1.67
2      1.79     1.31     1.41
3      1.72     1.12     1.73
4
Use the general description of the AVE computational procedure in Section 14.5 to verify that the expressions in (15.107) and (15.118) are identical. Section 15.6
Use the equations described in Example 15.4 to analyze the data shown below, assuming the one-way classification random model. Begin the iteration with each σ̂ᵢ² = 1. Missing observations are denoted by a
Use a standard linear model program, and compute the analysis of variance tables for the unbalanced data in Examples 15.4 and 15.5. Compare these tables with Tables 15.4 and 15.5. In particular,
a. Verify the results in (15.98) and (15.99) for the one-way classification random model. b. Show that the test statistics in (15.92) and (15.96) are identical for the one-way classification model.
a. Establish the expected value in (15.94) by writing E[σ̂²] as the expectation of a quadratic form in D and expanding, using the stated expressions for E[D] and Var[D]. b. Verify that the first term in
Spell out the details of the relation established in (15.90). Hint: Use results from the partitioned form of the inverse matrix in Appendix A.1.10.
Verify the simplification in (15.84).
Hartley (1967) proposed the following method for computing the expected value of mean squares for the mixed model with unbalanced data. For a given method of computing the AOV table, as if the model
Show that the solution to the MINQUE problem (15.40) is equivalent to one iteration on the REML equations. Hint: Use the method of Lagrange multipliers, with the Lagrangian written as
Verify the relations in (15.34) - (15.36).
a. Verify that the second-order approximation is given by (15.19) and (15.20).b. Verify that E[h] (1/2) and hence the relation in (15.22).c. Verify that the information matrix is given by (15.24).d.
Write the linear model for the situation described by the index set T = {1, 2, 12, 3(12), 4(12), 3(12)4(12)}. a. Describe the implied covariance structure. b. Write the AOV table, including the Kronecker
Write the linear model for the situation described by the index set T = {1, 2, 12, 3(1), 23(1), 4(1), 24(1), 3(1)4(1), 23(1)4(1)}. a. Describe the implied covariance structure. b. Write the AOV table,
For the four-factor model described in Example 14.10, suggest a possible third factor that might have been considered in the Kirk (1995) example.a. Write the linear model and describe the implied
Develop the analysis for the situation described in Example 14.9. In particular:a. Write the appropriate linear model and describe the implied covariance structure.b. Write the AOV table, including
Develop the analysis for the situation described in Example 14.8. In particular: a. Write the appropriate linear model and describe the implied covariance structure. b. Write the mean squares for Table
Recall Example 14.7 with four factors such that the first two factors are crossed, the third factor is nested in the first, and the fourth is nested in the third. Assume that the first two factors
a. Apply the AVE method to the Brownlee three-factor data in Exercise 13.18. b. Determine the expected values of the AVE quadratic forms under the alternative definition of the variance components. Show
Use the general description of L in (14.58) to describe the relation between y and A for the four-way classification model.
a. Verify that (14.54) provides the relation between the AOV expected mean squares and the expected values of the AVE quadratic forms.b. Use the Kronecker product definition of the mean squares in
Brownlee (1960) described an experiment designed to study two different annealing methods used in the making of cans. Three coils of material were selected from the populations of coils made by each of
a. Apply the AVE diagnostic analysis to the Littell, nested-factorial data in Exercise 13.7. In particular, compute the estimate of for each of the four modes.b. Repeat the analysis if factor two,
a. Verify algebraically, using the AOV mean squares and expected values, that (14.38) is the estimate of (1)b. Write the test statistics for the marginal means hypotheses for main effects for Example
Write the AOV table for the two-fold nested model and compare the quadratic forms and the expected values with the AVE in Table 14.4. Section 14.4
Apply the AVE method to examine the variance component estimates in the Thompson-Moore, two-factor data in Exercise 13.5. In particular, note how W, explains the source of the negative estimate. Note
a. In Example 14.1, determine the matrix W as if the fertilizer factor were random. Examine this matrix and the associated scatter-plot matrix for any unusual features. b. Describe the average of the
a. Verify the relation in (14.9) and the expression for , in (14.7).b. Verify the algebraic expression for d in (14.10).c. Show that T12 +12 is given by (+12)/na. Verify the expressions for 12 in
Develop the analysis for Example 13.7 assuming that the term (ed), is included in the model. Write the AOV table and compute the estimates of the variance components Data for Exercise 13.18 Tube ABC
Verify the expected mean squares in Table 13.8. What changes occur if we include the term (ed) ~ N(0, σ²) in the model?
13.18. Brownlee (1960) describes an experiment in bacteriological testing of
Prove Theorem 13.1. The sufficiency follows as in the proof in Section 13.5.2.2. For the necessity, show that the condition of the theorem is satisfied with
13.15. Verify the expression for V in
Verify that the coefficient of 4; in (13.54) is given by (13.56). Hint: From (13.54), the matrices in 4 have coefficients A, and A for s 11. It is sufficient to verify that = As for s 1.b. Verify the
a. Verify the relation in (13.41) for the one- and two-way classification models.b. Suggest a general proof by mathematical induction.
Consider the three-way, cross-classification model with factors one and two and their interaction fixed and all other effects random.a. Describe the covariance structure algebraically and then in
To illustrate the differences in the various methods of writing confidence intervals, consider the one-way classification model described in Section 13.2 with mean squares given in Table 13.1.a.
Verify the confidence intervals for Example 13.2 given in Section 13.5. Note that you may use Table C-1, noting that χ²(α; d) cuts off area α in (χ²(α; d), ∞) and that F(1−α; d₁, d₂) = 1/F(α; d₂, d₁).
Verify the expression for V in (13.36).
a. For the nested-factorial model in (13.27), verify the covariance structure in (13.28) and the matrix expressions in (13.29) and (13.31).b. Verify the expressions for the expected mean squares in
Thompson and Moore (1963) described a study that examined the muzzle velocity of a type of ammunition as a function of propelling charges and projectiles. The objective of the study was to examine
Using (13.26), write 100(1−α)% separate-t and Scheffé confidence intervals for machine differences as estimated by the marginal means.
Using the expression for V in (13.20), verify the expected mean squares in Table 13.4.
For the two-way classification random model, verify the covariance structure in (13.14) and the matrix expressions in (13.15) and (13.29). Use the expressions for the sums of squares in Table 11.4 to
a. For the model in (13.2), verify the covariance structure in (13.3) and the matrix expressions in (13.4) and (13.7).b. Verify the expressions for the expected values of the quadratic forms in
For the nested factorial models defined by and Ta. Develop the hypothesis matrices for the usual preliminary tests.b. Develop the associated reparameterized model.c. Develop the AOV table including
a. Verify the relation in (12.94) by considering the various possibilities if rm. The examples and I will be useful.b. Verify the general expression for the matrix of the numerator sum of squares
Verify that the model for the nested factorial in Section 12.5 is given by the general notation in Section 12.6.
a. Write the transformation matrix for the nested factorial model associated with the hypotheses in Table 12.9 and determine the inverse of that matrix.b. Determine the parameter matrix and the basic
Hocking (1985) described an example for which the nested factorial model is appropriate. Observations are taken at laundromats using different types of washing machines and different detergents. For
In the two-fold nested model, with unequal but non-zero cell frequencies, the parameterization in (12.46) is often replaced by either of the following definitions of the parameters:
Determine the transformation matrix and its inverse for the parameters as defined in (12.63).
Describe the extension of the second form of reparameterization in (11.45) to the two-fold nested model.
Using the terminology of Example 12.2 for the unbalanced, two-fold nested model;a. Show that the sum of squares for testing the college-effect hypothesis defined by (12.60) is as given in Ne" in
Using the terminology of Example 12.2 for the unbalanced, two-fold nested model: a. Write the college-effect hypothesis, H_C, in matrix form. b. Using the Lagrange multiplier method, or any
For the two-fold nested model, using the terminology of Example 12.2, verify that the numerator sums of squares for testing the hypotheses H_C and H_{D(C)} are given by R(α) and R(τ). Also show that R(α)
Extend the analysis of Example 12.2 to include simultaneous confidence intervals and ellipses.
a. Verify that the hypotheses for the three-factor factorial model may be written in matrix form as in (12.13).b. Use the general form for the numerator sum of squares and the matrix expression for
Consider the three-factor, cross-classification model with -a-3 and except 112 121 123 = #221 = Assume that the three-factor interaction constraints, (12.5), are satisfied. a = 2. 2 = 0.a. Determine
Describe the extension, to the three-factor model, of the parameterization in (11.45)
Describe the transformation, basic design, and parameter matrices for the three-factor, cross-classification model using the parameter definitions in (12.9).
Expand on the preliminary analysis of Example 12.1, shown in Table 12.3 by preparing interaction plots, developing simultaneous confidence intervals and simultaneous confidence ellipses based on the
Develop the algebraic expressions for the sums of squares and the associated expected mean squares in Table 12.1.
Showing 300-400 of 1264