Questions and Answers of Linear State Space Systems
Verify the results in equations (6.18) and (6.24). Section 6.4
Extend equation (6.28) to the situation where there are identical cases and note the masking effect. Hint: Suppose that cases j and k are identical, express in terms of h and apply equations (6.27) and
Develop the following results associated with the Hat matrix: a. Show that H and H1 are symmetric and idempotent with ranks m + 1 and m, respectively. b. Use the partitioned form of the inverse to develop
Refer to the data from Exercise 4.7 after the transformation. a. Compute the eigenvectors and prepare scatter plots of the principal components. b. Compare the results of principal component
Refer to the data from Exercise 4.6 after the transformation. a. Compute the eigenvectors and prepare scatter-plots of the principal components. b. Compare the results of principal component
Refer to the survival data discussed in Chapter 4, Example 4.1, using LTIME as the response. a. Compute the eigenvectors and prepare scatter-plots of the principal components. b. Compare the results of
Using the data from Exercise 4.15, but ignoring x3, contrast that situation with the one in Example 5.1, noting the high correlation between x2 and x3 in each case. Is variable deletion suggested?
Using the example in Section 5.1.2, compute the prediction equations obtained by using ridge regression (c = 0.05) and by principal component regression deleting the variable associated with the
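The ridge computation this exercise asks for can be sketched in correlation form as β̂_ridge = (R + cI)⁻¹r, where R is the predictor correlation matrix and r the vector of predictor–response correlations. The data below are hypothetical stand-ins; the exercise uses the example of Section 5.1.2, which is not reproduced here.

```python
import numpy as np

# Ridge vs. ordinary least squares in correlation form, with a deliberately
# near-collinear third predictor. All data here are simulated, not the book's.
rng = np.random.default_rng(0)
n, p = 20, 3
X = rng.normal(size=(n, p))
X[:, 2] = X[:, 0] + 0.05 * rng.normal(size=n)   # near-collinear column
y = X @ np.array([1.0, 2.0, 0.5]) + rng.normal(size=n)

Z = (X - X.mean(0)) / X.std(0, ddof=1)          # standardized predictors
ys = (y - y.mean()) / y.std(ddof=1)             # standardized response
R = Z.T @ Z / (n - 1)                           # correlation matrix
r = Z.T @ ys / (n - 1)                          # correlations with y

c = 0.05
beta_ols = np.linalg.solve(R, r)                # ordinary least squares
beta_ridge = np.linalg.solve(R + c * np.eye(p), r)   # ridge, biasing constant c
print(beta_ols, beta_ridge)                     # ridge shrinks the coefficients
```

Principal component regression would instead drop the component of R with the smallest eigenvalue before solving.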
Compute the VIFs, eigenvalues and eigenvectors for Exercise 4.18 and comment on the relation between these quantities and the correlation among the regressors. Section 5.4
Compute the VIFs, eigenvalues and eigenvectors for Exercise 4.16 and comment on the relation between these quantities and the correlation among the regressors.
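A sketch of the quantities these exercises ask for: VIF_j is the j-th diagonal element of the inverse of the predictor correlation matrix, and small eigenvalues of that matrix flag collinearity. The data below are simulated, not the data of Exercise 4.16.

```python
import numpy as np

# Variance inflation factors and eigen-analysis of the regressor correlation
# matrix, on hypothetical data with one near-duplicate column.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))
X[:, 3] = 0.95 * X[:, 0] + 0.05 * rng.normal(size=30)   # near-duplicate of column 0

R = np.corrcoef(X, rowvar=False)        # correlation matrix of the regressors
vif = np.diag(np.linalg.inv(R))         # VIF_j = (R^{-1})_{jj}
eigenvalues, eigenvectors = np.linalg.eigh(R)
print(vif)          # columns 0 and 3 show large VIFs
print(eigenvalues)  # one eigenvalue near zero marks the collinearity
```

The eigenvector paired with the smallest eigenvalue points along the near-linear dependence among the regressors.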
Suppose that the data on a response y and three predictors x1, x2, and x3 are such that all predictors are equally correlated, Corr[xi, xj] = r. Transform to the new predictors =(1+2+3)(101+23-223) a.
In Appendix D, Table 12, we show data taken from incoming female cadets at one of the military academies. The data give body fat measurements and nine potential predictor variables.a. Develop a
In Section 5.3.2 we introduced the concept of orthogonal regression. This was defined as the relation between the standardized predictors defined by the eigenvector associated with the smallest
Refer to the pitprop data from Exercise 4.17. a. Compute the VIFs and eigenvalues. b. Determine the eigenvectors associated with the two smallest eigenvalues, and try to determine the nature of the
Refer to the cement data from Exercise 4.4. a. Compute the VIFs and eigenvalues and eigenvectors for the data. b. Use both the regression among the predictors and the eigenvector associated with the
Refer to the steel data discussed in Exercise 4.3. a. Compute the VIFs, eigenvalues and eigenvectors for this example, and check for collinearity. b. Prepare the scatter-plot matrix of the principal
Analyze the body fat data, discussed in Section 5.2, following the outline established in Chapter 4. In particular, prepare residual plots, q-q plots, and added-variable plots. Examine this
a. Write, in algebraic form, the relations between X, A, Z, and y as determined by the transformation P defined in Section 5.1.4. b. Determine the solution as defined by the transformed normal equation
The acceptance region for the F-statistic in Example 5.1 can be written as a region bounded by F(.05; 2, 7). a. Plot this region as a function of the parameters, and note that the estimator (1.118, −.282) lies outside of the region and
Verify the parameter estimates, test statistics and prediction intervals in Example 5.1.
Verify the relations on Cp following equation (4.62).
Verify the results in equations (4.51) and (4.52).
Gorman and Toman (1966) presented the data shown in correlation form in Appendix D, Table D.9 relating a response y to 10 predictors with N = 36 observations. a. Fit the linear model in terms of these
Jeffers (1967) presented data on the strength of N = 180 pitprops (STR), used in underground mines, as a function of 13 physical measurements. The data are shown in correlation form in Appendix D,
Oliver (1967) was interested in predicting the retail value of lamb carcasses from measurements that were obtained prior to final cutting. Observations were taken on N = 337 carcasses. The correlation
Mantel (1970) preferred backward elimination to forward selection. The following data illustrate a situation in which FS might fail to detect an optimal subset.a. Determine the correlation matrix,
Compute R², RMS, and Cp for each subset equation in Exercise 4.4. Determine the order of inclusion (exclusion) for the forward selection (backward elimination) methods. Examine the R², RMS, and Cp plots and
Compute R², RMS, and Cp for each subset equation in Exercise 4.3. Determine the order of inclusion (exclusion) for the forward selection (backward elimination) methods. Examine the R², RMS, and Cp
Construct added variable plots for each of the variables in Exercise 4.4 and discuss the implications. Section 4.5
Construct added-variable plots for each of the variables in Exercise 4.3 and discuss the implications.
Consider the following extensions of the added-variable concept.a. Suppose we fit a model with m+1 predictors. Define H, as in (4.17) and regress the residuals, z, on the two columns, (2)-(I-H) (+).
Verify the relations in equations (4.22) and (4.23).
Verify the relations given at the end of Section 4.4.1 between the full regression and the added-variable regression.
The data shown in Appendix D, Table 19 give the deforestation rate, y, the population growth rate, x1, and the per capita GNP, x2, for 50 countries. a. Fit the linear regression of y on x1 and x2. b. Compute the
The data shown in Appendix D, Table 18 give the sediment yield, y, runoff, x1, and precipitation, x2, for 19 streams in New Zealand. a. Fit the linear regression of y on x1 and x2. b. Compute the residuals
Montgomery and Peck (1992) present data on the gas mileage (MPG). displacement (DISP), horse power (HP), weight (WT), and type of transmission (TRAN) for 32 automobiles. The data are shown in
Draper and Smith (1981) analyze data on the amount of heat (HEAT) produced during hardening of cement as a function of the composition. Four variables (x1–x4) are measured as percent of the weight of the
A company produces a stainless steel product from sheets of stainless steel obtained from a supplier. Records indicate that the number of units (PROD) obtained from a sheet varies and seems to depend
Verify algebraically the relation in (4.8). By viewing y and ŷ as points in N-space, verify that this is an application of the Pythagorean theorem.
Describe the linear model for Example 4.1 in terms of standardized variables, and write the normal equations for this model using the correlations from Table 4.1. Transform the parameter estimates in
Weisberg (1985) gives the data, shown in Appendix D.17, that were collected by another physicist, Hooker, who conducted an experiment similar to that of Forbes. These data were collected at higher
In a study similar to that described in Exercise 3.4, the researcher recorded the batch size, x, and the time per unit, y, in hours for that batch, as shown in the following table: Data for Exercise 3.5
Neter, Wasserman, and Kutner (1989) give the following data relating the age (x) and plasma level of polyamine (y) for 25 healthy children (x = 0 denotes newborn). Fit the linear model, y = β0 + β1x + e, and
Use the results on the partitioned form of the inverse or the sweep method from Appendix A.1.10 to develop the statistic for the hypothesis, H0: γ = 0, in equation (3.32) for Atkinson's method. Section 3.5
Verify the variance stabilizing transformations in Table 3.1. Section 3.4
A shop repairs units in batches, and a study is made to examine the efficiency of the operation of using larger batches. In the data shown below, x denotes the number of units in the batch and y, the
Data given in Appendix D, Table 15 for 50 U.S. oil refineries, show the monthly totals of gallons of water used (WATER) and the number of barrels of crude oil processed (PROD). Also shown is the
Note that the results of Exercise 2.30 do not apply if we have an unequal number of replicates, say ni, at the input values. Noting that Var[ȳi] = σ²/ni, suggest an extension using weighted least squares
Develop the distribution of the weighted least squares estimator β̂ given in equation (3.7). Use this result to describe the confidence intervals for elements of β, confidence intervals for a mean
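Equation (3.7) is not reproduced in this listing; assuming the usual weighted (generalized) least squares setup with Var[y] = σ²V, the distributional result being asked for is the standard one:

```latex
\hat{\beta}_W = (X^{\top} V^{-1} X)^{-1} X^{\top} V^{-1} y,
\qquad
\hat{\beta}_W \sim N\!\left(\beta,\; \sigma^2 (X^{\top} V^{-1} X)^{-1}\right).
```

Confidence intervals for elements of β then follow from the diagonal entries of σ²(X⊤V⁻¹X)⁻¹, with σ² replaced by its weighted residual mean square estimate.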
A construction company wants to estimate the amount of dirt required to fill a large ravine. Horizontal (x) and vertical (y) coordinates of a typical cross-section of the ravine are given in the
The records for the men's 100-meter freestyle swim for the years 1954 through 2000 are shown in the following table.
Suppose that we make n observations at each of k different values of our predictor. The data are given by the pairs (xi, yij), for i = 1, . . . , k and j = 1, . . . , n. Let ȳi denote the mean of the observations at xi. a.
In a study of traffic problems in a major city, the researcher was interested in developing a relation between a congestion index (C) and the percentage of the population served (POP). Based on the
For the data shown in the following table, fit the model y = β0 + β1x + e and test for lack of fit. Display the results as in Table 2.9. Does a quadratic model offer an improvement over the linear model? How do
In Appendix D, Table D.4, we show the winning speeds for the Indianapolis 500 motor car race for the years 1911 through 1971. (Note that the race was not held during World Wars I and II.)a. Fit the
In Appendix D, Table D.3, we show data taken from records of the number of licensed vehicles and the number of fatalities in the United States for the years 1950 through 1979. Here NOV is the number
A study was made to investigate the relation between ingredients of cigarettes and the amount of carbon monoxide emitted. One cigarette was selected from each of 25 different types of cigarettes, and
In a study designed to develop an equation for predicting survival time of patients undergoing a liver surgery, 54 patients were randomly selected for analysis. Prior to the surgery, information was
Suppose that you wish to force the fitted equation to pass through the origin and fit the linear model in (2.95). a. Develop the expression for the estimate of β. b. For the following data compare the
Suppose that we want to fit a quadratic model with equally spaced inputs, x = 1, . . . , k. Verify that the functions of these predictors in (2.92) are orthogonal and that the regression sum of squares is
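The orthogonality claim can be checked numerically. Equation (2.92) is not shown in this listing, so the forms below, the centered linear term and the quadratic term orthogonalized against the constant, are an assumption about what it contains.

```python
import numpy as np

# For equally spaced inputs x = 1, ..., k, the centered linear term p1 and the
# "centered square" quadratic term p2 are mutually orthogonal and orthogonal
# to the constant column, by symmetry of p1 about zero.
k = 7
x = np.arange(1, k + 1, dtype=float)
p1 = x - x.mean()               # linear orthogonal polynomial
p2 = p1**2 - np.mean(p1**2)     # quadratic term, orthogonal to the constant
print(p1 @ p2)                  # 0: the odd symmetric sum vanishes
print(p1.sum(), p2.sum())       # both 0
```

Because the columns are orthogonal, the regression sum of squares splits into separate linear and quadratic pieces, (p1⊤y)²/(p1⊤p1) and (p2⊤y)²/(p2⊤p2).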
In the analysis of the quadratic model for the particle board data in Section 2.7.1, we noted that the coefficient on the linear term in the centered, second-order model was the same as in the linear
Develop the F-statistic for the hypothesis, H0: β2 = 0, and compare with the t-statistics in Tables 2.13 and 2.14.
Use a graphical approach to illustrate the correlation between x and x², and then do the same for (x − x̄) and (x − x̄)².
Relate the coefficients in the centered and uncentered quadratic model.
Using the particle board data, fit the quadratic model and confirm mar 75. Plot the fitted data and the observed data on the same plot.
Use Bartlett's test to check for variance homogeneity for the particle board data. Section 2.7
a. For the first-order autoregressive model defined by (2.77), verify that the errors have the covariance structure in (2.76). b. Verify the expected values of the numerator and denominator of (2.79)
In (2.71), verify that ps*D, is the square of the Euclidean distance from to D
Verify that E[RSS] = (N − 2)σ² by using the result E[y′Ay] = tr(A Var[y]) + E[y]′A E[y]. Section 2.5
If we test two independent hypotheses, each at level α, show that the probability of rejecting at least one, even when the null hypotheses are true, is given by 1 − (1 − α)². Justify the approximation of
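The arithmetic behind this exercise, generalized to k independent tests, can be checked directly; for small α the familywise rate 1 − (1 − α)^k is approximately kα, since (1 − α)^k ≈ 1 − kα to first order.

```python
# Familywise error rate for k independent tests, each at level alpha:
# exact value 1 - (1 - alpha)^k versus the first-order approximation k*alpha.
alpha, k = 0.05, 2
exact = 1 - (1 - alpha) ** k
approx = k * alpha
print(exact, approx)   # close to 0.0975 and 0.10
```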
Show that it is possible to have R = 0 and have a perfect fit to the data.
Show that RSSH > RSS in (2.44). Give an intuitive argument for this based on the least squares concept.
Show how to use a regression program to develop the test statistic for the hypothesis, H0: β = β0, in the form of (2.44) and confirm that it is equivalent to (2.42).
Show that the test statistics in (2.42) and (2.44) are equivalent for the case β0 = 0.
Write and solve the normal equations for Forbes data and confirm the AOV in Table 2.7.
Compute the correlation between y and ŷ, between ŷ and r, and between y and r.
For the simple linear regression model in centered form, determine the first and second moments of ȳ and β̂1. Section 2.4
Determine the moments of β̂ using (2.28) or (2.31). Show that ȳ and β̂1 are uncorrelated.
Verify that the algebraic expressions in equations (2.3)–(2.9) are equivalent to the matrix expressions in equations (2.11)–(2.20).
Write the design matrix for the model in (1.25) for Example 1.3 assuming four observations on glue one and five observations on glue two. a. Verify the interpretation of β0 and β1 given for that
For the simple linear regression model, suppose we have data (xi, yi), i = 1, . . . , 5. a. Determine the inner product x′x. b. Suppose we let wi = xi − x̄, where x̄ is the sample mean of the xi. Consider the model E[y] = α
In Definition 1.4 of the multivariate normal density, let the elements of V−1 be denoted by vij and write the expression in the exponent in algebraic notation using summation notation. Section 1.4
Let V = (vij) be an arbitrary covariance matrix of size three. Write V in the form of (1.17) using indicator matrices.
Suppose all variances are equal, Var[yi] = σ², and all covariances are equal, Cov[yi, yj] = c. Verify that the matrices V1 = I and V2 = (U − I) allow us to write the covariance matrix in the form of (1.17).
Let yi, i = 1, 2, 3, be random variables with means µi and variances and covariances vij. Write the matrix M as described in (1.15) and show that E[M] = V = (vij).
Write the design matrices for the mean structures described by equations (1.6), (1.7) and (1.8) for the data t = (t1, t2, t3, t4) and s = (s1, s2, s3, s4).
14.5 This exercise illustrates the effect of the choice of the Wishart prior for the unstructured covariance matrix using the stroke recovery data. a. Fit a simple linear regression model to the
14.4 a. Fit the random intercepts and slopes model to the stroke recovery data in WinBUGS. Create a scatter plot of the estimated mean random slopes against the random intercepts. Calculate the
14.3 a. Run the Weibull model for the remission times survival data, but this time monitor the DIC. b. Create an exponential model for the remission times in WinBUGS. Monitor the DIC. Is the exponential
14.2 Prove that the latent variable model (14.2) is equal to the proportional odds model (8.17).
14.1 Confirm that setting φ1 = 1 in model (14.1) gives model (8.11).
13.4 This exercise is about calculating the deviance information criterion (DIC). The two key equations are (13.5) and (13.6) for the number of parameters (pD) and DIC, respectively.
13.3 This exercise is an introduction to the R2WinBUGS package that runs WinBUGS from R (Sturtz et al. 2005). R2WinBUGS is an R add-on package which needs to be installed in R. The advantage of using
13.2 The purpose of this exercise is to create a chain of Metropolis–Hastings samples, for a likelihood, P(θ|y), that is a standard Normal, using symmetric and asymmetric proposal densities. A new
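A minimal version of the sampler exercise 13.2 describes can be sketched as below: a Metropolis–Hastings chain targeting a standard Normal with a symmetric random-walk proposal (the asymmetric variant the exercise also asks for is not shown; the proposal scale 0.5 and chain length are arbitrary choices, not from the book).

```python
import numpy as np

# Metropolis-Hastings chain targeting N(0, 1) with a symmetric random-walk
# proposal. With a symmetric proposal the proposal density cancels out of
# the acceptance ratio, leaving the plain Metropolis rule.
rng = np.random.default_rng(42)
n_samples = 20000
theta = 0.0
chain = np.empty(n_samples)
for i in range(n_samples):
    proposal = theta + rng.normal(scale=0.5)
    # log of the target-density ratio for a N(0, 1) target
    log_alpha = 0.5 * theta**2 - 0.5 * proposal**2
    if np.log(rng.uniform()) < log_alpha:
        theta = proposal          # accept the move
    chain[i] = theta              # otherwise the chain repeats its state
print(chain.mean(), chain.std())  # roughly 0 and 1
```

With an asymmetric proposal, the log ratio would gain the Hastings correction term log q(θ|proposal) − log q(proposal|θ).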
13.1 Reconsider the example on Schistosoma japonicum from the previous chapter. In Table 12.1 the posterior probability for H0 was calculated using an equally spaced set of values for θ. Recalculate
12.4 Reconsider Example 12.2.3 on overdoses among released prisoners. You may find the First Bayes software useful for answering these questions.a. Use an argument based on α −1 previous successes
12.3 Reconsider Example 12.2.2 about the cancer clinical trial. The 11 specialists taking part in the trial had an enthusiastic prior opinion that the median expected improvement in survival was 10%,
12.2 Show that the posterior distribution using a Normally distributed prior N(µ0, σ0²) and Normally distributed likelihood N(µl, σl²) is also Normally distributed with mean (µ0σl² + µlσ0²)/(σ0² + σl²).
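The Normal–Normal conjugacy result in exercise 12.2 follows from completing the square in the exponent; a sketch of the standard calculation, in the exercise's notation:

```latex
p(\theta \mid y) \propto
\exp\!\left(-\frac{(\theta-\mu_0)^2}{2\sigma_0^2}\right)
\exp\!\left(-\frac{(\theta-\mu_l)^2}{2\sigma_l^2}\right)
\propto
\exp\!\left(-\frac{(\theta-\mu_p)^2}{2\sigma_p^2}\right),
\qquad
\mu_p = \frac{\mu_0\sigma_l^2 + \mu_l\sigma_0^2}{\sigma_0^2+\sigma_l^2},
\quad
\sigma_p^2 = \frac{\sigma_0^2\sigma_l^2}{\sigma_0^2+\sigma_l^2}.
```

Equivalently, the posterior precision 1/σp² is the sum of the prior and likelihood precisions, and µp is the precision-weighted average of µ0 and µl.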
12.1 Reconsider Example 12.1.4 on Schistosoma japonicum.a. Using Table 12.1 calculate the posterior probability for H1 for the following priors and observed data:
11.3 Data on the ears or eyes of subjects are a classical example of clustering—the ears or eyes of the same subject are unlikely to be independent. The data in Table 11.10 are the responses to two