Questions and Answers of Linear State Space Systems
Applying the SS decomposition with the projection matrix for the null model (Section 2.3.1), use Cochran’s theorem to show that for y1,…, yn independent from N(μ, σ²), ȳ and s² are
Suppose z = x + y where z ∼ χ²p and x ∼ χ²q. Show how to find the distribution of y.
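As an illustrative aside (my addition, not part of the exercise set), the standard moment-generating-function argument, assuming x and y are independent as the construction requires:

```latex
M_z(t) = M_x(t)\,M_y(t)
\;\Longrightarrow\;
M_y(t) = \frac{(1-2t)^{-p/2}}{(1-2t)^{-q/2}} = (1-2t)^{-(p-q)/2},
\qquad t < \tfrac12,
```

which is the MGF of a chi-squared distribution with p − q degrees of freedom.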
If T has a t distribution with df = p, then using the construction of t and F random variables, explain why T² has the F distribution with df1 = 1 and df2 = p.
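A sketch of the construction referred to (my addition): with Z ∼ N(0, 1) independent of W ∼ χ²p,

```latex
T = \frac{Z}{\sqrt{W/p}}
\quad\Longrightarrow\quad
T^2 = \frac{Z^2/1}{W/p} = \frac{\chi^2_1/1}{\chi^2_p/p} \sim F_{1,p},
```

since Z² ∼ χ²1 and a ratio of independent chi-squareds divided by their df is F-distributed.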
False discovery rate: For surveys of FDR methods and issues in large-scale multiple hypothesis testing, see Benjamini (2010), Dudoit et al. (2003), and Farcomeni (2008). EXERCISES 3.1 Suppose y ∼
Boole, Bonferroni, Tukey, Scheffé: Seneta (1992) surveyed probability inequalities presented by Boole and Bonferroni and related results of Fréchet. For an overview of Tukey’s contributions to
General linear hypothesis: For further details about tests for the general linear hypothesis and in particular for one-way and two-way layouts, see Lehmann and Romano (2005, Chapter 7) and Scheffé
Fisher and ANOVA: Application of ANOVA was stimulated by the 1925 publication of R. A. Fisher’s classic text, Statistical Methods for Research Workers. Later contributions include Scheffé (1959)
Independent normal quadratic forms: The implication of Cochran’s theorem that {yTPiy} are independent when PiPj = 0 extends to this result (Searle 1997, Chapter 2): When y ∼ N(μ, V), yTAy and
Cochran’s theorem: Results on quadratic forms in normal variates were shown by the Scottish statistician William Cochran in 1934 when he was a 24-year-old graduate student at the University of
In later chapters, we use functions in the useful R package, VGAM. In that package, the venice data set contains annual data between 1931 and 1981 on the annual maximum sea level (variable r1) in
Refer to the anorexia study in Exercise 1.24. For the model fitted there, interpret the output relating to predictive power, and check the model using residuals and influence measures. Summarize your
The horseshoe crab dataset Crabs3.dat at www.stat.ufl.edu/~aa/glm/data collects several variables for female horseshoe crabs that have males attached during mating, over several years at Seahorse
Download from the text website the data file Crabs.dat introduced in Section 1.5.1. Fit the linear model with both weight and color as explanatory variables for the number of satellites for each
A data set shown partly in Table 2.4 and fully available in the Optics.dat file at the text website is taken from a math education graduate student research project. For the optics module in a high
Exercise 1.21 concerned a study comparing forced expiratory volume (y = fev1 in the data file FEV.dat at the text website) for three drugs, adjusting for a baseline measurement. For the R output
Write a simple program to simulate data so that when you plot residuals against x after fitting the bivariate linear model E(yi) = β0 + β1xi, the plot shows inadequacy of (a) the linear
In some applications, such as regressing annual income on the number of years of education, the variance of y tends to be larger at higher values of x. Consider the model E(yi) = βxi, assuming
The Gauss–Markov theorem shows the best way to form a linear unbiased estimator in a linear model. Are unbiased estimators always sensible? Consider a sequence of independent Bernoulli trials with
A genetic association study considers a large number of explanatory variables, with nearly all expected to have no effect or a very minor effect on the response. An alternative to the least squares
Extend results in Section 2.3.4 to the r × c factorial with n observations per cell. a. Express the orthogonal decomposition of yijk to include main effects, interaction, and residual error. b. Show
Section 2.4.5 considered the “main effects” model for a balanced 2×2 layout, showing there is orthogonality between each pair of parameters when we constrain ∑i βi = ∑j γj = 0. a. If
In the model for the balanced one-way layout, E(yij) = β0 + βi with identical ni, show that {βi} are orthogonal with β0 if we impose the constraint ∑i βi = 0.
For the two-way r × c layout with one observation per cell, find the hat matrix.
Consider the main-effects linear model for the two-way layout with one observation per cell. Section 2.3.4 stated the projection matrix Pr that generates the treatment means. Find the projection
From the previous exercise, setting β0 = 0 results in {β̂i = ȳi}. Explain why imposing only this constraint is inadequate for models with multiple factors, and a constraint such as β1 = 0
Refer to the previous exercise. Conduct a similar analysis, but making parameters identifiable by setting β0 = 0. Specify X and find PX and yT(I − PX)y.
In studying the model for the one-way layout in Section 2.3.2, we found the projection matrices and sums of squares and constructed the ANOVA table. a. We did the analysis for a non-full-rank model
a. Give an example of actual variables y, x1, x2 for which you would expect β1 ≠ 0 in the model E(yi) = β0 + β1xi1 but β1 ≈ 0 in the model E(yi) = β0 + β1xi1 + β2xi2 (e.g.,
Consider the leverages for a linear model with full-rank model matrix and p parameters. a. Prove that the leverages fall between 0 and 1 and have a mean of p∕n. b. Show how expression (2.10) for hii
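As an illustrative aside (my addition, not part of the exercise set), part (a) can be checked numerically; the random design matrix below is hypothetical, and any full-rank X would do:

```python
import numpy as np

# Hypothetical full-rank design: intercept plus 3 random covariates.
rng = np.random.default_rng(0)
n, p = 50, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

# Hat matrix H = X (X^T X)^{-1} X^T; its diagonal holds the leverages h_ii.
H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)

assert h.min() >= 0 and h.max() <= 1   # each leverage lies in [0, 1]
assert np.isclose(h.mean(), p / n)     # mean leverage equals p/n, since trace(H) = p
```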
Show that an observation in a one-way layout has the maximum possible leverage if it is the only observation for its group.
Derive the hat matrix for the centered-data formulation of the linear model with a single explanatory variable. Explain what factors cause an observation to have a relatively large leverage.
In Section 2.5.1 we noted that for linear models containing an intercept term, corr(μ̂, e) = 0, and plotting e against μ̂ helps detect violations of model assumptions. However, it is not
For a linear model with p explanatory variables, explain why sample multiple correlation R = 0 is equivalent to sample corr(y, x∗j) = 0 for j = 1,…, p.
Section 1.4.2 stated “When X has full rank, β is identifiable, and then all linear combinations ℓTβ are estimable.” Find a such that E(aTy) = ℓTβ for all β.
Verify that the n × n identity matrix I is a projection matrix, and describe the linear model to which it corresponds.
In complete contrast to the null model is the saturated model, E(yi) = βi, i = 1,…, n, which has a separate parameter for each observation. For this model: a. Specify X, the model space C(X), and
In Section 2.3.1 we showed the sum of squares decomposition for the null model E(yi) = β0, i = 1,…, n. Suppose you have n = 2 observations. a. Specify the model space C(X) and its orthogonal
Using the normal equations for a linear model, show that SSE decomposes into (y − Xβ̂)T(y − Xβ̂) = yTy − β̂TXTy. Thus, for nested M1 and M0, explain why SSR(M1 ∣ M0) = β̂1TXT
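A one-line sketch of the first identity via the normal equations XTXβ̂ = XTy (my addition, not from the source):

```latex
(y - X\hat\beta)^{\mathsf T}(y - X\hat\beta)
= y^{\mathsf T}y - 2\hat\beta^{\mathsf T}X^{\mathsf T}y
  + \hat\beta^{\mathsf T}X^{\mathsf T}X\hat\beta
= y^{\mathsf T}y - \hat\beta^{\mathsf T}X^{\mathsf T}y,
```

where the last step substitutes the normal equations into the quadratic term.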
Suppose that all the parameters in a linear model are orthogonal (Section 2.2.4). a. When the model contains an intercept term, show that orthogonality implies that each column in X after the first
Two vectors that are orthogonal or that have zero correlation are linearly independent. However, orthogonal vectors need not be uncorrelated, and uncorrelated vectors need not be orthogonal. a. Show
In R3, let W be the vector subspace spanned by (1, 0, 0), that is, the “x-axis” in three-dimensional space. Specify its orthogonal complement. For any y in R3, show its orthogonal decomposition y
When X has less than full rank and we use a generalized inverse to estimate β, explain why the space of possible least squares solutions β̂ does not form a vector space. (For a solution,
When X does not have full rank, let’s see why PX = X(XTX)−XT is invariant to the choice of generalized inverse. Let G and H be two generalized inverses of XTX. For an arbitrary v ∈ Rn, let v =
Denote the hat matrix by P0 for the null model and H for any linear model that contains an intercept term. Explain why P0H = HP0 = P0. Show this implies that each row and each column of H sums to 1.
For a linear model with full rank X and projection matrix PX, show that PXX = X and that C(PX) = C(X).
From Exercise 1.17, if A is nonsingular and X∗ = XA (such as in using a different parameterization for a factor), then C(X∗) = C(X). Show that the linear models with the model matrices X and X∗
For a full-rank model matrix X, show that rank(H) = rank(X), where H = X(XTX)−1XT.
Prove that I − (1∕n)1n1nT is symmetric and idempotent (i.e., a projection matrix), and identify the vector to which it projects an arbitrary y.
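A quick numerical check of this exercise’s claims (my addition; the vector y below is arbitrary):

```python
import numpy as np

n = 6
P = np.eye(n) - np.ones((n, n)) / n   # I - (1/n) 1 1^T, the centering matrix

assert np.allclose(P, P.T)            # symmetric
assert np.allclose(P @ P, P)          # idempotent, hence a projection matrix

y = np.arange(1.0, n + 1)
assert np.allclose(P @ y, y - y.mean())  # projects y to its deviations from the mean
```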
For a projection matrix P, for any y in Rn show that Py and y − Py are orthogonal vectors; that is, the projection is an orthogonal projection.
In an ordinary linear model with two explanatory variables x1 and x2 having sample corr(x∗1, x∗2) > 0, show that the estimated corr(β̂1, β̂2) < 0.
By the QR decomposition, X can be decomposed as X = QR, where Q consists of the first p columns of an n × n orthogonal matrix and R is a p × p upper triangular matrix. Show that the least squares
Refer to the analysis of covariance model μi = β0 + β1xi1 + β2xi2 for quantitative x1 and binary x2 for two groups, with xi2 = 0 for group 1 and xi2 = 1 for group 2. Denote the sample
For the model in Section 2.3.4 for the two-way layout, construct a full-rank model matrix. Show that the normal equations imply that the marginal row and column sample totals for y equal the row and
In the linear model E(yi) = β0 + β1xi, consider the fitted line that minimizes the sum of squared perpendicular distances from the points to the line. Is this fit invariant to the units of
In the linear model E(yi) = β0 + β1xi, suppose that instead of observing xi we observe x∗i = xi + ui, where ui is independent of xi for all i and var(ui) = σ²u. Analyze the expected
Consider the least squares fit of the linear model E(yi) = β0 + β1xi. a. Show that β̂1 = [∑i(xi − x̄)(yi − ȳ)]∕[∑i(xi − x̄)²]. b. Derive var(β̂1). State the estimated
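For part (a), a simulated sanity check (my addition; the data-generating values 2.0 and 3.0 are arbitrary): the closed-form slope matches the least squares solution.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=40)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=40)

# Closed-form slope from part (a)
beta1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

# Compare with the least squares solution for [intercept, slope]
X = np.column_stack([np.ones_like(x), x])
coef = np.linalg.lstsq(X, y, rcond=None)[0]
assert np.isclose(beta1, coef[1])
```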
In the linear model y = Xβ + ε, suppose εi has the Laplace density, f(ε) = (1∕2b) exp(−|ε|∕b). Show that the ML estimate minimizes ∑i |yi − μi|.
For independent observations y1,…, yn from a probability distribution with mean μ, show that the least squares estimate of μ is ȳ.
For 72 young girls suffering from anorexia, the Anorexia.dat file at the text website shows their weights before and after an experimental period. Table 1.5 shows the format of the data. The girls
Another horseshoe crab dataset (Crabs2.dat at www.stat.ufl.edu/~aa/glm/data) comes from a study of factors that affect sperm traits of male crabs. A response variable, SpermTotal, is measured as the
Refer to the analyses in Section 1.5.3 for the horseshoe crab satellites. a. With color alone as a predictor, why are standard errors much smaller for a Poisson model than for a normal model? Out of
Littell et al. (2000) described a pharmaceutical clinical trial in which 24 patients were randomly assigned to each of three treatment groups (drug A, drug B, placebo) and compared on a measure of
Show the first five rows of the model matrix for (a) the linear model for the horseshoe crabs in Section 1.5.2, (b) the model for a one-way layout in Section 1.5.3, (c) the model containing both
Consider the analysis of covariance model without interaction, denoted by 1 + X + A. a. Write the formula for the model in such a way that the parameters are not identifiable. Show the corresponding
For the linear model for the one-way layout, Section 1.4.1 showed the model matrix that makes parameters identifiable by setting β1 = 0. Call this model matrix X1. a. Suppose we instead obtain
If A is a nonsingular matrix, show that C(X) = C(XA). (If two full-rank model matrices correspond to equivalent models, then one model matrix is the other multiplied by a nonsingular matrix.)
Explain why the vector space of p × 1 vectors ℓ such that ℓTβ is estimable is C(XT).
Refer to Exercise 1.12. Now suppose r = 2 and c = 4, but observations for the first two levels of B occur only at the first level of A, and observations for the last two levels of B occur only at the
For the model in the previous exercise with constraints β1 = γ1 = 0, generalize the model by adding an interaction term δij. a. Show the new full-rank model matrix. Specify the constraints
Consider the model for the two-way layout shown in the previous exercise. Suppose r = 2, c = 3, and n = 2. a. Show the form of a full-rank model matrix X and corresponding parameter vector β for the
Consider the model for the two-way layout for qualitative factors A and B, E(yijk) = β0 + βi + γj, for i = 1,…,r, j = 1,…,c, and k = 1,…, n. This model is balanced, having an equal
Show the form of Xβ for the linear model for the one-way layout, E(yij) = β0 + βi, using a full-rank model matrix X by employing the constraint ∑i βi = 0 to make parameters identifiable.
GLMs normally use a hierarchical structure by which the presence of a higher-order term implies also including the lower-order terms. Explain why this is sensible, by showing that (a) a model that
For the normal linear model, explain why the expression yi = ∑j βjxij + εi, j = 1,…, p, with εi ∼ N(0, σ²) is equivalent to yi ∼ N(∑j βjxij, σ²).
A model M has model matrix X. A simpler model M0 results from removing the final term in M, and hence has model matrix X0 that deletes the final column from X. From the definition of a column space,
For any linear model μ = Xβ, is the origin 0 in the model space C(X)? Why or why not?
When X has full rank p, explain why the null space of X consists only of the 0 vector.
Suppose you standardize the response and explanatory variables before fitting a linear model (i.e., subtract the means and divide by the standard deviations). Explain how to interpret the resulting
Extend the model in Section 1.2.1 relating income to racial–ethnic status to include education and interaction explanatory terms. Explain how to interpret parameters when software constructs the
What do you think are the advantages and disadvantages of treating an ordinal explanatory variable as (a) quantitative, (b) qualitative?
Link function of a GLM: a. Describe the purpose of the link function g. b. The identity link is the standard one with normal responses but is not often used with binary or count responses. Why do you
Suppose that yi has a N(μi, σ²) distribution, i = 1,…, n. Formulate the normal linear model as a special case of a GLM, specifying the random component, linear predictor, and link function.
Reanalyze the auto accident data of Example 4.8.1 without treating any of the factors as a response factor.
Reanalyze the abortion attitude data of Exercise 4.8.4 without treating any of the factors as a response.
Using our discussion of graphical models and collapsibility in Chapter 5, argue that when the true model is [123][24][456], the test of marginal association for the u123(ijk)’s does not ignore any
Reanalyze the Intelligence versus Clothing table of Exercise 2.6.3 using the methods of Section 1. Note that this ignores the potentially complicating factor Standard and the complex sampling scheme.
For the model log(mij) = u + u1(i) + u2(j) + γ1ij + γ2i²j, find the log odds ratio log[mij mi+1,j+1 ∕ (mi+1,j mi,j+1)] in terms of the model parameters.
Duncan, Schuman, and Duncan (1973) and Duncan and McRae (1979) present data on evaluations made in 1959 and 1971 of the performance of radio and TV networks. The data are given in Table 7.5. Use the
Assuming the use of consecutive integer scores, find the log odds ratios log[mijk mi+1,j+1,k ∕ (mi+1,j,k mi,j+1,k)]. TABLE 7.5. Radio and Television Network Performance. Respondent’s Performance of Networks
Use the methods of Section 3 to analyze the Intelligence– Clothing table of Exercise 2.6.3.
Brown (1980) presents data that are reproduced in Table 8.5, on a cross-classification of 53 prostate cancer patients. The factors are acid phosphatase level in the blood serum, age, stage, grade,
Extend your analysis of the Berkeley graduate admissions data (cf. Exercise 3.8.4) by incorporating the method of partitioning polytomous factors from Example 8.1.2.
Partitioning Two-Way Tables. Lancaster (1949) and Irwin (1949) present a method of partitioning tables that was used in Exercise 2.7.4. We now establish the validity of this method. Consider a
The Bradley-Terry Model. “Let’s suppose, for a moment, that you have just been married and that you are given a choice of having, during your entire lifetime, either x or y children. Which would
Show that for the binomial model of Section 1, r(θ) = log(1 + e^θ) and that ṙ(θ) = p and r̈(θ) = p(1 − p).
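As a numeric aside (my addition), the claimed derivatives can be checked by finite differences at an arbitrary θ, with p = e^θ/(1 + e^θ):

```python
import numpy as np

r = lambda t: np.log1p(np.exp(t))   # r(theta) = log(1 + e^theta)
theta, h = 0.7, 1e-5
p = np.exp(theta) / (1 + np.exp(theta))

r1 = (r(theta + h) - r(theta - h)) / (2 * h)              # approximates r'(theta)
r2 = (r(theta + h) - 2 * r(theta) + r(theta - h)) / h**2  # approximates r''(theta)

assert np.isclose(r1, p, atol=1e-8)            # r'(theta) = p
assert np.isclose(r2, p * (1 - p), atol=1e-4)  # r''(theta) = p(1 - p)
```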
For the gamma model of Section 1, use the definition of θ and r(θ) to show that the mean and variance are as given.
Show that the likelihood equations (9.2.2) have the form Qj (β)/φ = 0, j = 1,...,p.
Show that if f(yi|θ, φ; w) from (9.1.1) is the common density of independent observations yi, i = 1,…,n, then ∑i yi has a density f(·|θ∗, φ∗, w∗) for some θ∗, φ∗, and w∗.
Let Y = (y1,…, yn)T. Show that a generalized linear model with canonical link has XTY as a sufficient statistic.
Using the definitions of this chapter, find the Pearson statistic (defined in Section 3) for Poisson and binomial regression in terms of the yi’s and mi’s. Show that these are identical to the
Showing 700–800 of 1264