
Question

Set #5, Econ 572

(1) Consider the regression model y = Xβ + ε, where E(ε|X) = 0, but instead of the traditional assumption E(εε′|X) = σ²Iₙ we assume E(εε′|X) = Ω, where Ω is some known n × n covariance matrix. Under this more general assumption, derive the conditional variance-covariance matrix of the OLS estimator. (You can condition on X, as we do in the lecture notes.) Hint: review the notes on the variance-covariance matrix derivation, particularly the second version of that derivation, which exploits Var(Ay|X) = A·Var(y|X)·A′. A correct derivation here is quite short.

(2) Suppose a vector y is regressed on a matrix of covariates X and the fitted values ŷ are obtained. These fitted values are then regressed on the original X matrix, as in ŷ = Xγ + ν.

(2a) What will be the value of γ̂ from this regression? (Prove this result.) Based upon this answer, what is the value of R² from this regression?

(2b) Now consider running a regression of the dependent variable y on the fitted values ŷ (the model contains no intercept, just a scalar parameter θ): y = ŷθ + ε. What is the value of θ̂ from this regression, and what will be the value of R²? (For the second part of this question, relate the value of R² to its value from the initial regression of y on X.)

(3) Consider the regression model y = Xβ + ε and recall the OLS estimator of the variance parameter:

σ̂² = (1/(n − k)) Σᵢ₌₁ⁿ ε̂ᵢ² = (1/(n − k)) Σᵢ₌₁ⁿ (yᵢ − xᵢβ̂)².

Now suppose you actually wish to obtain an estimate of the standard deviation σ rather than the variance σ². To this end you use σ̂ = √σ̂²; that is, you just take the square root of the variance parameter estimate. The question is: is this estimator of σ generally an unbiased estimator? Is it a consistent estimator? To help with part of this question, it is useful to recall Jensen's inequality, which states: if f(x) is a convex function of x (meaning its second derivative is positive), then f(E[x]) ≤ E[f(x)]; conversely, if f(x) is concave (meaning its second derivative is negative), then f(E[x]) ≥ E[f(x)]. In addition, review the plim results contained in your video lectures.
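The algebraic facts behind questions (1) and (2) can be checked numerically. The sketch below (not part of the original problem set; the design matrix, the covariance Ω, and all variable names are simulated assumptions for illustration) builds the sandwich-form conditional variance from Var(Ay|X) = A·Var(y|X)·A′, then verifies that regressing ŷ on X reproduces the original coefficients, and that y regressed on ŷ without an intercept gives a slope of 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3

# Simulated design matrix and a known n x n error covariance (assumptions).
X = rng.normal(size=(n, k))
A = rng.normal(size=(n, n))
Omega = A @ A.T + n * np.eye(n)          # symmetric positive definite

# (1) With E[ee'|X] = Omega and b = beta + (X'X)^{-1} X' e,
# Var(Ay|X) = A Var(y|X) A' gives the sandwich form below.
XtX_inv = np.linalg.inv(X.T @ X)
V = XtX_inv @ X.T @ Omega @ X @ XtX_inv
print(np.allclose(V, V.T))               # a valid covariance matrix is symmetric

# (2) Regress y on X, then regress the fitted values yhat on X again.
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.normal(size=n)
b = XtX_inv @ X.T @ y                    # OLS coefficients from y on X
yhat = X @ b
g = XtX_inv @ X.T @ yhat                 # coefficients from yhat on X
print(np.allclose(g, b))                 # (2a): identical coefficients, so R^2 = 1

# (2b) y on yhat with no intercept: theta = (yhat'yhat)^{-1} yhat'y.
theta = (yhat @ y) / (yhat @ yhat)
print(np.isclose(theta, 1.0))            # yhat'y = yhat'yhat since yhat'(y - yhat) = 0
```

The key identity in (2a) is X′ŷ = X′Xb (because X′(y − ŷ) = 0), and the same orthogonality drives the slope of 1 in (2b); the numerical checks simply confirm the algebra.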
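For question (3), a Monte Carlo sketch (again an illustration with assumed sample sizes, not the required analytical answer) makes Jensen's inequality visible: σ̂² is unbiased, but the square root is concave, so E[σ̂] = E[√σ̂²] ≤ √E[σ̂²] = σ, i.e. σ̂ is biased downward in finite samples, while consistency is preserved because plim g(xₙ) = g(plim xₙ) for continuous g:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, sigma = 20, 2, 2.0                 # small n so the bias is visible
X = rng.normal(size=(n, k))
reps = 20000

s_hats = np.empty(reps)
for r in range(reps):
    e = sigma * rng.normal(size=n)
    y = X @ np.ones(k) + e
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    s2 = resid @ resid / (n - k)         # unbiased estimator of sigma^2
    s_hats[r] = np.sqrt(s2)              # concave transform of s2

print(s_hats.mean() < sigma)             # downward bias, per Jensen's inequality
```

Repeating the exercise with a much larger n would show the average of σ̂ approaching σ, consistent with the plim argument, since the bias shrinks as n − k grows.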


