1.
Let cons denote consumption and inc denote income. Which of the following models satisfies Assumption MLR.3 (no perfect collinearity)?
a.
cons = β₀ + β₁inc + β₂(inc/1,000) + u
b.
log(cons) = β₀ + β₁log(inc) + β₂log(inc²) + u
c.
log(cons) = β₀ + β₁log(inc) + β₂[log(inc)]² + u
d.
All of the above.
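MLR.3 requires that no independent variable be an exact linear function of the others. In model (a), inc/1,000 is an exact linear function of inc, and in model (b), log(inc²) = 2·log(inc); only the quadratic in model (c) is not a linear function of log(inc). A minimal numpy sketch of this check, using hypothetical income draws, looks at the column rank of each regressor matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
inc = rng.uniform(20_000, 100_000, size=50)  # hypothetical income data

# Model (a): the third column is exactly the second divided by 1,000
X_a = np.column_stack([np.ones(50), inc, inc / 1000])
# Model (c): log(inc) and [log(inc)]^2 are correlated, but not perfectly collinear
X_c = np.column_stack([np.ones(50), np.log(inc), np.log(inc) ** 2])

print(np.linalg.matrix_rank(X_a))  # 2 < 3: perfect collinearity, MLR.3 fails
print(np.linalg.matrix_rank(X_c))  # 3: full column rank, MLR.3 holds
```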
2.
Consider the following multiple linear regression (MLR) model: y = β₀ + β₁x₁ + ... + βₖxₖ + u. Which of the following statements is correct?
a.
MLR.4 can fail if the functional relationship between the explained and explanatory variables is misspecified.
b.
MLR.4 can fail if an important explanatory factor that is correlated with x₁, x₂, ..., xₖ is omitted.
c.
MLR.4 can fail due to measurement error in an explanatory variable or when one or more explanatory variables is determined jointly with y.
d.
All of the above.
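One of the failure modes in option (c), classical measurement error in an explanatory variable, is easy to see in a simulation. The sketch below uses hypothetical parameter values: the true slope is 2, but regressing y on a noisy measurement of x attenuates the estimated slope toward zero:

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 200, 2000
slopes = np.empty(reps)
for r in range(reps):
    x = rng.normal(0, 1, n)
    y = 1.0 + 2.0 * x + rng.normal(0, 1, n)   # true slope is 2
    x_obs = x + rng.normal(0, 1, n)           # classical measurement error
    X = np.column_stack([np.ones(n), x_obs])
    slopes[r] = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Attenuation bias: the slope converges to 2 * var(x)/(var(x) + var(e)) = 1.0
print(round(slopes.mean(), 1))  # 1.0, not the true value 2.0
```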
3.
Consider the multiple linear regression model y = β₀ + β₁x₁ + ... + βₖxₖ + u (1). Which of the following statements is correct?
a.
Under Assumptions MLR.1 through MLR.4, E(β̂ⱼ) = βⱼ for any value of βⱼ, j = 1, ..., k: β̂ⱼ is an unbiased estimator of βⱼ.
b.
Unbiasedness is a feature of the sampling distributions of the β̂ⱼ's, which says nothing about the estimates obtained from a given sample. It is always possible that one random sample, used to estimate (1), gives point estimates far from the true population parameters βⱼ.
c.
Including one or more irrelevant variables in a multiple regression model, or overspecifying the model, does not affect the unbiasedness of the OLS estimators, but it can have undesirable effects on the variances of the OLS estimators.
d.
All of the above.
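The distinction in option (b) between the sampling distribution and a single sample's estimate is easy to illustrate by Monte Carlo. The sketch below uses hypothetical parameter values (β₀ = 1, β₁ = 2): the average of β̂₁ across many random samples is very close to β₁, even though individual estimates scatter around it:

```python
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1 = 1.0, 2.0   # assumed population parameters, for illustration
n, reps = 100, 2000

estimates = np.empty(reps)
for r in range(reps):
    x = rng.normal(0, 1, n)
    u = rng.normal(0, 1, n)          # MLR.4 holds by construction: E(u|x) = 0
    y = beta0 + beta1 * x + u
    X = np.column_stack([np.ones(n), x])
    estimates[r] = np.linalg.lstsq(X, y, rcond=None)[0][1]

print(round(estimates.mean(), 1))  # 2.0: unbiased on average across samples
print(round(estimates.std(), 1))   # 0.1: any single sample's estimate can miss
```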
4.
Under Assumptions MLR.1 through MLR.5, conditional on the sample values of the independent variables, var(β̂ⱼ) = σ² / [SSTⱼ(1 − Rⱼ²)], for j = 1, 2, ..., k, where SSTⱼ = Σᵢ (xᵢⱼ − x̄ⱼ)² is the total sample variation in xⱼ and Rⱼ² is the R-squared from regressing xⱼ on all other independent variables (including an intercept). Which of the following statements is correct?
a.
The larger the error variance σ², the larger are the sampling variances of the OLS estimators. σ² is a feature of the population; it has nothing to do with the sample size. Adding more explanatory variables to the model, when possible, reduces σ².
b.
The larger the total sample variation in xⱼ, the smaller is var(β̂ⱼ). Increasing the sample size increases SSTⱼ.
c.
As Rⱼ² increases to one, var(β̂ⱼ) gets larger and larger. In other words, a high degree of correlation between two or more independent variables (multicollinearity) can lead to larger variances for the OLS slope estimators.
d.
All of the above.
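The variance formula in the question stem can be checked numerically: var(β̂ⱼ) = σ² / [SSTⱼ(1 − Rⱼ²)] is algebraically identical to the j-th diagonal element of σ²(X′X)⁻¹. A sketch with hypothetical data, for the coefficient on x₁:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma2 = 200, 4.0                   # assumed error variance, for illustration
x1 = rng.normal(0, 1, n)
x2 = 0.6 * x1 + rng.normal(0, 1, n)    # x1 and x2 are correlated
X = np.column_stack([np.ones(n), x1, x2])

# Route 1: the formula sigma^2 / [SST_1 (1 - R_1^2)]
sst1 = np.sum((x1 - x1.mean()) ** 2)
Z = np.column_stack([np.ones(n), x2])  # regress x1 on the other regressors
fit = Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]
r2_1 = 1 - np.sum((x1 - fit) ** 2) / sst1
var_formula = sigma2 / (sst1 * (1 - r2_1))

# Route 2: element (1, 1) of sigma^2 * (X'X)^{-1}
var_matrix = sigma2 * np.linalg.inv(X.T @ X)[1, 1]

print(np.isclose(var_formula, var_matrix))  # True: the two routes agree
```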
5.
Consider the following true population model, which satisfies the Gauss-Markov assumptions: y = β₀ + β₁x₁ + β₂x₂ + u. When we regress y on x₁ alone, we obtain the simple regression line ỹ = β̃₀ + β̃₁x₁ (1). When we regress y on x₁ and x₂, we obtain the multiple regression line ŷ = β̂₀ + β̂₁x₁ + β̂₂x₂ (2). When β₂ ≠ 0, equation (1) excludes a relevant variable from the model, and this induces a bias in β̃₁ unless x₁ and x₂ are uncorrelated. On the other hand, when β₂ ≠ 0, β̂₁ in equation (2) is unbiased. When β₂ = 0, both β̃₁ and β̂₁ are unbiased. Conditioning on the values of x₁ and x₂ in the sample, we have var(β̂₁) = σ² / [SST₁(1 − R₁²)] (3), where SST₁ is the total sample variation in x₁ and R₁² is the R-squared from the regression of x₁ on x₂. For the simple regression model, var(β̃₁) = σ² / SST₁ (4). Which of the following statements is correct?
a.
If bias is used as the only criterion, β̂₁ is preferred to β̃₁.
b.
Comparing (3) and (4) shows that var(β̃₁) is always smaller than var(β̂₁), unless x₁ and x₂ are uncorrelated in the sample, in which case the two estimators β̃₁ and β̂₁ are the same.
c.
If β₂ = 0, then β̃₁ is preferred to β̂₁, as overspecification exacerbates the multicollinearity problem and leads to a less efficient estimator of β₁. In other words, a higher variance for the estimator of β₁ is the cost of including an irrelevant variable in the model.
d.
All of the above.
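Both halves of this trade-off can be reproduced by simulation. The sketch below uses hypothetical parameters (β₁ = 2, and corr(x₁, x₂) > 0): when β₂ = 3 ≠ 0, the short regression (1) is biased (its slope centers on β₁ + β₂·δ = 2 + 3·0.5 = 3.5, where δ is the slope from regressing x₂ on x₁); when β₂ = 0, both estimators are unbiased but the overspecified regression (2) has the larger variance:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(beta2, reps=2000, n=200):
    tilde = np.empty(reps)  # slope from regressing y on x1 only, eq. (1)
    hat = np.empty(reps)    # slope on x1 from regressing y on x1 and x2, eq. (2)
    for r in range(reps):
        x1 = rng.normal(0, 1, n)
        x2 = 0.5 * x1 + rng.normal(0, 1, n)   # x1 and x2 are correlated
        y = 1.0 + 2.0 * x1 + beta2 * x2 + rng.normal(0, 1, n)
        Xs = np.column_stack([np.ones(n), x1])
        Xl = np.column_stack([np.ones(n), x1, x2])
        tilde[r] = np.linalg.lstsq(Xs, y, rcond=None)[0][1]
        hat[r] = np.linalg.lstsq(Xl, y, rcond=None)[0][1]
    return tilde, hat

# beta2 != 0: the short regression is biased, the long regression is not
tilde, hat = simulate(beta2=3.0)
print(round(tilde.mean(), 1), round(hat.mean(), 1))  # 3.5 2.0

# beta2 = 0: both unbiased, but the irrelevant x2 inflates the variance
tilde0, hat0 = simulate(beta2=0.0)
print(tilde0.var() < hat0.var())  # True: the cost of overspecification
```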
6.
Consider the following multiple regression model: y = β₀ + β₁x₁ + ... + βₖxₖ + u. Let σ² be the variance of the error term u, conditional on the explanatory variables. The unbiased estimator of σ² is σ̂² = (Σ ûᵢ²) / (n − k − 1) = SSR / (n − k − 1). Which of the following statements is correct?
a.
The first order conditions for the OLS estimators impose k + 1 restrictions on the n OLS residuals; this means there are only n − (k + 1) degrees of freedom in the residuals, where n is the number of observations and k the number of independent variables.
b.
σ̂ denotes the standard error of the regression; it can either decrease or increase when another independent variable is added to a regression.
c.
se(β̂ⱼ) denotes the standard error of β̂ⱼ. It is equal to se(β̂ⱼ) = σ̂ / [SSTⱼ(1 − Rⱼ²)]^(1/2), which is not a valid estimator if the errors exhibit heteroskedasticity. It is also equal to se(β̂ⱼ) = σ̂ / [n^(1/2) sd(xⱼ)(1 − Rⱼ²)^(1/2)], where sd(xⱼ) is the sample standard deviation of xⱼ; note that the precision of β̂ⱼ increases as n increases.
d.
All of the above.
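The quantities in this question can be computed by hand, and the two expressions for se(β̂ⱼ) in option (c) can be checked against each other; they coincide because SSTⱼ = n·sd(xⱼ)² when sd uses the 1/n divisor. A sketch with hypothetical data-generating values:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 150, 2
x1 = rng.normal(0, 1, n)
x2 = 0.4 * x1 + rng.normal(0, 1, n)
y = 1.0 + 0.5 * x1 - 0.3 * x2 + rng.normal(0, 2, n)  # assumed coefficients

X = np.column_stack([np.ones(n), x1, x2])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat
sigma2_hat = np.sum(resid ** 2) / (n - k - 1)   # SSR / (n - k - 1)

# se(beta_1-hat) via SST_1 and R_1^2 ...
sst1 = np.sum((x1 - x1.mean()) ** 2)
Z = np.column_stack([np.ones(n), x2])           # regress x1 on the other regressor
r2_1 = 1 - np.sum((x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]) ** 2) / sst1
se_a = np.sqrt(sigma2_hat) / np.sqrt(sst1 * (1 - r2_1))

# ... and via n, sd(x_1), and R_1^2 (sd with the 1/n divisor)
sd1 = np.sqrt(np.mean((x1 - x1.mean()) ** 2))
se_b = np.sqrt(sigma2_hat) / (np.sqrt(n) * sd1 * np.sqrt(1 - r2_1))

print(np.isclose(se_a, se_b))  # True: SST_1 = n * sd(x_1)^2
```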
7.
Consider the following multiple regression model: y = β₀ + β₁x₁ + ... + βₖxₖ + u. Which of the following statements is correct?
a.
Under Assumptions MLR.1 through MLR.4, OLS is unbiased.
b.
Under Assumptions MLR.1 through MLR.5, the OLS estimators β̂ⱼ are the best linear unbiased estimators (BLUE) of the βⱼ's: this is the Gauss-Markov Theorem.
c.
Assumptions MLR.1 through MLR.5 are known as the Gauss-Markov assumptions (for cross-sectional analysis).
d.
All of the above.