Question:
You have a sample of size \(n=1\) with data \(y_{1}=2\) and \(x_{1}=1\). You are interested in the value of \(\beta\) in the regression \(Y=X \beta+u\). (Note there is no intercept.)
a. Plot the sum of squared residuals \(\left(y_{1}-b x_{1}\right)^{2}\) as a function of \(b\).
b. Show that the least squares estimate of \(\beta\) is \(\hat{\beta}^{O L S}=2\).
c. Using \(\lambda_{\text {Lasso }}=1\), plot the Lasso penalty term \(\lambda_{\text {Lasso }}|b|\) as a function of \(b\).
d. Using \(\lambda_{\text {Lasso }}=1\), plot the Lasso penalized sum of squared residuals \(\left(y_{1}-b x_{1}\right)^{2}+\lambda_{\text {Lasso }}|b|\).
e. Find the value of \(\hat{\beta}^{\text {Lasso }}\).
f. Using \(\lambda_{\text {Lasso }}=0.5\), repeat (c) and (d). Find the value of \(\hat{\beta}^{\text {Lasso }}\).
g. Using \(\lambda_{\text {Lasso }}=5\), repeat (c) and (d). Find the value of \(\hat{\beta}^{\text {Lasso }}\).
h. Use the graphs that you produced in (a)-(d) for the various values of \(\lambda_{\text {Lasso }}\) to explain why a larger value of \(\lambda_{\text {Lasso }}\) results in more shrinkage of the OLS estimate.
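The question can also be explored numerically. The sketch below is an illustration only, not the step-by-step answer; it assumes Python with NumPy and Matplotlib. It plots the penalized sum of squared residuals \(\left(y_{1}-b x_{1}\right)^{2}+\lambda_{\text {Lasso }}|b|\) over a grid of \(b\) values for \(\lambda_{\text {Lasso }} \in\{0,0.5,1,5\}\) (with \(\lambda_{\text {Lasso }}=0\) giving the unpenalized OLS case) and reports the grid minimizer for each curve, so the shrinkage pattern asked about in (h) can be seen directly.

```python
# Minimal numerical sketch (illustrative, not the textbook's worked answer).
# Single observation: y1 = 2, x1 = 1, no intercept.
import numpy as np
import matplotlib.pyplot as plt

y1, x1 = 2.0, 1.0
b = np.linspace(-1, 4, 2001)  # grid of candidate slope values

def lasso_objective(b, lam):
    """Penalized sum of squared residuals: (y1 - b*x1)^2 + lam*|b|."""
    return (y1 - b * x1) ** 2 + lam * np.abs(b)

fig, ax = plt.subplots()
for lam in [0.0, 0.5, 1.0, 5.0]:
    obj = lasso_objective(b, lam)
    b_hat = b[np.argmin(obj)]  # grid minimizer, approximates beta_hat
    ax.plot(b, obj, label=f"lambda = {lam}, minimizer = {b_hat:.2f}")

ax.set_xlabel("b")
ax.set_ylabel("penalized sum of squared residuals")
ax.legend()
plt.show()
```

Because the penalty \(\lambda_{\text {Lasso }}|b|\) has a kink at \(b=0\), the minimizer of each curve sits at or below the OLS minimizer of the unpenalized curve, which is the shrinkage pattern part (h) asks you to explain from the graphs.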
Step by Step Answer: