Linear Model Selection and Regularization
You use the glmnet package to perform lasso regression. parsnip does not have a dedicated function to create a lasso regression model specification. You need to use linear_reg() and set mixture = 1 to specify a lasso model. The mixture argument specifies the amount of each type of regularization: mixture = 0 specifies only ridge regularization and mixture = 1 specifies only lasso regularization. Setting mixture to a value between 0 and 1 lets us use both.
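A minimal illustration of the mixture argument (the object names and the penalty value 0.1 below are placeholders for illustration, not part of the assignment):

library(tidymodels)

# mixture = 0: pure ridge (L2) regularization
ridge_example <- linear_reg(penalty = 0.1, mixture = 0) %>%
  set_mode("regression") %>%
  set_engine("glmnet")

# mixture = 1: pure lasso (L1) regularization
lasso_example <- linear_reg(penalty = 0.1, mixture = 1) %>%
  set_mode("regression") %>%
  set_engine("glmnet")

# 0 < mixture < 1: elastic net, a blend of both penalties
elastic_example <- linear_reg(penalty = 0.1, mixture = 0.5) %>%
  set_mode("regression") %>%
  set_engine("glmnet")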
The following procedure will be very similar to what we saw in the ridge regression section. The preprocessing needed is the same, but let us write it out again.
# Run this code from the previous assignment to get you properly started.
library(tidymodels)
library(ISLR)

Hitters <- as_tibble(Hitters) %>%
  filter(!is.na(Salary))

Hitters_split <- initial_split(Hitters, strata = "Salary")

Hitters_train <- training(Hitters_split)
Hitters_test <- testing(Hitters_split)

Hitters_fold <- vfold_cv(Hitters_train, v = 10)  # fold count assumed; the original value was lost in formatting
Run the block of code below.
lasso_recipe <- 
  recipe(formula = Salary ~ ., data = Hitters_train) %>%
  step_novel(all_nominal_predictors()) %>%
  step_dummy(all_nominal_predictors()) %>%
  step_zv(all_predictors()) %>%
  step_normalize(all_predictors())
Next, finish the lasso regression workflow. Name the two outputs lasso_spec and lasso_workflow, respectively. For the lasso_spec output use the linear_reg(), set_mode() and set_engine() functions. For the lasso_workflow output use the add_recipe() and add_model() functions.
lasso_spec <- 
  linear_reg(penalty = tune(), mixture = 1) %>%  # mixture = 1 gives pure lasso regularization
  set_mode("regression") %>%
  set_engine("glmnet")

lasso_workflow <- workflow() %>%
  add_recipe(lasso_recipe) %>%
  add_model(lasso_spec)
While you are doing a different kind of regularization, you will still use the same penalty argument. I have picked a different range for the values of penalty since I know it will be a good range. In practice you would have to cast a wide net at first and then narrow in on the range of interest. Use the output penalty_grid. Use the function grid_regular(), giving penalty() a range and setting the number of levels.
# your code here
# penalty_grid
penalty_grid <- grid_regular(
  penalty(range = c(-2, 2)),  # range assumed; the original bounds were lost in formatting
  levels = 50                 # levels value assumed; levels is an argument of grid_regular(), not penalty()
)
The original attempt produced "Error in penalty(): unused argument (levels)" because levels was passed to penalty() rather than grid_regular(). penalty() only accepts a range (on the log10 scale); the number of grid points is controlled by the levels argument of grid_regular(), as in the corrected code above.
library(testthat)
# The autograder compares specific entries of penalty_grid$penalty;
# the expected values were lost in the original formatting.
# expect_equal(penalty_grid$penalty[...], ...)
# expect_equal(penalty_grid$penalty[...], ...)
# expect_equal(penalty_grid$penalty[...], ...)
You can use tune_grid() again. Use the output tune_res along with the function tune_grid(). Use autoplot() to plot your tune_res output. Your output should resemble the plot shown in the assignment.
# your code here
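A minimal sketch of this step, assuming the resamples come from the Hitters_fold object created above:

tune_res <- tune_grid(
  lasso_workflow,            # workflow with the tunable penalty
  resamples = Hitters_fold,  # cross-validation folds
  grid = penalty_grid        # candidate penalty values
)

autoplot(tune_res)  # plots the resampled metrics across penalty values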
Next, you should select the best value of penalty using select_best(). Your output variable here is best_penalty. Use "rsq" as the metric.
# your code here
# best_penalty
# your code here
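A sketch of the selection step, assuming the rsq metric was computed during tuning:

best_penalty <- select_best(tune_res, metric = "rsq")
best_penalty  # a one-row tibble holding the chosen penalty value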
You should now refit using the whole training data set. Your first output variable should be lasso_final, created with the function finalize_workflow(), and your second output variable should be lasso_final_fit, created with the fit() function.
# your code here
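A sketch of the refit, assuming the finalized workflow is fit on Hitters_train:

lasso_final <- finalize_workflow(lasso_workflow, best_penalty)  # plug the chosen penalty into the workflow
lasso_final_fit <- fit(lasso_final, data = Hitters_train)       # refit on the full training set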
Finalize this by calculating the rsq value for the lasso model. You will see that for this data ridge regression does better than lasso regression. Verify this using augment() and then the rsq() function. Store the output to the variable rsq_val.
# your code here
# rsq_val <- augment(...)
# your code here
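A sketch of the evaluation, assuming the fitted model is scored on the held-out test set with Salary as the truth column:

rsq_val <- augment(lasso_final_fit, new_data = Hitters_test) %>%
  rsq(truth = Salary, estimate = .pred)

rsq_val  # compare against the rsq obtained for the ridge model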