Question
Regularization is added to gradient descent by incurring a cost from the solved weights during each iteration. Before you can implement gradient descent, you need a way to track your performance, which means computing J(θ). This cost was computed via a function called computeCost.m in hwk2. Copy computeCost.m from hwk2 to an m-file called computeCostReg.m. Update the cost function in computeCostReg.m from:

J(\theta) = \frac{1}{2n} \left[ \sum_{i=1}^{n} \left( h(\theta, x^{(i)}) - y^{(i)} \right)^2 \right]

to:

J(\theta) = \frac{1}{2n} \left[ \sum_{i=1}^{n} \left( h(\theta, x^{(i)}) - y^{(i)} \right)^2 + \lambda \sum_{j=1}^{D} \theta_j^2 \right]

Note we do not penalize our θ_0 weight. Test your code with the following MATLAB segment. Do not continue until your answer is 2.1740e+09. Then, show your computeCostReg.m.

clear; close all;

% Load Data
data = load('..\hwk2\ex1data2.txt');
X = data(:, 1:2);
y = data(:, 3);

% Scale features and set them to zero mean with std = 1
[Xnorm, mu, sigma] = featureNormalize(X); % reuse this function from hwk2

% Add intercept term to X
Xdata = [ones(length(X), 1) Xnorm];

% Init theta and lambda
theta = ((Xdata' * Xdata) \ Xdata') * y; % well.. this is the optimal solution
lambda = 1;

% Run Compute Cost
disp(computeCostReg(Xdata, y, theta, lambda))

Answer: (show computeCostReg.m)
Step by Step Solution
There are 3 steps involved.
Step: 1
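Below is a minimal sketch of computeCostReg.m, assuming the vectorized hypothesis h(θ, x) = Xθ used by computeCost.m in hwk2 and that MATLAB's 1-based theta(1) holds the unpenalized θ_0 weight; the signature computeCostReg(X, y, theta, lambda) is taken from the test segment in the question, while the body is an assumption, not the graded solution.

function J = computeCostReg(X, y, theta, lambda)
% COMPUTECOSTREG Regularized linear-regression cost (sketch).
% Assumes the vectorized hypothesis h(theta, x) = X*theta from hwk2.

n = length(y);                    % number of training examples

err = X * theta - y;              % h(theta, x^(i)) - y^(i) for every i

% Regularization penalty; theta(1) is theta_0 and is not penalized
penalty = lambda * sum(theta(2:end) .^ 2);

J = (1 / (2 * n)) * (sum(err .^ 2) + penalty);

end

Vectorizing with X * theta avoids an explicit loop over the n examples and mirrors the two summations in the updated J(θ) directly.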
Step: 2
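With computeCostReg.m saved on the MATLAB path, the test segment from the question drives it end to end; the decisive call is

disp(computeCostReg(Xdata, y, theta, lambda))

where theta is the closed-form normal-equation solution ((Xdata' * Xdata) \ Xdata') * y and lambda = 1, exactly as set up in the question.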
Step: 3
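If the penalty correctly skips θ_0, the displayed cost on the hwk2 data should match the checkpoint value 2.1740e+09 given in the question, and the listing of computeCostReg.m above is the requested answer.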