Question: Change the stochastic gradient descent algorithm of Figure 17.2 (page 739) that minimizes formula (17.1) so that it adjusts the parameters, including regularization, after a batch of examples. How does the complexity of this algorithm differ from that of the algorithm in Figure 17.2? Which one works better in practice? [Hint: Consider whether you need to regularize all of the parameters or only those used in the batch.]
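The batch variant the question asks for can be sketched as follows. This is a hypothetical illustration, not the book's pseudocode: Figure 17.2 and formula (17.1) are not reproduced here, so a plain L2-regularized linear model is assumed, with the function name `minibatch_sgd` and all hyperparameter defaults invented for the example. Gradients are accumulated over a batch and the parameters are updated once per batch; the regularization term is applied once per batch, scaled by the batch's share of the data, so the total penalty per epoch matches the per-example algorithm.

```python
import random

def minibatch_sgd(examples, num_features, eta=0.05, lam=0.01,
                  batch_size=5, epochs=200, seed=0):
    """Minibatch SGD with L2 regularization (illustrative sketch).

    examples: list of (x, y) pairs, x a list of feature values.
    Unlike per-example SGD, the parameters are updated once per
    batch, and the L2 penalty is applied once per batch, scaled
    by the batch's fraction of the data set.
    """
    rng = random.Random(seed)
    w = [0.0] * num_features
    n = len(examples)
    for _ in range(epochs):
        rng.shuffle(examples)
        for start in range(0, n, batch_size):
            batch = examples[start:start + batch_size]
            grad = [0.0] * num_features
            for x, y in batch:
                # Accumulate the squared-error gradient over the batch.
                err = sum(wj * xj for wj, xj in zip(w, x)) - y
                for j in range(num_features):
                    grad[j] += err * x[j]
            scale = len(batch) / n  # this batch's share of the data
            for j in range(num_features):
                grad[j] += lam * scale * w[j]  # regularize once per batch
                w[j] -= eta * grad[j] / len(batch)
    return w
```

Note the trade-off the question is probing: the cost of computing gradients per epoch is the same as in the per-example algorithm, but the batch version performs far fewer parameter updates (and fewer regularization steps), so each epoch is cheaper in update overhead while typically needing more epochs to converge.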