Question:
Change the stochastic gradient descent algorithm of Figure 17.2 (page 739) that minimizes formula (17.1) so it adjusts the parameters, including regularization, after a batch of examples. How does the complexity of this algorithm differ from the algorithm of Figure 17.2? Which one works better in practice? [Hint: Think about whether you need to regularize all of the parameters or just those used in the batch.]
Step by Step Answer:
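One way to approach the exercise is to sketch the batched update directly. The following is a minimal illustration, not the book's Figure 17.2 pseudocode: it runs minibatch SGD on a linear model with squared error and L2 regularization, applying the gradient step and the regularization step once per batch rather than once per example. All names (`X`, `y`, `w`, `eta`, `lam`, `batched_sgd`) are illustrative assumptions, not identifiers from the book.

```python
import numpy as np

def batched_sgd(X, y, batch_size=32, eta=0.01, lam=0.1, epochs=100, seed=0):
    """Minibatch SGD for a linear model with squared error and L2
    regularization, updating the parameters once per batch.

    This is a sketch for the exercise, not the textbook's algorithm."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)            # shuffle examples each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            err = X[idx] @ w - y[idx]          # residuals on this batch
            grad = X[idx].T @ err / len(idx)   # mean squared-error gradient
            # Regularize once per batch rather than once per example.
            # For sparse X, the hint's variant would regularize only the
            # weights whose features actually occur in the batch.
            w -= eta * (grad + lam * w)
    return w
```

On the complexity question the hint points at: the per-example gradient work is the same, but regularizing all of the parameters every batch costs O(d) per batch instead of O(d) per example, which is why regularizing only the parameters used in the batch matters for sparse, high-dimensional data.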
Related Book:
Artificial Intelligence: Foundations of Computational Agents, 3rd Edition
Authors: David L. Poole and Alan K. Mackworth
ISBN: 9781009258197