
Question:

Change the stochastic gradient descent algorithm of Figure 17.2 (page 739) that minimizes formula (17.1) so it adjusts the parameters, including regularization, after a batch of examples. How does the complexity of this algorithm differ from the algorithm of Figure 17.2? Which one works better in practice? [Hint: Think about whether you need to regularize all of the parameters or just those used in the batch.]
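As one way to approach the exercise, here is a minimal sketch of mini-batch gradient descent on a regularized sum-of-squares loss, in the spirit of the hint: the L2 penalty is applied only to the parameters actually used in the batch (those whose features are nonzero in at least one batch example). This is not the book's solution; the model (linear with squared error), the function name `minibatch_sgd`, and the hyperparameters `eta`, `lam`, `batch_size`, and `epochs` are all illustrative assumptions.

```python
import numpy as np

def minibatch_sgd(X, y, batch_size=10, eta=0.01, lam=0.01, epochs=100, seed=0):
    """Mini-batch SGD for a linear model with squared error and L2 penalty.

    Illustrative sketch: regularization is applied per batch, and only to
    parameters touched by that batch (per the exercise's hint).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            # Gradient of the squared error summed over the batch.
            grad = Xb.T @ (Xb @ w - yb)
            # Regularize only parameters used in this batch: features that
            # are zero throughout the batch get no regularization step.
            used = np.any(Xb != 0, axis=0)
            grad[used] += lam * w[used]
            w -= eta * grad / len(idx)
    return w
```

Compared with per-example updates, each pass over the data does the same order of gradient work but far fewer parameter-update steps, and the selective regularization avoids shrinking parameters that a sparse batch never touched.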


