
Question:

12.5 L2-regularization. Let w be the solution of Maxent with a norm-2 squared regularization.

(a) Prove the following inequality: ‖w‖₂ ≤ 2r/λ (Hint: you could compare the values of the objective function at w and 0). Generalize this result to other ‖·‖_p^p-regularizations with p > 1.

(b) Use the previous question to derive an explicit learning guarantee for Maxent with norm-2 squared regularization (Hint: you could use the last inequality given in Section 12.9 and derive an explicit expression for the bound on ‖w‖₂ appearing in it).
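A possible proof sketch for part (a), written as a short LaTeX note. The dual objective F(w) = λ‖w‖₂² + log Z(w) − w·Ê[Φ], the base distribution p₀, and the feature bound sup_x ‖Φ(x)‖₂ ≤ r used below are assumptions about the setup referenced in Section 12.9, so this is a sketch under those assumptions rather than the textbook's solution.

% Sketch for Exercise 12.5(a); the form of F, p_0, and r are assumptions, not necessarily the book's exact notation.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

Assume $w$ minimizes
$F(w) = \lambda\|w\|_2^2 + \log Z(w) - w\cdot\widehat{\mathrm{E}}[\Phi]$,
with $Z(w) = \sum_x p_0[x]\, e^{w\cdot\Phi(x)}$ and $\sup_x \|\Phi(x)\|_2 \le r$.

% Compare the objective at w and at 0, as suggested by the hint.
Since $Z(0) = 1$, we have $F(0) = 0$, and optimality of $w$ gives $F(w) \le F(0) = 0$, that is,
\[
  \lambda \|w\|_2^2 \;\le\; w\cdot\widehat{\mathrm{E}}[\Phi] - \log Z(w).
\]
% Lower-bound the log-partition function via Jensen's inequality.
By Jensen's inequality, $\log Z(w) \ge \mathrm{E}_{p_0}[w\cdot\Phi]$, hence
\[
  \lambda \|w\|_2^2
  \;\le\; w\cdot\bigl(\widehat{\mathrm{E}}[\Phi] - \mathrm{E}_{p_0}[\Phi]\bigr)
  \;\le\; \|w\|_2 \,\bigl\|\widehat{\mathrm{E}}[\Phi] - \mathrm{E}_{p_0}[\Phi]\bigr\|_2
  \;\le\; 2r \,\|w\|_2,
\]
which yields $\|w\|_2 \le 2r/\lambda$.

% Generalization to a p-norm regularizer, p > 1, via Hoelder's inequality.
For a $\|\cdot\|_p^p$ regularizer, the same comparison combined with H\"older's
inequality ($1/p + 1/q = 1$, assuming $\sup_x \|\Phi(x)\|_q \le r_q$) gives
$\lambda \|w\|_p^p \le 2 r_q \|w\|_p$, hence $\|w\|_p \le (2 r_q/\lambda)^{1/(p-1)}$.

\end{document}

For part (b), the natural continuation would be to substitute this bound, so that the norm of the weight vector can be taken to be at most 2r/λ (or (2r_q/λ)^(1/(p−1)) in the p-norm case), into the last inequality of Section 12.9; the exact form of that inequality should be read off from the book.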


Related Book:

Foundations Of Machine Learning

ISBN: 9780262351362

2nd Edition

Authors: Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar
