Question:
7.4 Weighted instances. Let the training sample be $S = ((x_1, y_1), \ldots, (x_m, y_m))$.
Suppose we wish to penalize differently errors made on $x_i$ versus $x_j$. To do that, we associate some non-negative importance weight $w_i$ to each point $x_i$ and define the objective function $F(\bar{\alpha}) = \sum_{i=1}^{m} w_i e^{-y_i f(x_i)}$, where $f = \sum_{t=1}^{T} \alpha_t h_t$. Show that this function is convex and differentiable and use it to derive a boosting-type algorithm.
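One way to sketch the two claims in the question's own notation (the closed-form step size below assumes base classifiers taking values in $\{-1, +1\}$, as in standard AdaBoost; this is an outline, not the book's full solution):

```latex
% Convexity and differentiability: each exponent is affine in \bar{\alpha},
% u \mapsto e^u is convex and differentiable, and each w_i \ge 0, so F is a
% non-negative sum of convex differentiable functions, hence convex and
% differentiable.
\[
  F(\bar{\alpha}) = \sum_{i=1}^{m} w_i \, e^{-y_i \sum_{t=1}^{T} \alpha_t h_t(x_i)},
  \qquad
  \frac{\partial F}{\partial \alpha_t}
  = -\sum_{i=1}^{m} w_i \, y_i \, h_t(x_i) \, e^{-y_i f(x_i)} .
\]
% Coordinate descent on F yields an AdaBoost-type algorithm whose sole change
% is the initial distribution D_1(i) = w_i / \sum_{j=1}^{m} w_j in place of
% the uniform one; the round-t step size is still
% \alpha_t = (1/2) \log((1 - \epsilon_t) / \epsilon_t),
% where \epsilon_t is the error of h_t measured under D_t.
```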
Step by Step Answer:
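A minimal runnable sketch of the boosting-type algorithm the derivation leads to, assuming NumPy, decision stumps as base learners, and hypothetical helper names (`weighted_adaboost`, `predict`); the only departure from plain AdaBoost is that the initial distribution is proportional to the importance weights $w_i$:

```python
import numpy as np

def weighted_adaboost(X, y, w, T=10):
    """Coordinate descent on F(a) = sum_i w_i exp(-y_i f(x_i)).

    Identical to AdaBoost except that the initial distribution is
    D_1(i) = w_i / sum_j w_j rather than uniform.
    """
    m, d = X.shape
    D = w / w.sum()                    # initial distribution from importance weights
    stumps, alphas = [], []
    for _ in range(T):
        # Exhaustively pick the decision stump (feature j, threshold theta,
        # sign s) with the smallest weighted error under D.
        best = None
        for j in range(d):
            for theta in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] <= theta, 1, -1)
                    err = D[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, theta, s)
        eps, j, theta, s = best
        eps = min(max(eps, 1e-10), 1 - 1e-10)    # numerical safety
        alpha = 0.5 * np.log((1 - eps) / eps)    # closed-form step size
        pred = s * np.where(X[:, j] <= theta, 1, -1)
        D = D * np.exp(-alpha * y * pred)        # multiplicative reweighting
        D /= D.sum()                             # renormalize to a distribution
        stumps.append((j, theta, s))
        alphas.append(alpha)
    return stumps, alphas

def predict(stumps, alphas, X):
    """Sign of the weighted vote f = sum_t alpha_t h_t."""
    f = np.zeros(len(X))
    for (j, theta, s), a in zip(stumps, alphas):
        f += a * s * np.where(X[:, j] <= theta, 1, -1)
    return np.sign(f)
```

The exhaustive stump search is for clarity only; any weak learner trained on the distribution `D` would do, and the rest of the loop is unchanged.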
Related Book:
Foundations Of Machine Learning
ISBN: 9780262351362
2nd Edition
Authors: Mehryar Mohri, Afshin Rostamizadeh