Question:

7.8 Simplified AdaBoost. Suppose we simplify AdaBoost by setting the parameter $\alpha_t$ to a fixed value $\alpha_t = \alpha > 0$, independent of the boosting round $t$.
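
For concreteness, here is a minimal runnable sketch of this simplified variant in Python. The threshold-stump weak learner and all function names are illustrative assumptions, not part of the exercise:

```python
import numpy as np

def train_stump(X, y, D):
    """Hypothetical weak learner: exhaustively picks the threshold stump
    (feature, threshold, sign) minimizing the weighted error under D."""
    m, n = X.shape
    best, best_err = None, np.inf
    for j in range(n):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                err = D[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (j, thr, sign)
    j, thr, sign = best
    return lambda Z: sign * np.where(Z[:, j] <= thr, 1, -1)

def simplified_adaboost(X, y, T, alpha):
    """AdaBoost with alpha_t fixed to the same constant alpha > 0 every round."""
    m = X.shape[0]
    D = np.full(m, 1.0 / m)   # initial uniform distribution over the sample
    hs = []
    for _ in range(T):
        h = train_stump(X, y, D)
        pred = h(X)
        # Same multiplicative update as AdaBoost, but with a constant alpha:
        # misclassified points scaled up by e^alpha, correct ones down by e^-alpha.
        D *= np.exp(-alpha * y * pred)
        D /= D.sum()          # D.sum() before normalizing plays the role of Z_t
        hs.append(h)
    # Since alpha is the same in every round, sgn(sum_t alpha*h_t) = sgn(sum_t h_t).
    # (np.sign returns 0 on exact ties; acceptable for a sketch.)
    return lambda Z: np.sign(sum(h(Z) for h in hs))
```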

(a) Let $\gamma$ be such that $\frac{1}{2} - \epsilon_t \geq \gamma > 0$. Find the best value of $\alpha$ as a function of $\gamma$ by analyzing the empirical error.
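
One standard way to start (a), assuming the usual AdaBoost bound $\widehat{R}_S(h) \le \prod_{t=1}^{T} Z_t$ with normalizer $Z_t = (1-\epsilon_t)e^{-\alpha} + \epsilon_t e^{\alpha}$ (a sketch, not necessarily the book's intended solution):

```latex
\begin{align*}
% Z_t is increasing in \epsilon_t and \epsilon_t \le 1/2 - \gamma, hence
Z_t &\le \Big(\tfrac{1}{2}+\gamma\Big)e^{-\alpha} + \Big(\tfrac{1}{2}-\gamma\Big)e^{\alpha} =: g(\alpha). \\
% Minimizing g over \alpha > 0:
g'(\alpha) &= -\Big(\tfrac{1}{2}+\gamma\Big)e^{-\alpha} + \Big(\tfrac{1}{2}-\gamma\Big)e^{\alpha} = 0
\;\iff\; e^{2\alpha} = \frac{1+2\gamma}{1-2\gamma}
\;\iff\; \alpha = \frac{1}{2}\log\frac{1+2\gamma}{1-2\gamma}.
\end{align*}
```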

(b) For this value of $\alpha$, does the algorithm assign the same probability mass to correctly classified and misclassified examples at each round? If not, which set is assigned a higher probability mass?
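
For (b), the quantities to compare can be sketched as follows, assuming the same normalizer $Z_t$ as above:

```latex
% After round t, the total (normalized) masses on the two sets are
D_{t+1}(\text{correct}) = \frac{(1-\epsilon_t)\,e^{-\alpha}}{Z_t},
\qquad
D_{t+1}(\text{misclassified}) = \frac{\epsilon_t\,e^{\alpha}}{Z_t}.
% They are equal iff e^{2\alpha} = (1-\epsilon_t)/\epsilon_t, i.e. iff
% \epsilon_t = 1/2 - \gamma exactly; whenever \epsilon_t < 1/2 - \gamma the two differ,
% and comparing e^{2\alpha} with (1-\epsilon_t)/\epsilon_t determines which is larger.
```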

(c) Using the previous value of $\alpha$, give a bound on the empirical error of the algorithm that depends only on $\gamma$ and the number of rounds of boosting $T$.
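
A sketch of the computation for (c), plugging the value of $\alpha$ from (a) into the bound on $Z_t$:

```latex
\begin{align*}
% With e^{\alpha} = \sqrt{(1+2\gamma)/(1-2\gamma)}:
Z_t &\le \Big(\tfrac{1}{2}+\gamma\Big)\sqrt{\tfrac{1-2\gamma}{1+2\gamma}}
      + \Big(\tfrac{1}{2}-\gamma\Big)\sqrt{\tfrac{1+2\gamma}{1-2\gamma}}
      = \sqrt{1-4\gamma^2}, \\
% and therefore, using 1 - x \le e^{-x},
\widehat{R}_S(h) &\le \prod_{t=1}^{T} Z_t \le \big(1-4\gamma^2\big)^{T/2} \le e^{-2\gamma^2 T}.
\end{align*}
```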

(d) Using the previous bound, show that for $T > \frac{\log m}{2\gamma^2}$, the resulting hypothesis is consistent with the sample of size $m$.
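
For (d), the bound from (c) can be combined with the fact that the empirical error takes values in $\{0, 1/m, 2/m, \dots\}$; a sketch:

```latex
T > \frac{\log m}{2\gamma^2}
\;\Longrightarrow\;
\widehat{R}_S(h) \le e^{-2\gamma^2 T} < e^{-\log m} = \frac{1}{m}.
% The empirical error is an integer multiple of 1/m, so a value below 1/m must be 0:
% the combined hypothesis classifies every training point correctly.
```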

(e) Let $s$ be the VC-dimension of the base learners used. Give a bound on the generalization error of the consistent hypothesis obtained after $T = \left\lfloor \frac{\log m}{2\gamma^2} \right\rfloor + 1$ rounds of boosting. (Hint: use the fact that the VC-dimension of the family of functions $\{\operatorname{sgn}(\sum_{t=1}^{T} \alpha_t h_t) : \alpha_t \in \mathbb{R}\}$ is bounded by $2(s+1)T\log_2(eT)$.) Suppose now that $\gamma$ varies with $m$. Based on the bound derived, what can be said if $\gamma(m) = O\big(\sqrt{\frac{\log m}{m}}\big)$?
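
For (e), a sketch of how the pieces combine, assuming a standard VC-dimension generalization bound for consistent hypotheses (the exact constants and logarithmic factors depend on which bound from the book is used):

```latex
% With the hint, the VC-dimension of the output class after T rounds is at most
% d_T = 2(s+1) T \log_2(eT), with T = \lfloor \log m / (2\gamma^2) \rfloor + 1.
% A standard consistent-case VC bound then gives, with probability at least 1-\delta,
R(h) = O\!\left(\frac{d_T \log m + \log(1/\delta)}{m}\right),
\qquad d_T = 2(s+1)\,T\log_2(eT),
\qquad T = \Theta\!\left(\frac{\log m}{\gamma^2}\right).
% If \gamma(m) = O(\sqrt{\log m / m}), then \gamma^2 = O(\log m / m) forces T = \Omega(m),
% hence d_T = \Omega(m \log m) and d_T/m = \Omega(\log m): the bound no longer tends
% to zero, so it gives no generalization guarantee in this regime.
```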



Related Book: Foundations of Machine Learning, 2nd Edition, by Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. ISBN: 9780262351362.
