Question:
In the AdaBoost Algorithm 9.10, assume we have learned a base model $f_m(\mathbf{x})$ at step $m$ that performs worse than random guessing (i.e., its error $\varepsilon_m > \frac{1}{2}$). If we simply flip it to $\bar{f}_m(\mathbf{x}) = -f_m(\mathbf{x})$, compute the error for $\bar{f}_m(\mathbf{x})$ and its optimal ensemble weight. Show that it is equivalent to use either $f_m(\mathbf{x})$ or $\bar{f}_m(\mathbf{x})$ in AdaBoost.
Step by Step Answer:
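A sketch of the derivation, assuming the standard AdaBoost setup in which $\varepsilon_m$ is the weighted training error under normalized sample weights $w_i$ and the optimal ensemble weight is $\alpha_m = \frac{1}{2}\ln\frac{1-\varepsilon_m}{\varepsilon_m}$ (common notation for the algorithm, not necessarily the book's exact symbols):

```latex
% Error of the flipped model: every example f_m classifies correctly,
% \bar{f}_m misclassifies, and vice versa, so with normalized weights w_i
\bar{\varepsilon}_m
  = \sum_{i:\,\bar{f}_m(\mathbf{x}_i)\ne y_i} w_i
  = \sum_{i:\,f_m(\mathbf{x}_i)= y_i} w_i
  = 1-\varepsilon_m \;<\; \tfrac{1}{2}.

% Optimal ensemble weight of the flipped model
\bar{\alpha}_m
  = \frac{1}{2}\ln\frac{1-\bar{\varepsilon}_m}{\bar{\varepsilon}_m}
  = \frac{1}{2}\ln\frac{\varepsilon_m}{1-\varepsilon_m}
  = -\alpha_m.

% Equivalence: the term contributed to the ensemble is unchanged,
\bar{\alpha}_m\,\bar{f}_m(\mathbf{x})
  = (-\alpha_m)\bigl(-f_m(\mathbf{x})\bigr)
  = \alpha_m\,f_m(\mathbf{x}),

% and so is the multiplicative sample-weight update,
e^{-\bar{\alpha}_m\,y_i\,\bar{f}_m(\mathbf{x}_i)}
  = e^{-\alpha_m\,y_i\,f_m(\mathbf{x}_i)}.
```

Because both the ensemble term and the weight update are identical, AdaBoost proceeds exactly the same whether it uses $f_m(\mathbf{x})$ with weight $\alpha_m$ or $\bar{f}_m(\mathbf{x})$ with weight $\bar{\alpha}_m = -\alpha_m$. A minimal numeric sanity check of this claim (Python; the variable names are illustrative, not taken from the book):

```python
import numpy as np

# Toy check: flipping a worse-than-random weak learner and flipping the sign
# of its AdaBoost weight leaves the ensemble contribution and the
# sample-weight update unchanged.
rng = np.random.default_rng(0)
n = 20
y = rng.choice([-1, 1], size=n)          # true labels
f = rng.choice([-1, 1], size=n)          # base model predictions f_m(x)
w = rng.random(n); w /= w.sum()          # normalized sample weights

eps = w[f != y].sum()                    # weighted error of f_m
eps_bar = w[-f != y].sum()               # weighted error of the flipped model
assert np.isclose(eps_bar, 1 - eps)

alpha = 0.5 * np.log((1 - eps) / eps)    # optimal weight for f_m
alpha_bar = 0.5 * np.log((1 - eps_bar) / eps_bar)
assert np.isclose(alpha_bar, -alpha)

# Identical ensemble contribution and identical weight updates
assert np.allclose(alpha * f, alpha_bar * (-f))
assert np.allclose(w * np.exp(-alpha * y * f),
                   w * np.exp(-alpha_bar * y * (-f)))
print("f_m and its flipped version are interchangeable in AdaBoost")
```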
Related Book:
Hui Jiang, Machine Learning Fundamentals: A Concise Introduction, 1st Edition. ISBN: 9781108940023.