Question:

In the decision tree learner of Figure 7.9 (page 284), it is possible to mix and match the leaf prediction (what is returned by leaf value) and the loss used in sum loss. For each loss in {0–1 loss, absolute loss, squared loss, log loss} and each leaf choice in {empirical distribution, mode, median}, build a tree that greedily optimizes that loss when choosing splits and uses that leaf choice at the leaves. For each tree, report the number of leaves and the evaluation on a test set under each loss. Try this on at least two datasets.

(a) Which split choice gives the smallest tree?

(b) Is there a loss that is consistently improved by optimizing a different loss when greedily choosing a split?

(c) Try to find a different leaf choice that would be better for some optimization criterion.

(d) For each optimization criterion, which combination of split choice and leaf choice has the best performance on the test set?
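One way to set up the experiment is to make both the loss and the leaf prediction parameters of the learner. The sketch below (not the book's Figure 7.9 code; the dataset, function names, and `min_improvement` stopping rule are all illustrative assumptions) learns a tree over Boolean features and Boolean targets, so any loss can be paired with any leaf choice:

```python
# Sketch of a greedy decision-tree learner for Boolean targets where the
# split criterion (loss) and the leaf prediction are independent parameters.
# Names and the stopping rule are assumptions, not the book's code.
import math
from statistics import median

# --- leaf choices: map a list of 0/1 target values to a prediction ---
def leaf_empirical(ys):            # empirical distribution: P(y = 1)
    return sum(ys) / len(ys)

def leaf_mode(ys):                 # most common value (ties -> 1)
    return 1 if 2 * sum(ys) >= len(ys) else 0

def leaf_median(ys):
    return median(ys)

# --- losses: compare a prediction p with an actual value y in {0, 1} ---
def loss_01(p, y):  return int((p >= 0.5) != y)
def loss_abs(p, y): return abs(p - y)
def loss_sq(p, y):  return (p - y) ** 2
def loss_log(p, y):
    eps = 1e-12                    # clamp so mode/median leaves of 0 or 1 are finite
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def sum_loss(loss, leaf, examples):
    """Total loss on examples [(x, y), ...] if predicted by one leaf."""
    ys = [y for _, y in examples]
    p = leaf(ys)
    return sum(loss(p, y) for y in ys)

def learn_tree(examples, features, loss, leaf, min_improvement=1e-9):
    """Greedily split on the Boolean feature that most reduces `loss`,
    predicting with `leaf` at the leaves. Returns a nested dict or a number."""
    ys = [y for _, y in examples]
    base = sum_loss(loss, leaf, examples)
    best = None
    for f in features:
        t = [(x, y) for x, y in examples if x[f]]
        fa = [(x, y) for x, y in examples if not x[f]]
        if not t or not fa:
            continue               # split does not separate the examples
        split_loss = sum_loss(loss, leaf, t) + sum_loss(loss, leaf, fa)
        if best is None or split_loss < best[1]:
            best = (f, split_loss, t, fa)
    if best is None or base - best[1] < min_improvement:
        return leaf(ys)            # leaf node: just the leaf prediction
    f, _, t, fa = best
    rest = [g for g in features if g != f]
    return {"split": f,
            "true": learn_tree(t, rest, loss, leaf, min_improvement),
            "false": learn_tree(fa, rest, loss, leaf, min_improvement)}

def predict(tree, x):
    while isinstance(tree, dict):
        tree = tree["true"] if x[tree["split"]] else tree["false"]
    return tree

def num_leaves(tree):
    if not isinstance(tree, dict):
        return 1
    return num_leaves(tree["true"]) + num_leaves(tree["false"])
```

Looping over the four losses and three leaf choices then gives the 12 trees the exercise asks for; for each, record `num_leaves(tree)` and the average of each loss over a held-out test set.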
