
Question

Problem 2: Decision Trees for Spam Classification (30 points)

We'll use the same data as in our earlier homework: In order to reduce my email load, I decide to implement a machine learning algorithm to decide whether or not I should read an email, or simply file it away instead. To train my model, I obtain the following data set of binary-valued features about each email, including whether I know the author or not, whether the email is long or short, and whether it has any of several key words, along with my final decision about whether to read it (y = +1 for "read", y = -1 for "discard"):

know author? | is long? | has 'research' | has 'grade' | has 'lottery' | read?
[rows of 0/1 feature values; not legible in the transcription]

In the case of any ties where both classes have equal probability, we will prefer to predict class +1.

1. Calculate the entropy H(y) of the binary class variable y. Hint: Your answer should be a number between 0 and 1. (5 points)
2. Calculate the information gain for each feature x. Which feature should I split on for the root node of the decision tree? (10 points)
3. Determine the complete decision tree that will be learned from these data. (The tree should perfectly classify all training data.) Specify the tree by drawing it, or with a set of nested if-then-else statements. (15 points)
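
For reference, parts 1 and 2 use the standard definitions of Shannon entropy and information gain for a binary feature x:

H(y) = -\sum_{c \in \{+1,-1\}} p(y = c) \log_2 p(y = c)

Gain(x) = H(y) - \sum_{v \in \{0,1\}} p(x = v) \, H(y \mid x = v)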

Step by Step Solution

There are 3 steps involved, one for each part of the problem.

Step: 1

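Part 1 asks for the entropy of the 'read?' column: count the +1 and -1 labels, form the empirical probabilities, and apply the entropy formula above. A minimal Python sketch; the label vector y below is a placeholder, since the table's rows are not legible here, and should be replaced with the actual 'read?' column:

import math

def entropy(labels):
    """Shannon entropy H(y) in bits of a sequence of class labels."""
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

# Placeholder labels: substitute the actual 'read?' column from the table.
y = [+1, +1, +1, +1, +1, -1, -1, -1, -1, -1]
print(entropy(y))  # 1.0 for a 5/5 split; a 6/4 split would give about 0.971

As the hint says, any binary label vector yields a value between 0 (all one class) and 1 (a perfect 50/50 split).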

Step: 2

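Part 2 computes the gain of each feature and splits the root on the largest. A sketch reusing entropy() from Step 1; the two columns shown are placeholders, and all five feature columns from the table would be filled in the same way:

def information_gain(feature, labels):
    """Gain(x) = H(y) - sum over v of p(x = v) * H(y | x = v)."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature):
        subset = [y_i for x_i, y_i in zip(feature, labels) if x_i == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Placeholder columns: substitute the actual 0/1 values from the table.
features = {
    "know author?": [0, 1, 0, 1, 0, 1, 0, 1, 1, 1],
    "is long?":     [0, 1, 1, 1, 1, 0, 0, 0, 0, 1],
}
gains = {name: information_gain(col, y) for name, col in features.items()}
root = max(gains, key=gains.get)

The feature with the largest gain becomes the root split of the decision tree.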

Step: 3

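Part 3 grows the whole tree greedily: split on the highest-gain feature, recurse into each branch, and stop when a node is pure; whenever the two classes tie, predict +1 as the problem specifies. A compact ID3-style sketch, under the same placeholder-data assumption as Steps 1 and 2:

def id3(rows, labels, feature_names):
    """Greedy tree learner: returns a leaf label or {feature: {0: ..., 1: ...}}."""
    if len(set(labels)) == 1:        # pure node -> leaf
        return labels[0]
    if not feature_names:            # out of features -> majority vote, ties to +1
        return +1 if labels.count(+1) >= labels.count(-1) else -1
    best = max(feature_names,
               key=lambda f: information_gain([r[f] for r in rows], labels))
    rest = [f for f in feature_names if f != best]
    tree = {best: {}}
    for v in (0, 1):
        idx = [i for i, r in enumerate(rows) if r[best] == v]
        if not idx:                  # empty branch -> majority vote, ties to +1
            tree[best][v] = +1 if labels.count(+1) >= labels.count(-1) else -1
        else:
            tree[best][v] = id3([rows[i] for i in idx],
                                [labels[i] for i in idx], rest)
    return tree

# Rows as dicts, built from the placeholder columns of Step 2.
rows = [dict(zip(features, vals)) for vals in zip(*features.values())]
print(id3(rows, y, list(features)))

The nested dict this returns translates directly into the nested if-then-else form the problem accepts as an answer; on the real table the learned tree should classify all training rows perfectly, as the problem requires.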

