
Question

1 Approved Answer

In Chapter 6 you learned Bayes theorem, MAP and ML hypotheses, the Bayes optimal classifier, and Bayesian belief networks. In this in-class activity, you will solve the last exercise of Chapter 6.

Let us apply the naive Bayes classifier to a concept learning problem we considered during our discussion of decision tree learning: classifying days according to whether someone will play tennis. Table 3.2 from Chapter 3 provides a set of 14 training examples of the target concept PlayTennis, where each day is described by the attributes Outlook, Temperature, Humidity, and Wind. Here we use the naive Bayes classifier and the training data from this table to classify the following novel instance:

(Outlook = sunny, Temperature = cool, Humidity = high, Wind = strong)

Our task is to predict the target value (yes or no) of the target concept PlayTennis for this new instance. Instantiating Equation (6.20) to fit the current task, the target value v_NB is given by

v_NB = argmax over v_j in {yes, no} of P(v_j) P(Outlook = sunny | v_j) P(Temperature = cool | v_j) P(Humidity = high | v_j) P(Wind = strong | v_j)

Notice in the final expression that a_i has been instantiated using the particular attribute values of the new instance. To calculate v_NB we now require 10 probabilities that can be estimated from the training data. First, the probabilities of the different target values can easily be estimated based on their frequencies over the 14 training examples:

P(PlayTennis = yes) = 9/14 = .64
P(PlayTennis = no) = 5/14 = .36

Similarly, we can estimate the conditional probabilities. For example, those for Wind = strong are

P(Wind = strong | PlayTennis = yes) = 3/9 = .33
P(Wind = strong | PlayTennis = no) = 3/5 = .60

Using these probability estimates and similar estimates for the remaining attribute values, we calculate v_NB according to Equation (6.21) as follows (now omitting attribute names for brevity):

P(yes) P(sunny | yes) P(cool | yes) P(high | yes) P(strong | yes) = .0053
P(no) P(sunny | no) P(cool | no) P(high | no) P(strong | no) = .0206

Thus, the naive Bayes classifier assigns the target value PlayTennis = no to this new instance, based on the probability estimates learned from the training data. Furthermore, by normalizing the above quantities to sum to one, we can calculate the conditional probability that the target value is no, given the observed attribute values. For the current example, this probability is .0206 / (.0206 + .0053) = .795.

Exercise 6.6. Draw the Bayesian belief network that represents the conditional independence assumptions of the naive Bayes classifier for the PlayTennis problem of Section 6.9.1. Give the conditional probability table associated with the node Wind.
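To make the arithmetic above easy to check, here is a minimal Python sketch of the same naive Bayes calculation. It assumes the standard 14-example PlayTennis training set, reproduced inline below as it is commonly given for Table 3.2 (verify it against your copy of the table); the function name naive_bayes_scores and the tuple layout are illustrative choices, not anything defined in the textbook.

```python
from collections import Counter

# The 14 PlayTennis training examples, assumed to match Table 3.2:
# (Outlook, Temperature, Humidity, Wind, PlayTennis)
DATA = [
    ("sunny",    "hot",  "high",   "weak",   "no"),
    ("sunny",    "hot",  "high",   "strong", "no"),
    ("overcast", "hot",  "high",   "weak",   "yes"),
    ("rain",     "mild", "high",   "weak",   "yes"),
    ("rain",     "cool", "normal", "weak",   "yes"),
    ("rain",     "cool", "normal", "strong", "no"),
    ("overcast", "cool", "normal", "strong", "yes"),
    ("sunny",    "mild", "high",   "weak",   "no"),
    ("sunny",    "cool", "normal", "weak",   "yes"),
    ("rain",     "mild", "normal", "weak",   "yes"),
    ("sunny",    "mild", "normal", "strong", "yes"),
    ("overcast", "mild", "high",   "strong", "yes"),
    ("overcast", "hot",  "normal", "weak",   "yes"),
    ("rain",     "mild", "high",   "strong", "no"),
]

def naive_bayes_scores(instance):
    """Return {target value: P(v_j) * prod_i P(a_i | v_j)} using the same
    relative-frequency estimates as the worked example (no smoothing)."""
    class_counts = Counter(row[-1] for row in DATA)
    scores = {}
    for v, n_v in class_counts.items():
        score = n_v / len(DATA)                      # P(v_j), e.g. 9/14 for yes
        for i, a_i in enumerate(instance):
            # P(a_i | v_j): fraction of class-v examples with attribute value a_i
            n_match = sum(1 for row in DATA if row[i] == a_i and row[-1] == v)
            score *= n_match / n_v
        scores[v] = score
    return scores

scores = naive_bayes_scores(("sunny", "cool", "high", "strong"))
v_nb = max(scores, key=scores.get)
print(scores)                                        # roughly {'no': 0.0206, 'yes': 0.0053}
print("v_NB =", v_nb)                                # no
print("P(no | instance) =", scores["no"] / sum(scores.values()))  # roughly 0.795
```

Running the sketch reproduces the unnormalized scores of about .0206 for no and .0053 for yes, and the normalized posterior of about .795 quoted above.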

