
Question:

Consider a single Boolean random variable Y (the “classification”). Let the prior probability P(Y = true) be π. Let’s try to find π, given a training set D = (y1, . . . , yN) of N independent samples of Y. Furthermore, suppose p of the N samples are positive and n are negative.

a. Write down an expression for the likelihood of D (i.e., the probability of seeing this particular sequence of examples, given a fixed value of π) in terms of π, p, and n.
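A minimal sketch of one standard way to write this, using only the independence of the N samples stated above: each positive example contributes a factor π and each negative example a factor 1 − π, so

P(D \mid \pi) = \prod_{j=1}^{N} P(y_j \mid \pi) = \pi^{p} (1-\pi)^{n}.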

b. By differentiating the log likelihood L, find the value of π that maximizes the likelihood.
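A hedged sketch of the usual derivation, assuming the expression from part (a): take logs and set the derivative to zero,

L(\pi) = \log P(D \mid \pi) = p \log \pi + n \log(1-\pi),

\frac{dL}{d\pi} = \frac{p}{\pi} - \frac{n}{1-\pi} = 0 \;\Longrightarrow\; \pi = \frac{p}{p+n}.

So the maximizing value is the observed fraction of positive examples; the second derivative, -p/\pi^{2} - n/(1-\pi)^{2}, is negative on (0, 1), confirming this is a maximum.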

c. Now suppose we add in k Boolean random variables X1, X2, . . . , Xk (the “attributes”) that describe each sample, and assume that the attributes are conditionally independent of each other given the goal Y. Draw the Bayes net corresponding to this assumption.
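For reference, the structure implied by this assumption is the familiar naive Bayes network: a single class node Y that is the sole parent of every attribute node, with no edges among the attributes, i.e.

Y \rightarrow X_1, \quad Y \rightarrow X_2, \quad \ldots, \quad Y \rightarrow X_k.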

d. Write down the likelihood for the data including the attributes, using the following additional notation:

[The additional notation was given in an image that was not transcribed.]
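Because the notation image is missing, the following is only a sketch under an assumed (though common) notation: let \alpha_i = P(X_i = true \mid Y = true) and \beta_i = P(X_i = true \mid Y = false), and let p_i^{+} and p_i^{-} count the positive examples in which X_i is true and false respectively (similarly n_i^{+} and n_i^{-} for the negative examples). With the conditional-independence assumption from part (c), the likelihood then factors as

P(D \mid \pi, \alpha_{1..k}, \beta_{1..k}) = \pi^{p} (1-\pi)^{n} \prod_{i=1}^{k} \alpha_i^{p_i^{+}} (1-\alpha_i)^{p_i^{-}} \beta_i^{n_i^{+}} (1-\beta_i)^{n_i^{-}}.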
