Try to construct an artificial example where a naive Bayes classifier can give a divide-by-zero error in test cases

Question:

Try to construct an artificial example where a naive Bayes classifier can give a divide-by-zero error in test cases when using empirical frequencies as probabilities. Specify the network and the (non-empty) training examples. [Hint: You can do it with two features, say A and B, and a binary classification, say C, that has domain {0, 1}. Construct a dataset where the empirical probabilities give P(a | C = 0) = 0 and P(b | C = 1) = 0.] What observation is inconsistent with the model?
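
Below is a minimal sketch of such a construction (my own illustration under the hint's assumptions, not a quoted worked answer): a two-example training set in which no C = 0 example has A true and no C = 1 example has B true, classified with unsmoothed empirical frequencies. The observation that is inconsistent with the model is A = true together with B = true, since both class scores come out to 0 and the normalization step divides by zero.

```python
# Minimal sketch (illustration, not the book's official answer): naive Bayes with
# raw empirical frequencies on a hypothetical two-example dataset where
# P(a | C=0) = 0 and P(b | C=1) = 0, as suggested by the hint.
from collections import Counter

# Hypothetical training set: each example is (A, B, C) with Boolean features A, B
# and class C in {0, 1}. C=0 examples never have A true; C=1 examples never have B true.
training = [
    (False, True, 0),   # gives P(a | C=0) = 0/1 = 0
    (True, False, 1),   # gives P(b | C=1) = 0/1 = 0
]

def empirical_naive_bayes(training, a_obs, b_obs):
    """Return P(C=1 | A=a_obs, B=b_obs) using raw empirical frequencies (no smoothing)."""
    class_counts = Counter(c for _, _, c in training)
    scores = {}
    for c in (0, 1):
        n_c = class_counts[c]
        p_c = n_c / len(training)                                              # empirical prior P(C=c)
        p_a = sum(1 for a, _, cc in training if cc == c and a == a_obs) / n_c  # P(A=a_obs | C=c)
        p_b = sum(1 for _, b, cc in training if cc == c and b == b_obs) / n_c  # P(B=b_obs | C=c)
        scores[c] = p_c * p_a * p_b                                            # unnormalized joint score
    total = scores[0] + scores[1]
    return scores[1] / total   # ZeroDivisionError when both scores are 0

# The inconsistent observation: A = true and B = true. P(a | C=0) = 0 kills the
# C=0 score and P(b | C=1) = 0 kills the C=1 score, so the normalizer is 0.
print(empirical_naive_bayes(training, True, True))
```

Running the sketch raises ZeroDivisionError on the test case (A = true, B = true), because the model assigns that observation probability zero under both classes. Adding pseudocounts (Laplace smoothing) to the empirical frequencies removes the zero conditional probabilities and makes the normalization well defined.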
