
Question:

7.1 ( ) www Suppose we have a data set of input vectors {x_n} with corresponding target values t_n ∈ {−1, 1}, and suppose that we model the density of input vectors within each class separately using a Parzen kernel density estimator (see Section 2.5.1) with a kernel k(x, x′). Write down the minimum misclassification-rate decision rule assuming the two classes have equal prior probability. Show also that, if the kernel is chosen to be k(x, x′) = xᵀx′, the classification rule reduces to assigning a new input vector to the class having the closest mean. Finally, show that, if the kernel takes the form k(x, x′) = φ(x)ᵀφ(x′), the classification is based on the closest mean in the feature space φ(x).


Step by Step Answer:
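A sketch of the standard argument is given below as a small LaTeX document. The class labels C_+ and C_-, the class sizes N_+ and N_-, and the class means are notation introduced here for convenience; they do not appear in the original question.

\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Parzen estimates of the two class-conditional densities.
The Parzen estimates of the class-conditional densities are
\[
  p(\mathbf{x}\mid t=+1) = \frac{1}{N_+}\sum_{n \in \mathcal{C}_+} k(\mathbf{x},\mathbf{x}_n),
  \qquad
  p(\mathbf{x}\mid t=-1) = \frac{1}{N_-}\sum_{n \in \mathcal{C}_-} k(\mathbf{x},\mathbf{x}_n).
\]
With equal priors, the minimum misclassification-rate rule assigns $\mathbf{x}$ to $\mathcal{C}_+$ if and only if
\[
  \frac{1}{N_+}\sum_{n \in \mathcal{C}_+} k(\mathbf{x},\mathbf{x}_n)
  \;>\;
  \frac{1}{N_-}\sum_{n \in \mathcal{C}_-} k(\mathbf{x},\mathbf{x}_n).
\]
For the linear kernel $k(\mathbf{x},\mathbf{x}') = \mathbf{x}^{\mathsf T}\mathbf{x}'$ each sum collapses onto the corresponding class mean
$\bar{\mathbf{x}}_\pm = \frac{1}{N_\pm}\sum_{n \in \mathcal{C}_\pm} \mathbf{x}_n$, so the rule becomes
\[
  \mathbf{x}^{\mathsf T}\bar{\mathbf{x}}_+ \;>\; \mathbf{x}^{\mathsf T}\bar{\mathbf{x}}_-,
\]
i.e.\ $\mathbf{x}$ is assigned to the class whose mean has the larger inner product with $\mathbf{x}$. Since
$\lVert\mathbf{x}-\bar{\mathbf{x}}_\pm\rVert^2 = \lVert\mathbf{x}\rVert^2 - 2\,\mathbf{x}^{\mathsf T}\bar{\mathbf{x}}_\pm + \lVert\bar{\mathbf{x}}_\pm\rVert^2$,
this is the nearest-class-mean rule up to the class-dependent constants $\tfrac12\lVert\bar{\mathbf{x}}_\pm\rVert^2$.
Finally, replacing the kernel by $k(\mathbf{x},\mathbf{x}') = \boldsymbol{\phi}(\mathbf{x})^{\mathsf T}\boldsymbol{\phi}(\mathbf{x}')$ repeats the same argument with $\mathbf{x}_n$ replaced by $\boldsymbol{\phi}(\mathbf{x}_n)$, so the classification is based on the closest class mean in the feature space $\boldsymbol{\phi}(\mathbf{x})$.

\end{document}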

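For a quick numerical sanity check, here is a minimal Python sketch (assuming NumPy; the synthetic Gaussian data, the feature map phi, and the helper parzen_score are illustrative choices introduced here, not part of the exercise). It verifies that the equal-prior Parzen class scores coincide with projections onto the class means, both for the linear kernel and for a feature-space kernel.

import numpy as np

rng = np.random.default_rng(0)

# Two synthetic classes in R^2 (illustrative data only).
X_pos = rng.normal(loc=[+1.0, +0.5], scale=1.0, size=(40, 2))
X_neg = rng.normal(loc=[-1.0, -0.5], scale=1.0, size=(60, 2))


def parzen_score(x, X_class, kernel):
    # Average kernel value between x and the training points of one class,
    # i.e. the Parzen estimate of p(x | class) up to a kernel-dependent constant.
    return np.mean([kernel(x, xn) for xn in X_class])


def linear_kernel(a, b):
    return a @ b  # k(x, x') = x^T x'


def phi(a):
    # Example feature map (an illustrative assumption, not from the exercise).
    return np.array([a[0], a[1], a[0] * a[1]])


def feature_kernel(a, b):
    return phi(a) @ phi(b)  # k(x, x') = phi(x)^T phi(x')


x_new = np.array([0.3, -0.2])

# Linear kernel: the class score equals the projection onto that class mean.
for X_cls in (X_pos, X_neg):
    assert np.isclose(parzen_score(x_new, X_cls, linear_kernel),
                      x_new @ X_cls.mean(axis=0))

# Feature-space kernel: the same statement with the means taken in phi-space.
for X_cls in (X_pos, X_neg):
    phi_mean = np.mean([phi(xn) for xn in X_cls], axis=0)
    assert np.isclose(parzen_score(x_new, X_cls, feature_kernel),
                      phi(x_new) @ phi_mean)

# Equal-prior decision rule: pick the class with the larger score.
label = +1 if (parzen_score(x_new, X_pos, linear_kernel)
               > parzen_score(x_new, X_neg, linear_kernel)) else -1
print("predicted label:", label)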