
Question



Exercise 2 [4 points]. Given a training set D = {(x^(i), y^(i)), i = 1, ..., M}, where x^(i) ∈ R^N and y^(i) ∈ {1, 2, ..., C}, derive the maximum likelihood estimates of the naive Bayes classifier for real-valued features x_j modeled with a Laplacian distribution, i.e.,

p(x_j | y = c) = (1 / (2 b_{j|c})) exp( -|x_j - μ_{j|c}| / b_{j|c} ).

Exercise 3 [4 points]. Prove that in binary classification, the posterior of linear discriminant analysis, i.e., p(y = 1 | x; μ, Σ, π), is in the form of a sigmoid function

p(y = 1 | x; θ) = 1 / (1 + e^{-θ^T x}),

where θ is a function of {μ, Σ, π}. Hint: remember to use the convention of letting x_0 = 1, which incorporates the bias term into the parameter vector θ.
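A sketch of the Exercise 3 argument (my own working under the standard LDA assumptions, not the graded solution): with shared-covariance Gaussian class-conditionals N(x; μ_c, Σ) and prior π = p(y = 1), Bayes' rule gives

```latex
\begin{align}
p(y=1 \mid x)
  &= \frac{\pi\,\mathcal{N}(x;\mu_1,\Sigma)}
          {\pi\,\mathcal{N}(x;\mu_1,\Sigma) + (1-\pi)\,\mathcal{N}(x;\mu_0,\Sigma)}
   = \frac{1}{1 + e^{-a(x)}}, \\[4pt]
a(x) &= \ln\frac{\pi\,\mathcal{N}(x;\mu_1,\Sigma)}{(1-\pi)\,\mathcal{N}(x;\mu_0,\Sigma)} \\
     &= (\mu_1-\mu_0)^\top\Sigma^{-1}x
        \;-\; \tfrac{1}{2}\mu_1^\top\Sigma^{-1}\mu_1
        \;+\; \tfrac{1}{2}\mu_0^\top\Sigma^{-1}\mu_0
        \;+\; \ln\frac{\pi}{1-\pi}.
\end{align}
```

The quadratic term -(1/2) x^T Σ^{-1} x appears in both class log-densities and cancels because Σ is shared, so a(x) is affine in x. With the convention x_0 = 1, the constant is absorbed into θ and a(x) = θ^T x, which is exactly the sigmoid form asked for.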

Step by Step Solution



Step: 1

Exercise 2. To derive the maximum likelihood estimates of the naive Bayes classifier for real-valued data modeled with a Laplacian distribution, we need to estimate the class priors π_c and, for each feature j and class c, the location μ_{j|c} and scale b_{j|c} of the Laplacian.
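As a numerical sketch of where the derivation lands (assuming the standard result that the Laplacian's ML location is the sample median and its ML scale is the mean absolute deviation about that median; the function name `laplace_nb_mle` is mine, not from the exercise):

```python
import numpy as np

def laplace_nb_mle(X, y):
    """ML estimates for naive Bayes with Laplacian class-conditionals.

    For a Laplacian, maximizing the likelihood gives the per-feature
    sample median as the location and the mean absolute deviation
    about that median as the scale.
    Returns dicts keyed by class label: priors, locations mu, scales b.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    priors, mu, b = {}, {}, {}
    for c in np.unique(y):
        Xc = X[y == c]
        priors[c] = len(Xc) / len(X)                 # pi_c = M_c / M
        mu[c] = np.median(Xc, axis=0)                # mu_{j|c}
        b[c] = np.mean(np.abs(Xc - mu[c]), axis=0)   # b_{j|c}
    return priors, mu, b
```

For example, with class 0 containing feature values {0, 2, 4}, the estimate is μ = 2 (the median) and b = (2 + 0 + 2)/3 = 4/3.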



