
Question



** THE AVAILABLE SOLUTION ON THE SITE DOES NOT WORK PROPERLY. PLEASE PROVIDE A FULL SOLUTION.

Part Three: Compute Gradient [Graded]

Now, you will implement the function grad that computes the gradients of the loss function with respect to the parameters, similar to what you did in the Linear SVM project. grad outputs the gradient with respect to beta (beta_grad) and the gradient with respect to b (bgrad). Unlike loss, grad is only called during the training phase; consequently, its input parameters don't include xTe, yTe. Remember that the squared hinge loss is computed on the training data during training, so you only need to call computeK on xTr, xTr here.

The gradients are given by:

$$\frac{\partial \mathcal{L}}{\partial \beta} = 2K\beta + C\sum_{i=1}^{n} 2\max\left[1 - y_i\left(K[i,:]\beta + b\right),\, 0\right]\left(-y_i K[i,:]^{\top}\right)\mathbb{1}_{\,1 - y_i\left(K[i,:]\beta + b\right) > 0}$$

$$\frac{\partial \mathcal{L}}{\partial b} = C\sum_{i=1}^{n} 2\max\left[1 - y_i\left(K[i,:]\beta + b\right),\, 0\right]\left(-y_i\right)\mathbb{1}_{\,1 - y_i\left(K[i,:]\beta + b\right) > 0}$$

where the indicator function is:

$$\mathbb{1}_{\,1 - y_i\left(K[i,:]\beta + b\right) > 0} = \begin{cases} 1 & \text{if } 1 - y_i\left(K[i,:]\beta + b\right) > 0 \\ 0 & \text{otherwise} \end{cases}$$
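Since the max term is itself zero whenever the indicator is zero, the indicator is absorbed by the max. Writing h_i = max[1 - y_i(K[i,:]beta + b), 0] and using the fact that the training kernel matrix K is symmetric, the sums above collapse to the vectorized form below (a restatement of the formulas, convenient for a NumPy implementation, not an extra formula from the assignment):

$$\frac{\partial \mathcal{L}}{\partial \beta} = 2K\beta - 2C\,K\,(h \odot y), \qquad \frac{\partial \mathcal{L}}{\partial b} = -2C\sum_{i=1}^{n} h_i\, y_i$$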

def grad(beta, b, xTr, yTr, C, kerneltype, kpar=1):
    """
    Calculates the gradients of the loss function with respect to beta and b.

    Input:
        beta      : n-dimensional vector that stores the linear combination coefficients
        b         : bias term, a scalar
        xTr       : nxd dimensional data matrix (training set, each row is an input vector)
        yTr       : n-dimensional vector (training labels, each entry is a label)
        C         : scalar (constant that controls the tradeoff between l2-regularizer and hinge-loss)
        kerneltype: either of ['linear', 'polynomial', 'rbf']
        kpar      : kernel parameter (inverse sigma^2 in case of 'rbf', degree p in case of 'polynomial')

    Output:
        beta_grad, bgrad
        beta_grad : n-dimensional vector (the gradient of loss with respect to beta)
        bgrad     : scalar (the gradient of loss with respect to the bias, b)
    """
    n, d = xTr.shape
    beta_grad = np.zeros(n)
    bgrad = np.zeros(1)

    # compute the kernel values between xTr and xTr
    kernel_train = computeK(kerneltype, xTr, xTr, kpar)

    # YOUR CODE HERE
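A minimal vectorized sketch of the missing body, assuming computeK returns the symmetric n x n training kernel matrix K and the objective matches the formulas above; treat it as one possible completion of the # YOUR CODE HERE block, not the official solution:

    # y_i * (K[i,:] @ beta + b) for every training point
    margins = yTr * (kernel_train @ beta + b)
    # hinge term max(1 - margin, 0); it is zero exactly when the indicator
    # is zero, so multiplying by it applies the indicator automatically
    hinge = np.maximum(1 - margins, 0)
    # d/dbeta of beta^T K beta is 2 K beta; the squared-hinge term adds
    # -2C K (hinge * y) because K is symmetric
    beta_grad = 2 * kernel_train @ beta - 2 * C * kernel_train @ (hinge * yTr)
    # d/db of the squared-hinge term
    bgrad = -2 * C * np.sum(hinge * yTr)
    return beta_grad, bgrad

Note that np.sum returns a NumPy scalar here, which satisfies the np.isscalar check in grad_test2 below.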


** TEST CASES BELOW:

# These tests test whether your grad() is implemented correctly

xTr_test, yTr_test = generate_data()
n, d = xTr_test.shape

# Checks whether grad returns a tuple
def grad_test1():
    beta = np.random.rand(n)
    b = np.random.rand(1)
    out = grad(beta, b, xTr_test, yTr_test, 10, 'rbf')
    return len(out) == 2

# Checks the dimension of the gradients
def grad_test2():
    beta = np.random.rand(n)
    b = np.random.rand(1)
    beta_grad, bgrad = grad(beta, b, xTr_test, yTr_test, 10, 'rbf')
    return len(beta_grad) == n and np.isscalar(bgrad)

# Checks the gradient of the l2 regularizer
def grad_test3():
    beta = np.random.rand(n)
    b = np.random.rand(1)
    beta_grad, bgrad = grad(beta, b, xTr_test, yTr_test, 0, 'rbf')
    beta_grad_grader, bgrad_grader = grad_grader(beta, b, xTr_test, yTr_test, 0, 'rbf')
    return (np.linalg.norm(beta_grad - beta_grad_grader) < 1e-5) and \
           (np.linalg.norm(bgrad - bgrad_grader) < 1e-5)

# Checks the gradient of the squared hinge loss
def grad_test4():
    beta = np.zeros(n)
    b = np.random.rand(1)
    beta_grad, bgrad = grad(beta, b, xTr_test, yTr_test, 1, 'rbf')
    beta_grad_grader, bgrad_grader = grad_grader(beta, b, xTr_test, yTr_test, 1, 'rbf')
    return (np.linalg.norm(beta_grad - beta_grad_grader) < 1e-5) and \
           (np.linalg.norm(bgrad - bgrad_grader) < 1e-5)

# Checks the gradient of the full loss
def grad_test5():
    beta = np.random.rand(n)
    b = np.random.rand(1)
    beta_grad, bgrad = grad(beta, b, xTr_test, yTr_test, 10, 'rbf')
    beta_grad_grader, bgrad_grader = grad_grader(beta, b, xTr_test, yTr_test, 10, 'rbf')
    return (np.linalg.norm(beta_grad - beta_grad_grader) < 1e-5) and \
           (np.linalg.norm(bgrad - bgrad_grader) < 1e-5)

runtest(grad_test1, 'grad_test1')
runtest(grad_test2, 'grad_test2')
runtest(grad_test3, 'grad_test3')
runtest(grad_test4, 'grad_test4')
runtest(grad_test5, 'grad_test5')
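Beyond the graders above, a central finite-difference check is a quick way to validate the analytic gradient. The sketch below is hypothetical: it assumes the loss function from the previous part has the signature loss(beta, b, xTr, yTr, xTe, yTe, C, kerneltype, kpar) and returns the scalar training objective when the test set equals the training set; eps and the tolerances are illustrative.

# Hypothetical numerical check (not part of the assignment's tests)
def numerical_grad_check(beta, b, xTr, yTr, C, kerneltype, kpar=1, eps=1e-6):
    # analytic gradients from the implementation under test
    beta_grad, bgrad = grad(beta, b, xTr, yTr, C, kerneltype, kpar)

    # central differences for each coordinate of beta
    num_beta_grad = np.zeros_like(beta)
    for i in range(len(beta)):
        e = np.zeros_like(beta)
        e[i] = eps
        num_beta_grad[i] = (loss(beta + e, b, xTr, yTr, xTr, yTr, C, kerneltype, kpar)
                            - loss(beta - e, b, xTr, yTr, xTr, yTr, C, kerneltype, kpar)) / (2 * eps)

    # central difference for the bias
    num_bgrad = (loss(beta, b + eps, xTr, yTr, xTr, yTr, C, kerneltype, kpar)
                 - loss(beta, b - eps, xTr, yTr, xTr, yTr, C, kerneltype, kpar)) / (2 * eps)

    return (np.linalg.norm(beta_grad - num_beta_grad) < 1e-4
            and abs(float(bgrad) - float(num_bgrad)) < 1e-4)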
