Linear SVM

Part One: Loss Function [Graded]

You will need to implement the function loss, which takes in training data xTr ($\in \mathbb{R}^{n \times d}$) and labels yTr ($\in \mathbb{R}^{n}$), with yTr[i] $\in \{-1, +1\}$, and evaluates the squared hinge loss of the classifier $(\mathbf{w}, b)$:

$$\mathcal{L}(\mathbf{w}, b) = \underbrace{\mathbf{w}^\top \mathbf{w}}_{l_2\text{-regularizer}} + C \sum_{i=1}^{n} \underbrace{\max\left[1 - y_i\left(\mathbf{w}^\top \mathbf{x}_i + b\right),\, 0\right]^2}_{\text{squared hinge loss}}$$

Some functions that might be useful for you:

  • np.maximum(a, b): returns the element-wise maximum of a and b
  • arr.clip(min=0): returns a copy of arr with negative entries replaced by 0
  • arr.shape: the tuple (m, n), where m is the number of rows and n is the number of columns

def loss(w, b, xTr, yTr, C):
    """
    INPUT:
    w   : d-dimensional weight vector
    b   : scalar (bias)
    xTr : n x d matrix (each row is an input vector)
    yTr : n-dimensional vector (each entry is a label)
    C   : scalar (controls the tradeoff between the l2-regularizer and the hinge loss)

    OUTPUT:
    loss : the total loss obtained with (w, b) on xTr and yTr (scalar)
    """
    loss_val = 0.0

    # YOUR CODE HERE
    raise NotImplementedError()

    return loss_val
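
For reference, here is a minimal sketch of one possible way to fill in the body, assuming the objective written above ($\mathbf{w}^\top \mathbf{w}$ regularizer plus $C$ times the sum of squared hinge terms); the intermediate names margins and hinge are illustrative and not part of the assignment template:

import numpy as np

def loss(w, b, xTr, yTr, C):
    # Sketch under the assumption that the objective is
    # w^T w + C * sum_i max(1 - y_i (w^T x_i + b), 0)^2.
    # Margins y_i * (w^T x_i + b) for all n training points, vectorized.
    margins = yTr * (xTr @ w + b)
    # Element-wise hinge terms max(1 - margin, 0); the equivalent
    # (1 - margins).clip(min=0) from the hints above also works.
    hinge = np.maximum(1 - margins, 0)
    # l2-regularizer plus C-weighted sum of squared hinge terms (a scalar).
    return w @ w + C * np.sum(hinge ** 2)

Squaring the hinge term makes the loss differentiable at the margin boundary, which is what distinguishes the squared hinge loss from the ordinary hinge loss.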
