Question

Please solve this using Python. Please show the code and explain it.

Question 1. Perceptron Code Modification

The following code is the perceptron implementation from the textbook (with only three lines inserted).

In [5]:

import numpy as np

class Perceptron(object):
    """Perceptron classifier.

    Parameters
    ----------
    eta : float
        Learning rate (between 0.0 and 1.0)
    n_iter : int
        Passes over the training dataset.
    random_state : int
        Random number generator seed for random weight initialization.

    Attributes
    ----------
    w_ : 1d-array
        Weights after fitting.
    errors_ : list
        Number of misclassifications (updates) in each epoch.
    """

    def __init__(self, eta=0.01, n_iter=50, random_state=1):
        self.eta = eta
        self.n_iter = n_iter
        self.random_state = random_state

    def fit(self, X, y):
        """Fit training data.

        Parameters
        ----------
        X : {array-like}, shape = [n_examples, n_features]
            Training vectors, where n_examples is the number of examples
            and n_features is the number of features.
        y : array-like, shape = [n_examples]
            Target values.

        Returns
        -------
        self : object
        """
        rgen = np.random.RandomState(self.random_state)
        self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
        self.errors_ = []

        for _ in range(self.n_iter):
            errors = 0
            for xi, target in zip(X, y):
                update = self.eta * (target - self.predict(xi))
                self.w_[1:] += update * xi
                self.w_[0] += update
                errors += int(update != 0.0)
            self.errors_.append(errors)
        # my do-nothing code
        IK = 2020
        # my do-nothing code
        return self

    def net_input(self, X):
        """Calculate net input"""
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def predict(self, X):
        """Return class label after unit step"""
        return np.where(self.net_input(X) >= 0.0, 1, -1)

Work on the above cell and modify the code so that:

(i) The fit function stops when no more iterations are necessary.
(ii) The trained perceptron contains not only its weights, but also the number of iterations it took for training.
(iii) The perceptron maintains a history of its weights, i.e. the set of weights after each point is processed (optional, but you can use this to verify your manual calculations).

To modify the code, please insert your code with clear comments surrounding it, similarly to "my do-nothing code". Make sure you evaluate the cell again, so that the following cells will be using the modified perceptron.
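One possible way the requested modification could look is sketched below; it is a minimal sketch that assumes the textbook class above as the starting point, not a definitive answer. The inserted pieces are wrapped in comments in the style of "my do-nothing code": an early exit for (i), a count of the epochs actually run for (ii), and a list of weight snapshots for (iii). The attribute names n_iter_ and w_history_ are illustrative choices, not names required by the assignment.

import numpy as np

class Perceptron(object):
    """Perceptron classifier with early stopping, iteration count and weight history."""

    def __init__(self, eta=0.01, n_iter=50, random_state=1):
        self.eta = eta
        self.n_iter = n_iter
        self.random_state = random_state

    def fit(self, X, y):
        """Fit training data, stopping once a full pass makes no updates."""
        rgen = np.random.RandomState(self.random_state)
        self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
        self.errors_ = []
        # my added code: (ii) epochs actually run, (iii) history starting with the initial weights
        self.n_iter_ = 0
        self.w_history_ = [self.w_.copy()]
        # my added code
        for _ in range(self.n_iter):
            errors = 0
            for xi, target in zip(X, y):
                update = self.eta * (target - self.predict(xi))
                self.w_[1:] += update * xi
                self.w_[0] += update
                errors += int(update != 0.0)
                # my added code: (iii) snapshot of the weights after every training example
                self.w_history_.append(self.w_.copy())
                # my added code
            self.errors_.append(errors)
            # my added code: (ii) this epoch is complete
            self.n_iter_ += 1
            # my added code
            # my added code: (i) stop as soon as a full pass needs no updates
            if errors == 0:
                break
            # my added code
        return self

    def net_input(self, X):
        """Calculate net input"""
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def predict(self, X):
        """Return class label after unit step"""
        return np.where(self.net_input(X) >= 0.0, 1, -1)

A quick check on a small, linearly separable toy problem (an AND-style set, used here purely for illustration) shows the new attributes:

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, -1, -1, 1])
ppn = Perceptron(eta=0.1, n_iter=50).fit(X, y)
print(ppn.n_iter_)          # epochs actually needed, at most n_iter
print(len(ppn.w_history_))  # initial weights plus one snapshot per example processed

Recording a snapshot after every example, rather than only after non-zero updates, keeps the history aligned with a manual trace of the algorithm at the cost of a few redundant copies.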
