Question
Please solve this using Python. Please show the code and explain!
Question 2: Experimenting with hyperparameters

In [6]:

ppn = Perceptron(eta=0.0001, n_iter=20, random_state=1)
ppn.fit(X, y)

plt.plot(range(1, len(ppn.errors_) + 1), ppn.errors_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Number of updates')
plt.show()

[Output: a plot of the number of updates per epoch, with 'Epochs' (1 to 20) on the x-axis and 'Number of updates' on the y-axis.]

Running the above code, you can verify that your modification from Question 1 works correctly. The point of this question is to experiment with the different hyperparameters. Here are some specific questions:

(i) Find the largest value of η for which the process takes more than 20 iterations to converge. Explain how you found that η.
(ii) Are you able to find an η > 1 for which the process fails to converge in fewer than 30 iterations?
(iii) Find two different settings of the random state that give different convergence patterns, for the same η.

Please give your answers in the cell below.

The following code is the perceptron implementation from the textbook (with only three lines inserted).

In [5]:

import numpy as np

class Perceptron(object):
    """Perceptron classifier.

    Parameters
    ----------
    eta : float
        Learning rate (between 0.0 and 1.0).
    n_iter : int
        Passes over the training dataset.
    random_state : int
        Random number generator seed for random weight initialization.

    Attributes
    ----------
    w_ : 1d-array
        Weights after fitting.
    errors_ : list
        Number of misclassifications (updates) in each epoch.
    """

    def __init__(self, eta=0.01, n_iter=50, random_state=1):
        self.eta = eta
        self.n_iter = n_iter
        self.random_state = random_state

    def fit(self, X, y):
        """Fit training data.

        Parameters
        ----------
        X : {array-like}, shape = [n_examples, n_features]
            Training vectors, where n_examples is the number of examples
            and n_features is the number of features.
        y : array-like, shape = [n_examples]
            Target values.

        Returns
        -------
        self : object
        """
        rgen = np.random.RandomState(self.random_state)
        self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
        self.errors_ = []
        for _ in range(self.n_iter):
            errors = 0
            for xi, target in zip(X, y):
                update = self.eta * (target - self.predict(xi))
                self.w_[1:] += update * xi
                self.w_[0] += update
                errors += int(update != 0.0)
            self.errors_.append(errors)
            # my do-nothing code
            IK = 2020
            # my do-nothing code
        return self

    def net_input(self, X):
        """Calculate net input."""
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def predict(self, X):
        """Return class label after unit step."""
        return np.where(self.net_input(X) >= 0.0, 1, -1)
Step by Step Solution
There are 3 steps involved:
Step: 1
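First, get the data and define a helper that reports how many epochs the perceptron needs to converge. This is a minimal sketch, assuming (as in the textbook) that X and y are the first 100 Iris examples (setosa vs. versicolor) with sepal length and petal length as the two features, and taking "converged" to mean an epoch with zero updates; epochs_to_converge is a helper name introduced here for illustration.

import numpy as np
import pandas as pd

# Assumption: the textbook's Iris setup -- first 100 rows, labels in
# column 4, sepal length (column 0) and petal length (column 2) as features.
df = pd.read_csv('https://archive.ics.uci.edu/ml/'
                 'machine-learning-databases/iris/iris.data', header=None)
y = np.where(df.iloc[0:100, 4].values == 'Iris-setosa', -1, 1)
X = df.iloc[0:100, [0, 2]].values

def epochs_to_converge(eta, n_iter=50, random_state=1):
    """Return the first epoch with zero updates, or None if the
    perceptron never converges within n_iter epochs."""
    ppn = Perceptron(eta=eta, n_iter=n_iter, random_state=random_state)
    ppn.fit(X, y)
    for epoch, errors in enumerate(ppn.errors_, start=1):
        if errors == 0:
            return epoch
    return None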
Step: 2
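To answer (i) and (ii), sweep η. Intuitively, a very small η makes each update tiny relative to the random initial weights (drawn with scale 0.01), so convergence slows as η shrinks. Below is a sketch of the search, assuming the epoch count grows roughly monotonically as η decreases, so a coarse grid plus bisection can bracket the boundary; the concrete boundary value depends on the data and the seed, so it has to be read off the output rather than taken from here.

# Coarse grid: how many epochs does each learning rate need?
for eta in [1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001]:
    print(eta, epochs_to_converge(eta, n_iter=50))

# Refine with bisection between a "slow" eta (> 20 epochs) and a "fast" one.
# Assumption: the bracket below comes from inspecting the grid output above.
lo, hi = 1e-5, 1e-2
for _ in range(30):
    mid = (lo + hi) / 2
    e = epochs_to_converge(mid, n_iter=50)
    if e is None or e > 20:
        lo = mid   # still needs more than 20 epochs: boundary is above mid
    else:
        hi = mid   # converges within 20 epochs: boundary is below mid
print('largest eta needing more than 20 epochs: about', lo)

# (ii): try learning rates greater than 1 with n_iter=30 and check
# whether any value prints None (no convergence within 30 epochs).
for eta in [1.5, 2.0, 5.0, 10.0]:
    print(eta, epochs_to_converge(eta, n_iter=30))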
Step: 3
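For (iii), hold η fixed and vary only random_state, then overlay the per-epoch update counts; any two seeds whose curves differ visibly answer the question. A sketch, assuming the same η = 0.0001 as in the question's In [6] cell; the seeds 1 and 2 are placeholders to try, not guaranteed to differ.

import matplotlib.pyplot as plt

eta = 0.0001  # assumption: reuse the eta from the In [6] cell
for seed in (1, 2):  # try other seeds if these two curves look alike
    ppn = Perceptron(eta=eta, n_iter=20, random_state=seed)
    ppn.fit(X, y)
    plt.plot(range(1, len(ppn.errors_) + 1), ppn.errors_,
             marker='o', label='random_state=%d' % seed)
plt.xlabel('Epochs')
plt.ylabel('Number of updates')
plt.legend()
plt.show()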