Question

PLEASE HELP IN PYTHON.
A perceptron is a binary linear classifier used in supervised learning which helps to classify the given input data. According to perceptron learning, the algorithm automatically learns the optimal weight coefficients. The input features are then multiplied with these weights to determine if a neuron fires or not. The perceptron receives multiple input signals, and if the sum of the input signals exceeds a certain threshold, it either outputs a signal or does not return an output. In the context of supervised learning and classification, the perceptron can then be used to predict the class of a sample.
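The weighted-sum-and-threshold behaviour described above can be sketched in a few lines (a minimal illustration; the weights, bias, and inputs here are made up for the example and are not the ones used in the exercise below):

```python
import numpy as np

def perceptron_predict(w, b, x):
    # Fire (output 1) when the weighted sum of the inputs plus the bias
    # reaches the threshold 0; otherwise do not fire (output 0).
    return 1 if np.dot(w, x) + b >= 0 else 0

# Hypothetical weights and bias, chosen only for illustration:
w = np.array([0.5, 0.5])
b = -0.7

print(perceptron_predict(w, b, np.array([1, 1])))  # 0.5 + 0.5 - 0.7 = 0.3 >= 0 -> 1
print(perceptron_predict(w, b, np.array([1, 0])))  # 0.5 - 0.7 = -0.2 < 0      -> 0
```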
In Python, modify the given code below, which has missing sections, to complete the perceptron.
import numpy as np
import matplotlib.pyplot as plt
import PIL
from PIL import Image
#This Perceptron has to be trained to learn the '& problem': 1 & 1 = 1, 1 & 0 = 0, 0 & 1 = 0, 0 & 0 = 0
#The first step is to define the hardlim function:
def hardlim(x):
    #Hard-limit (step) function: 1 where x >= 0, and 0 otherwise
    return (x >= 0).astype(int)
#Next we have to define our matrix of patterns P:
P = np.array([[1, 0, 0, 1],
              [1, 0, 1, 0],
              [1, 1, 1, 1]])
#NOTICE that each pattern is represented as a column in P. Although our patterns are two-dimensional, each column has three components, since the third component is just the input of the bias (always 1).
#Now let's define our target matrix T:
T=np.array([1,0,0,0])
#The targets are 1 for the first column pattern in P and 0 for the rest.
#Let's set our weights W randomly:
W= np.array([0.041,-0.7,0.075])
#This is a 2-input, 1-output network (i.e. a single neuron with two inputs). The weight vector W of our single neuron, however, has 3 components because the third one represents the bias. I used the numbers [0.041,-0.7,0.075] to initialize, but any random numbers will suffice.
#Let's calculate the output of the net:
hardlim(np.dot(W,P[:,0]))
#0
#This is the output of my single neuron network for the first pattern P[:,0]. However, if we want to test the network with all the patterns at once, we can do it like this:
hardlim(np.dot(W,P))
#array([0,1,0,1])
#...and the error for all the patterns would be:
T-hardlim(np.dot(W,P))
#array([1,-1,0,-1])
#Now let's plot the patterns (red for class 0 and blue for class 1), along with the decision line of the network as it stands so far (untrained):
# Your code here to add the decision line to this plot
plt.plot([0,0,1],[0,1,0],'ro')
plt.plot([1],[1],'bo')
plt.show()
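One way to complete the decision-line placeholder above (a sketch, not necessarily the intended solution): with W = [w1, w2, b], the boundary is the line where w1*x + w2*y + b = 0, i.e. y = -(w1*x + b)/w2:

```python
import numpy as np
import matplotlib.pyplot as plt

W = np.array([0.041, -0.7, 0.075])  # [w1, w2, bias], the initial weights from above

# Boundary: w1*x + w2*y + b = 0  =>  y = -(w1*x + b)/w2  (assumes w2 != 0)
x = np.linspace(-0.5, 1.5, 50)
y = -(W[0] * x + W[2]) / W[1]

plt.plot(x, y, 'k-')                  # decision line
plt.plot([0, 0, 1], [0, 1, 0], 'ro')  # class-0 patterns
plt.plot([1], [1], 'bo')              # class-1 pattern
plt.show()
```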
#NOTICE that the network is not performing well (as the error shows) simply because its decision line has not been adjusted yet (the weights are not fine-tuned).
#Now, let's train the network (fine-tune its weights):
A=hardlim(np.dot(W,P))
E=T-A
while np.any(E != 0):  #NOTE: sum(E) can be 0 while errors of +1 and -1 remain, so test elementwise
    #Your code here to update W
    A = hardlim(np.dot(W, P))
    E = T - A
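The missing update in the loop above can be filled with the classic perceptron learning rule (learning rate 1). In batch form it is W ← W + E·Pᵀ, which adds each pattern to the weights scaled by its error; a self-contained sketch using the same P, T, and initial W:

```python
import numpy as np

def hardlim(x):
    # Hard-limit (step) function: 1 where x >= 0, else 0
    return (x >= 0).astype(int)

P = np.array([[1, 0, 0, 1],
              [1, 0, 1, 0],
              [1, 1, 1, 1]])
T = np.array([1, 0, 0, 0])
W = np.array([0.041, -0.7, 0.075])

A = hardlim(np.dot(W, P))
E = T - A
while np.any(E != 0):
    W = W + np.dot(E, P.T)     # perceptron rule (batch): W <- W + E . P^T
    A = hardlim(np.dot(W, P))
    E = T - A

print(A)  # equals T once the loop exits
```

Since the '& problem' is linearly separable, the perceptron convergence theorem guarantees this loop terminates.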
#After training, let's plot again to see how the decision line was adjusted (weights fine-tuned) so that the network now solves the problem flawlessly:
# Your code here to add the decision line to this plot
plt.plot([0,0,1],[0,1,0],'ro')
plt.plot([1],[1],'bo')
plt.show()
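For completeness, the update can also be done online (per pattern), which is the form in which the perceptron rule is usually stated: w ← w + e·p after each misclassified pattern. A sketch, again assuming learning rate 1 and the same data as above:

```python
import numpy as np

def hardlim(x):
    # Hard-limit (step) function: 1 where x >= 0, else 0
    return (x >= 0).astype(int)

P = np.array([[1, 0, 0, 1],
              [1, 0, 1, 0],
              [1, 1, 1, 1]])
T = np.array([1, 0, 0, 0])
W = np.array([0.041, -0.7, 0.075])

converged = False
while not converged:
    converged = True
    for i in range(P.shape[1]):           # one pass over the four patterns
        e = T[i] - hardlim(np.dot(W, P[:, i]))
        if e != 0:
            W = W + e * P[:, i]           # online rule: w <- w + e * p
            converged = False             # keep looping until a clean pass

print(hardlim(np.dot(W, P)))  # equals T after training
```

Both forms converge here; the batch form updates once per sweep, while the online form reacts to each pattern immediately.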
