
Question

Add code to the following Python program so that it will run. The program trains a neural net using back-propagation to perform the XOR operation. Finish the nnOutput and Train functions so that the code runs. When run, the program should print:

0.0310489
[[ 0.004588 ]
 [ 0.968793 ]
 [ 0.964671 ]
 [ 0.052381 ]]
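For reference, one standard formulation of the forward pass and the back-propagation updates that nnOutput and Train need to implement for a single-hidden-layer sigmoid network is written out below. The notation follows the parameter comments in the skeleton, but the course may use a slightly different convention (for example, with or without averaging the gradient over the batch).

$$ z = \sigma(\tilde{x}\,\alpha), \qquad o = \sigma(\tilde{z}\,\beta), \qquad \sigma(s) = \frac{1}{1 + e^{-s}} $$

$$ \delta_o = (y - o)\, o\, (1 - o), \qquad \delta_h = \big(\delta_o\, \beta_{1:}^{\top}\big)\, z\, (1 - z) $$

$$ \Delta\beta \leftarrow \rho_o\, \tilde{z}^{\top} \delta_o + m\, \Delta\beta, \qquad \Delta\alpha \leftarrow \rho_h\, \tilde{x}^{\top} \delta_h + m\, \Delta\alpha $$

Here $\tilde{x}$ and $\tilde{z}$ are the input and hidden-layer vectors with a leading 1 for the bias weights, $\beta_{1:}$ is $\beta$ without its bias row, $\rho_h$ and $\rho_o$ are the learning rates for $\alpha$ and $\beta$, and $m$ is the momentum coefficient. These are the usual squared-error deltas for sigmoid units.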

import numpy as np
import pylab as pl

## you may need to add some other functions here

def nnOutput(alpha, beta, X):
    ## add your code here
    return output;

################################################
## X is the input samples
## Y is the desired outputs
## nHiddens is the number of hidden units
## rhoh is the coefficient for alpha
## rhoo is the coefficient for beta
## mom is the momentum value
## wmax is the range of the weights
## nEpochs is the number of iterations
## alpha is the weights in the hidden layer
## beta is the weights in the output layer
################################################
def Train(X, Y, nHiddens, rhoh, rhoo, mom, wmax, nEpochs):
    nSamples = np.shape(X)[0];
    nInputs = np.shape(X)[1];
    nOutputs = np.shape(Y)[1];
    alpha = np.random.uniform(-wmax, wmax, (1+nInputs,nHiddens));
    beta = np.random.uniform(-wmax, wmax, (1+nHiddens,nOutputs));
    ## add your code here
    return alpha, beta;

def makeIndicatorVariables(labels, datasize, numClasses, class_labels):
    indicator = np.zeros((datasize, numClasses));
    ## add your code here
    return indicator;

def ComputeAccuracy(output, labels):
    TP = 0;
    FP = 0;
    for i in range(0,np.shape(output)[0]):
        predict = np.argmax(output[i,:]);
        if (predict == labels[i]):
            TP = TP + 1;
        else:
            FP = FP + 1;
    Acc = 100 * TP / float(TP + FP);
    print(Acc);

def DisplayDigit(sample, label):
    sample = sample.reshape(28,28)
    [fig, axs] = pl.subplots(1,1)
    axs.imshow(sample, cmap='Greys_r')
    print('The label is %d '%(label))

def RunDigit():
    dataset = ReadDataSet()
    dataset.Read('mnist.pkl')
    [train_data, train_labels] = dataset.getTrainData()
    ## display one sample and its label
    DisplayDigit(train_data[10], train_labels[10])
    datasize = train_data.shape[0];
    class_labels = sorted(np.unique(train_labels));
    numClasses = len(class_labels);
    indicator = makeIndicatorVariables(train_labels, datasize, numClasses, class_labels);
    print('Training NN ...');
    [alpha, beta] = Train(np.matrix(train_data), np.matrix(indicator), 500, 0.1, 0.001, 0.5, 0.0001, 1000);
    print('Testing NN ...');
    test_out = nnOutput(alpha, beta, train_data);
    ComputeAccuracy(test_out, train_labels);
    [test_data, test_labels] = dataset.getTestData()
    test_out = nnOutput(alpha, beta, test_data);
    ComputeAccuracy(test_out, test_labels);

if __name__ == '__main__':
    RunDigit();
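A minimal sketch of one possible completion is below; it implements the updates written out above, and the bodies replace the "## add your code here" stubs. The helper names addOnes, sigmoid and RunXor, and the XOR hyperparameter values, are assumptions that do not appear in the original skeleton. The exact printed digits (0.0310489 and so on) depend on the random weight initialization and on the hyperparameters the original author used, so this sketch reproduces the behaviour but not necessarily those exact numbers.

def addOnes(X):
    ## prepend a column of ones so that row 0 of alpha/beta acts as the bias weights
    X = np.asarray(X)
    return np.hstack((np.ones((X.shape[0], 1)), X))

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def nnOutput(alpha, beta, X):
    Z = sigmoid(np.dot(addOnes(X), alpha))       ## hidden-layer activations
    output = sigmoid(np.dot(addOnes(Z), beta))   ## output-layer activations
    return output

def Train(X, Y, nHiddens, rhoh, rhoo, mom, wmax, nEpochs):
    X = np.asarray(X)
    Y = np.asarray(Y)
    nSamples = np.shape(X)[0]
    nInputs = np.shape(X)[1]
    nOutputs = np.shape(Y)[1]
    alpha = np.random.uniform(-wmax, wmax, (1 + nInputs, nHiddens))
    beta = np.random.uniform(-wmax, wmax, (1 + nHiddens, nOutputs))
    dAlpha = np.zeros(alpha.shape)   ## previous updates, kept for the momentum term
    dBeta = np.zeros(beta.shape)
    X1 = addOnes(X)
    for epoch in range(nEpochs):
        ## forward pass
        Z = sigmoid(np.dot(X1, alpha))
        Z1 = addOnes(Z)
        out = sigmoid(np.dot(Z1, beta))
        ## backward pass: squared-error deltas for sigmoid units
        errOut = (Y - out) * out * (1.0 - out)
        errHid = np.dot(errOut, beta[1:, :].T) * Z * (1.0 - Z)
        ## gradient step (averaged over the batch) with momentum
        dBeta = rhoo * np.dot(Z1.T, errOut) / nSamples + mom * dBeta
        dAlpha = rhoh * np.dot(X1.T, errHid) / nSamples + mom * dAlpha
        beta = beta + dBeta
        alpha = alpha + dAlpha
    return alpha, beta

def makeIndicatorVariables(labels, datasize, numClasses, class_labels):
    indicator = np.zeros((datasize, numClasses))
    class_labels = list(class_labels)
    for i in range(datasize):
        ## one-hot row: a 1 in the column of the sample's class label
        indicator[i, class_labels.index(labels[i])] = 1
    return indicator

def RunXor():
    ## XOR truth table: four 2-bit patterns and their targets
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    Y = np.array([[0], [1], [1], [0]])
    ## hyperparameter values are assumptions; the question does not state them
    alpha, beta = Train(X, Y, 5, 1.0, 1.0, 0.9, 0.5, 10000)
    out = nnOutput(alpha, beta, X)
    print(np.mean(np.square(Y - out)))   ## final mean squared error
    print(out)                           ## outputs for the four XOR patterns

With this completion, RunDigit uses Train and nnOutput unchanged; makeIndicatorVariables simply one-hot encodes the digit labels so that the network has one output unit per class, and calling RunXor() instead of RunDigit() in the main block exercises the XOR case described in the question.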
