
Question

1. [10 points] Consider a single-input neuron with a bias. We would like the output to be 0 for inputs less than 3 and 1 for inputs greater than or equal to 3.
a. What kind of transfer function is required?
b. What weight and bias would you suggest? Is your bias in any way related to the weight? If yes, how?
c. Summarize your network by naming the transfer function and stating the bias and the weight. Draw a diagram of the network.
d. Write a MATLAB function that implements your transfer function (or use the appropriate transfer function from the NND code). Then write a neuron function that uses that transfer function to implement your neuron design. Make sure you follow the convention of putting each MATLAB function in a .m file of the same name. Verify using MATLAB that it performs as expected. Include both functions and demonstration output in your document.
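A minimal sketch of one possible answer for parts (a) through (d): a hard-limit (hardlim) transfer function fits the step behavior, with weight w = 1 and bias b = -3. The bias is related to the weight by b = -w * threshold (here, -1 * 3), so the decision boundary sits at the input value 3. The file names `hardlim.m` and `neuron.m` below are illustrative choices following the one-function-per-file convention; this is a sketch, not the assignment's official solution.

```matlab
% hardlim.m -- hard-limit transfer function: returns 1 if n >= 0, else 0
function a = hardlim(n)
    a = double(n >= 0);
end
```

```matlab
% neuron.m -- single-input neuron: weight w, bias b, hardlim transfer
function a = neuron(p, w, b)
    a = hardlim(w * p + b);
end
```

A quick demonstration at the MATLAB prompt, with w = 1 and b = -3:

```matlab
neuron(2, 1, -3)   % input below 3:  returns 0
neuron(3, 1, -3)   % input at 3:     returns 1
neuron(5, 1, -3)   % input above 3:  returns 1
```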
2. [10 points] Modify your single-input neuron function so that it takes multiple inputs and computes the (scalar) output for that neuron. This function should take three arguments: an input vector, a weight vector, and a bias (a scalar). This function should call the same transfer function as the single-input neuron function from question 1. Implement this multi-input neuron function two ways:
a. Use a for loop to iterate over the inputs and weights to compute their inner product.
b. Use the fact that MATLAB can directly multiply vectors, plus the MATLAB transpose (') operator, to compute the inner product in a single mathematical expression.
Note that you can get the size of a MATLAB vector using the length() function; this should allow you to write a function that can determine how many inputs are presented to the neuron.
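The two variants might be sketched as follows, assuming the `hardlim.m` file from question 1 is on the path and that the input and weight vectors are both row vectors. The function names `neuron_loop` and `neuron_vec` are illustrative, not prescribed by the assignment.

```matlab
% neuron_loop.m -- multi-input neuron: inner product via a for loop
function a = neuron_loop(p, w, b)
    n = b;
    for i = 1:length(p)          % length() gives the number of inputs
        n = n + w(i) * p(i);
    end
    a = hardlim(n);
end
```

```matlab
% neuron_vec.m -- multi-input neuron: inner product in one expression
function a = neuron_vec(p, w, b)
    % w (1 x n) times p' (n x 1) yields the scalar net input
    a = hardlim(w * p' + b);
end
```

Both functions should agree on any input, e.g. `neuron_loop([1 0], [1 1], -1.5)` and `neuron_vec([1 0], [1 1], -1.5)` each return 0.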
3. [50 points] Implement a PerceptronLayer class that implements a single layer of multiple neurons of the type from question 2. The class should have the following characteristics:
The constructor takes three arguments and can be called one of two ways, depending on the first two arguments: with two scalar arguments (the number of inputs and the number of outputs) or with a weight matrix and bias vector. (You can check the dimensionality of the first two arguments in the constructor using the size() function to determine which is the case.) In the former case, it should pre-allocate a weight matrix and a bias vector of correct dimensions with random values uniformly distributed in the range [-1,+1]. In the latter case, it should set the internal weight matrix and bias vector from those arguments (and, of course, the number of inputs and outputs for the layer are determined by the dimensions of the weight matrix). The third argument to the constructor should be a string: the name of the transfer function to use. A PerceptronLayer object should remember this, so that the forward() method will use it.
Private transfer function methods. At a minimum, implement hardlim() and hardlims(). (You can always add other transfer functions later.)
A forward() method should take an input vector and return an output vector. This implements the operation that computes the output of a layer of (multiple) neurons. This method should call the private transfer method that was indicated at construction time. Implement this method two ways (i.e., implement it twice; first one way and then the other):
a. Use a for loop to iterate over the neurons in the layer, computing each neuron's output and appending its output to the output vector (thus assembling the layer's output vector sequentially).
b. Use MATLAB matrix/vector mathematical operations to compute the outputs of all of the neurons in the layer in parallel, using no loop. (Likely, you will want to start with ensuring that your transfer function works for vector-valued inputs.)
Use MATLAB to demonstrate that each of the two implementations above works for a two-input, two-output network in which both the inputs and the outputs are binary (0 or 1). For each of the four possible binary input patterns, one of the layer's outputs should be the logical AND of the input values and the other should be the logical OR. Note that you are not yet implementing a learning algorithm; a little trial and error should allow you to set reasonable weight and bias values. Include in your document your final code (in an appendix, with the vectorized forward method) and example runs that show the weights and biases and demonstrate that your functions work.
Note that you will be building future implementations from this starting point, so spend some time documenting and designing it for future expansion. Note further that the above implementation uses transfer functions that are methods of the PerceptronLayer class, with a string used to select which one the object uses. An acceptable alternative implementation would be to write the transfer functions outside the class as standalone functions and select the one to use by name.
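One way the class skeleton might look, as a sketch under the stated assumptions: only the vectorized forward() is shown (the assignment also asks for a loop version), and the AND/OR weights in the usage note below are one working choice found by inspection, not the only one.

```matlab
% PerceptronLayer.m -- a single layer of perceptron neurons
classdef PerceptronLayer
    properties
        W    % weight matrix (outputs x inputs)
        b    % bias column vector (outputs x 1)
        tf   % name of the transfer function: 'hardlim' or 'hardlims'
    end
    methods
        function obj = PerceptronLayer(arg1, arg2, tfName)
            if isscalar(arg1) && isscalar(arg2)
                % arg1 = number of inputs, arg2 = number of outputs:
                % pre-allocate with random values uniform in [-1, +1]
                obj.W = 2 * rand(arg2, arg1) - 1;
                obj.b = 2 * rand(arg2, 1) - 1;
            else
                % arg1 = weight matrix, arg2 = bias vector
                obj.W = arg1;
                obj.b = arg2;
            end
            obj.tf = tfName;   % remembered for use by forward()
        end
        function a = forward(obj, p)
            % vectorized: compute all neurons' net inputs at once
            n = obj.W * p + obj.b;
            switch obj.tf
                case 'hardlim',  a = obj.hardlim(n);
                case 'hardlims', a = obj.hardlims(n);
            end
        end
    end
    methods (Access = private)
        function a = hardlim(~, n)
            a = double(n >= 0);          % works elementwise on vectors
        end
        function a = hardlims(~, n)
            a = 2 * double(n >= 0) - 1;  % symmetric: -1 or +1
        end
    end
end
```

For the AND/OR demonstration, one set of weights that works is W = [1 1; 1 1] with b = [-1.5; -0.5]: the first neuron fires only when both inputs are 1 (AND), the second when at least one input is 1 (OR). For example, `PerceptronLayer([1 1; 1 1], [-1.5; -0.5], 'hardlim').forward([1; 0])` returns [0; 1].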

