Question
Name: ______
Id No.: ______
Branch: Mechanical

Note: Attempt all questions. Q.1–16 carry 1 mark each and Q.17 & … carry … marks each. Give the answer by encircling the correct option (a, b, c or d) or by writing as directed in the question. Multiple answers for the same question, or changing the option by cutting, will be treated as a wrong answer and no marks will be awarded.
S1: Perceptron is a two-layer network and the weights of both layers' neurons are trainable.
S2: The input applied to the first layer lies in image space and its output forms pattern space.
S3: It is necessary to make the input patterns linearly separable before applying them to the second layer.
S4: The solution obtained on convergence by each perceptron is always unique.
True statements out of S1–S4 are: ______
The XOR problem is said to be a benchmark problem because
(a) it can be dealt with by a single-node network
(b) it is a test problem with linearly non-separable patterns
(c) it is a binary classification problem
(d) it is the simplest problem for binary classification
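A minimal sketch of the underlying geometry (Python with NumPy assumed; the function name and epoch cap are illustrative): a fixed-increment perceptron trained on augmented patterns converges on the linearly separable AND targets but keeps misclassifying the XOR targets no matter how long it runs.

    import numpy as np

    def train_perceptron(X, t, eta=1.0, max_epochs=100):
        # Append 1 to each pattern so the bias is learned as an ordinary weight.
        Xa = np.hstack([X, np.ones((len(X), 1))])
        w = np.zeros(Xa.shape[1])
        for _ in range(max_epochs):
            errors = 0
            for x, target in zip(Xa, t):
                y = 1 if w @ x > 0 else 0
                if y != target:
                    w += eta * (target - y) * x   # fixed-increment correction
                    errors += 1
            if errors == 0:
                return True                        # separable: converged
        return False                               # non-separable: never settles

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    print(train_perceptron(X, np.array([0, 0, 0, 1])))  # AND -> True
    print(train_perceptron(X, np.array([0, 1, 1, 0])))  # XOR -> False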
If the capacity of a perceptron to classify patterns is ______, the number of nodes in the first layer of this perceptron is ______.
S1: Perceptron was proposed by Rosenblatt as a two-layer feedforward network of threshold logic units.
S2: Rosenblatt's perceptron evolved as a merger of concepts proposed by McCulloch–Pitts and Hebb.
False statements out of S1–S2 are: ______
While using the fixed increment rule in the perceptron learning algorithm, augmentation by appending 1 to the pattern vector is done to
(a) convert pattern space to image space
(b) convert image space to pattern space
(c) provide the bias input to the nodes
(d) remove the effect of bias from the nodes
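The augmentation itself is a one-liner; a hedged sketch (NumPy assumed, values illustrative) showing that appending 1 makes the bias just another trainable weight while leaving the net input unchanged:

    import numpy as np

    w, b = np.array([0.4, -0.2]), 0.7     # weights and bias (illustrative)
    x = np.array([1.0, 2.0])              # pattern vector

    x_aug = np.append(x, 1.0)             # augmented pattern [x1, x2, 1]
    w_aug = np.append(w, b)               # bias absorbed as the last weight

    assert np.isclose(w @ x + b, w_aug @ x_aug)
    print(w_aug @ x_aug)                  # 0.7, same net input either way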
S1: In the method of steepest descent, weights are adjusted in the direction opposite to the gradient vector.
S2: The convergence of steepest descent is guaranteed for any positive value of the learning rate.
True statements out of S1–S2 are: ______
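An illustrative sketch of the update w ← w − η∇E(w) (Python with NumPy assumed; the quadratic cost E(w) = ½wᵀAw and both rates are arbitrary choices): the iterates shrink toward the minimum for a small rate, but blow up once η exceeds 2/λ_max of A, so a positive learning rate alone does not guarantee convergence.

    import numpy as np

    A = np.diag([1.0, 4.0])              # E(w) = 0.5 * w.T @ A @ w
    grad = lambda w: A @ w               # gradient of E

    def steepest_descent(eta, steps=50):
        w = np.array([1.0, 1.0])
        for _ in range(steps):
            w = w - eta * grad(w)        # step opposite to the gradient
        return np.linalg.norm(w)

    print(steepest_descent(0.10))        # -> near 0: converges
    print(steepest_descent(0.60))        # eta > 2/lambda_max = 0.5: diverges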
S1: The LMS algorithm and the Widrow–Hoff rule are not similar in nature.
S2: As the LMS algorithm depends on an estimate of the gradient vector, it cannot be referred to as a stochastic gradient algorithm.
S3: Though LMS is faster than steepest descent in convergence, it is not robust and model-independent.
True statements out of S1–S3 are: ______
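A minimal LMS (Widrow–Hoff) sketch on synthetic data (NumPy assumed; w_true, the noise level, and the step count are made up for the demo): each update uses the single-sample error e = d − wᵀx as an instantaneous estimate of the gradient, which is the stochastic-gradient character at stake in S2 and the reason the weight trajectory jitters.

    import numpy as np

    rng = np.random.default_rng(0)
    w_true = np.array([2.0, -1.0])            # unknown system to identify
    w, eta = np.zeros(2), 0.05

    for _ in range(2000):
        x = rng.normal(size=2)                # one input sample at a time
        d = w_true @ x + 0.1 * rng.normal()   # noisy desired response
        e = d - w @ x                         # instantaneous error
        w += eta * e * x                      # LMS / Widrow-Hoff update

    print(w)   # hovers near w_true, with residual jitter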
In the steepest descent algorithm the trajectory of the weight vector in weight space is ______, while in the case of the LMS algorithm it is ______.
(a) well-defined, random
(b) random, well-defined
(c) well-defined in both
(d) random in both
S1: The feedback loop around the weight vector in the LMS algorithm behaves like a high-pass filter.
S2: The inverse of the learning-rate parameter is a measure of the memory of the LMS algorithm.
S3: The LMS algorithm operates with a linear neuron.
False statements out of S1–S3 are: ______
In a multilayer perceptron,
S1: if the activation function of all nodes is non-linear, then it will reduce to a single-layer perceptron
S2: hidden neurons enable the network to learn complex tasks by extracting meaningful features from the input patterns
S3: the network exhibits a low degree of connectivity
False statements out of S1–S3 are: ______
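A hand-wired sketch of what S2 means (plain Python; the weights are chosen by hand, not learned): two hidden threshold neurons extract the features OR(x1, x2) and AND(x1, x2), and in that feature space XOR becomes linearly separable for the output neuron.

    def step(v):
        return 1 if v > 0 else 0

    def xor_mlp(x1, x2):
        h1 = step(x1 + x2 - 0.5)      # hidden feature: OR(x1, x2)
        h2 = step(x1 + x2 - 1.5)      # hidden feature: AND(x1, x2)
        return step(h1 - h2 - 0.5)    # fires for OR-but-not-AND, i.e. XOR

    for x1 in (0, 1):
        for x2 in (0, 1):
            print((x1, x2), "->", xor_mlp(x1, x2))   # 0, 1, 1, 0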
In the back-propagation algorithm, the use of the minus sign in the delta rule accounts for
(a) gradient descent in error space
(b) gradient ascent in error space
(c) gradient descent in weight space
(d) gradient ascent in weight space
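A one-weight sketch of the sign's effect (plain Python; the sample values x, d and the rate are illustrative), using E(w) = ½(d − wx)² so that ∂E/∂w = −(d − wx)x: with the minus sign the weight walks downhill in weight space toward the minimising value, while the flipped sign climbs away from it.

    x, d, eta = 1.0, 2.0, 0.1
    w_desc = w_asc = 0.0
    for _ in range(100):
        w_desc -= eta * -(d - w_desc * x) * x   # delta rule: w -= eta * dE/dw
        w_asc  += eta * -(d - w_asc  * x) * x   # sign flipped: gradient ascent

    print(w_desc)   # -> 2.0, the error minimum
    print(w_asc)    # large negative: runs away from the minimum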