
Question


Name
Id No.
Branch
Mechanical
Note: Attempt all questions. Q.1-16 carry 1 mark each and Q.17 & Q.18 carry 2 marks each. Give the answer by encircling the correct option (a), (b), (c) or (d), or by writing as directed in the question. Multiple answers for the same question, or changing the option by cutting, will be treated as a wrong answer and no marks will be awarded.
1 S1: The perceptron is a two-layer network and the weights of both layers' neurons are trainable.
S2: The input applied to the first layer lies in image space and its output forms pattern space.
S3: It is necessary to make the input patterns linearly separable before applying them to the second layer.
S4: The solution obtained on convergence by each perceptron is always unique.
True Statements (out of S1, S2, S3, S4) are ______.
2 The XOR problem is said to be a benchmark problem because
(a) it can be dealt with by a single-node network
(b) it is a test problem with linearly non-separable patterns
(c) it is a binary classification problem
(d) it is the simplest problem for binary classification
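
To see option (b) concretely, here is a minimal sketch (plain Python, with a brute-force search and names of my own choosing, not from the paper): a single threshold unit can realize AND, but no choice of weights realizes XOR, because its patterns are not linearly separable.

```python
import itertools

def separable(patterns):
    """Brute-force search for weights (w1, w2, b) of a single threshold
    unit y = step(w1*x1 + w2*x2 + b) that classifies all patterns."""
    grid = [v / 2 for v in range(-8, 9)]  # coarse weight grid, -4.0 .. 4.0
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all((w1 * x1 + w2 * x2 + b > 0) == bool(t)
               for (x1, x2), t in patterns):
            return (w1, w2, b)
    return None  # no separating line exists on this grid

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print("AND separable by:", separable(AND))  # finds a weight triple
print("XOR separable by:", separable(XOR))  # None: linearly non-separable
```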
3 If the capacity of a perceptron to classify patterns is 8, the number of nodes in the first layer of this perceptron is ______.
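
For Q.3, one common convention (assumed here; it follows Cover's function-counting argument) is that the statistical capacity of a threshold unit equals twice the number of its adjustable weights:

```latex
C = 2N \quad\Longrightarrow\quad N = \frac{C}{2} = \frac{8}{2} = 4
```

Here $N$ counts the first-layer nodes feeding the output unit; if the course counts the bias separately the count shifts by one, so treat this as a sketch of one convention rather than the official key.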
4 S1: Perceptron was proposed by Rosenblatt as a two-layer feed-forward network of threshold logic units.
S2: Rosenblatt's perceptron evolved as merger of concepts proposed by McCulloch-Pitts and Hebb
False Statements (out of S1, S2) are ______.
5 While using the fixed increment rule in the perceptron learning algorithm, augmentation by appending +1 to the pattern vector is to
(a) Convert pattern space to image space
(b) Convert image space to pattern space
(c) Provide the bias input to nodes
(d) Remove the effect of bias from nodes
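
As a sketch of why the augmentation in Q.5 matters (illustrative Python, assuming bipolar ±1 targets and a unit increment): appending +1 to each pattern turns the bias into an ordinary trainable weight, which is what option (c) describes.

```python
def fixed_increment_perceptron(patterns, targets, epochs=100):
    """Fixed-increment perceptron learning on augmented patterns.
    patterns: input tuples; targets: bipolar +1 / -1 class labels."""
    aug = [list(x) + [1.0] for x in patterns]  # append +1: bias input
    w = [0.0] * len(aug[0])
    for _ in range(epochs):
        errors = 0
        for x, t in zip(aug, targets):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
            if y != t:                      # misclassified: add t * x
                w = [wi + t * xi for wi, xi in zip(w, x)]
                errors += 1
        if errors == 0:                     # converged (separable data)
            break
    return w

# AND gate with bipolar targets: linearly separable, so it converges.
w = fixed_increment_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)],
                               [-1, -1, -1, 1])
print("weights (last entry is the bias):", w)
```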
6 S1: In method of steepest descent, weights are adjusted in the direction opposite to gradient vector.
S2: The convergence of steepest descent is guaranteed for any positive value of learning rate.
True Statements (out of S1, S2) are ______.
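
A tiny sketch of both statements in Q.6 on the toy cost E(w) = w² (an illustrative choice): the update moves opposite the gradient, as S1 says, but convergence holds only for a sufficiently small learning rate, so S2 is false; a positive rate can still diverge.

```python
def steepest_descent(eta, steps=20, w=1.0):
    """Minimize E(w) = w**2; the gradient is dE/dw = 2*w."""
    for _ in range(steps):
        w = w - eta * (2 * w)  # step opposite the gradient direction
    return w

print(steepest_descent(eta=0.1))  # shrinks toward the minimum at w = 0
print(steepest_descent(eta=1.5))  # positive rate, yet |w| blows up
```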
7 S1: The LMS algorithm and Widrow-Hoff Rule are not similar in nature.
S2: As the LMS algorithm depends on an estimate of the gradient vector, it cannot be referred to as a stochastic gradient algorithm.
S3: Though LMS is faster than steepest descent in convergence, it is not robust and model-independent.
True Statements (out of S1, S2, S3) are ______.
8 In the steepest descent algorithm the trajectory of the weight vector in weight space is ______, while in the case of the LMS algorithm it is ______.
(a) Well-defined, random
(b) Random, well-defined
(c) Well-defined in both
(d) Random in both
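
A sketch contrasting the two trajectories of Q.8 on a one-weight linear neuron with synthetic noisy data (illustrative setup): steepest descent uses the exact gradient over all samples, so its path is deterministic and well-defined, while LMS steps from one sample at a time and its path is a random walk around that trajectory, which is option (a).

```python
import random

random.seed(0)
# Synthetic data for a linear neuron y = w * x with true w = 2 plus noise.
data = [(x, 2 * x + random.gauss(0, 0.3))
        for x in [random.uniform(-1, 1) for _ in range(50)]]
eta = 0.1

# Steepest descent: exact gradient of the mean squared error.
w_sd = 0.0
for _ in range(100):
    grad = sum(-2 * x * (d - w_sd * x) for x, d in data) / len(data)
    w_sd -= eta * grad              # deterministic, well-defined path

# LMS (Widrow-Hoff): instantaneous gradient from one sample at a time.
w_lms = 0.0
for _ in range(2):                  # two passes over the data
    for x, d in data:
        e = d - w_lms * x           # error on this single sample
        w_lms += eta * e * x        # stochastic step: path is random

print("steepest descent w:", round(w_sd, 3))
print("LMS w:", round(w_lms, 3))
```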
9 S1: The feedback loop around the weight vector in LMS algorithm behaves like a high-pass filter.
S2: The inverse of the learning-rate parameter is a measure of the memory of the LMS algorithm.
S3: The LMS algorithm operates with a linear neuron.
False Statements (out of S1, S2, S3) are ______.
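
A toy sketch bearing on S1 and S2 (one-weight setup, illustrative values): the LMS feedback loop acts as an exponential averager (a low-pass, not high-pass, action), and the smaller the learning rate, the more samples it effectively remembers, so memory scales with 1/η.

```python
import random

random.seed(1)

def lms_track(eta, n=500):
    """One-weight LMS tracking a constant desired signal through noise."""
    w = 0.0
    for _ in range(n):
        x, d = 1.0, 1.0 + random.gauss(0, 0.5)  # noisy desired output
        w += eta * (d - w * x) * x              # Widrow-Hoff update
    return w

print(round(lms_track(0.01), 3))  # long memory (1/eta large): near 1.0
print(round(lms_track(0.5), 3))   # short memory: noisy estimate
```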
10 In a multilayer perceptron
S1: If the activation function of all nodes is nonlinear, then it will reduce to a single-layer perceptron
S2: Hidden neurons enable the network to learn complex tasks by extracting meaningful features from input patterns
S3: The network exhibits a low degree of connectivity
False Statements (out of S1, S2, S3) are ______.
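
A sketch of S2 in action (hand-picked weights, illustrative only): two hidden threshold units extract OR and AND features of the inputs, from which the output unit computes XOR, exactly the task from Q.2 that a single-layer perceptron cannot solve.

```python
def step(s):
    return 1 if s > 0 else 0

def xor_mlp(x1, x2):
    """Hand-wired 2-2-1 threshold network computing XOR."""
    h1 = step(x1 + x2 - 0.5)    # hidden feature: OR(x1, x2)
    h2 = step(x1 + x2 - 1.5)    # hidden feature: AND(x1, x2)
    return step(h1 - h2 - 0.5)  # OR and not AND  ==  XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))
```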
11 In the Back-Propagation algorithm, the use of the minus sign in the delta rule accounts for
(a) Gradient descent in error space
(b) Gradient ascent in error space
(c) Gradient descent in weight space
(d) Gradient ascent in weight space
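
For reference, the delta rule in its standard textbook form is

```latex
\Delta w_{ji} = -\,\eta \,\frac{\partial E}{\partial w_{ji}}
```

The minus sign moves each weight against the gradient of the error taken with respect to that weight, i.e. gradient descent in weight space.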
