Question
Please solve using Python. Please show the code and explain!
Question 5: Questions on Perceptrons

Two theoretical questions.

(i) Suppose we invoke the following:

In [1]:
# ppn = Perceptron(eta=0.1, n_iter=10, random_state=1)
# ppn.fit(x, y)

The training labels y are -1 and 1 on this data set. Suppose, however, that we change the training labels to 0 and 1 respectively, and that the new labels are in a vector y1. Fill out the missing arguments so that the perceptron fit happens in an identical way to the above two lines.

In [2]:
# ppn = Perceptron(eta= , n_iter= , random_state= )
# ppn.fit(x, y1)

(ii) Suppose we have a 2-dimensional data set. Then we transform each data point x^(i) = (x_1^(i), x_2^(i)) as follows: x~^(i) = (a*x_1^(i) - c, b*x_2^(i) - c), where a, b, c are constant numbers. If our given data set is linearly separable, is the same true for the transformed one? In the following cells you can plot a transformed version of the Iris dataset, so that you can see how it behaves (for your choice of a, b, c). But you should also try to justify your answer in a theoretical way: if there exists a 'good' perceptron for the original data set, what would be the weights for the perceptron that works on the transformed set?
Step by Step Solution
There are 3 steps involved:
Step 1: Part (i), matching the learning rate
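Part (i) asks which arguments make the fit identical after relabeling. The perceptron weight update is delta_w = eta * (y - y_hat) * x (and likewise for the bias). With labels in {-1, 1}, every misclassification contributes y - y_hat = +/-2; with labels in {0, 1} (and predictions drawn from the same set), it contributes +/-1, exactly half as much. Doubling the learning rate therefore reproduces the identical sequence of weight updates, so the blanks are eta=0.2, n_iter=10, random_state=1. The cell below is a sketch that checks this numerically; the minimal Perceptron class and the toy blobs are assumptions (the course's own class is not reproduced here), and the key assumption is that predict returns labels from the same set the model was trained on.

In [ ]:
import numpy as np

class Perceptron:
    # Minimal Rosenblatt-style perceptron (an assumption -- the course's
    # own Perceptron class is not reproduced here). neg_label is the value
    # used for the negative class, so the same code can train on
    # {-1, 1} or {0, 1} targets.
    def __init__(self, eta=0.01, n_iter=10, random_state=1, neg_label=-1):
        self.eta, self.n_iter = eta, n_iter
        self.random_state, self.neg_label = random_state, neg_label

    def fit(self, X, y):
        rgen = np.random.RandomState(self.random_state)
        self.w_ = rgen.normal(loc=0.0, scale=0.01, size=X.shape[1] + 1)
        for _ in range(self.n_iter):
            for xi, target in zip(X, y):
                # update = eta * (y - y_hat); this is where the label scale
                # enters: |y - y_hat| is 2 for {-1, 1} but 1 for {0, 1}
                update = self.eta * (target - self.predict(xi))
                self.w_[1:] += update * xi
                self.w_[0] += update
        return self

    def net_input(self, X):
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def predict(self, X):
        return np.where(self.net_input(X) >= 0.0, 1, self.neg_label)

# Hypothetical toy data: two linearly separable blobs.
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y  = np.hstack([-np.ones(20), np.ones(20)])  # labels in {-1, 1}
y1 = np.hstack([np.zeros(20), np.ones(20)])  # same classes, labels in {0, 1}

ppn  = Perceptron(eta=0.1, n_iter=10, random_state=1, neg_label=-1).fit(X, y)
ppn1 = Perceptron(eta=0.2, n_iter=10, random_state=1, neg_label=0).fit(X, y1)

print(np.allclose(ppn.w_, ppn1.w_))  # True: identical weight trajectory

Both runs start from the same random weights (random_state=1) and apply the same update, +/-0.2 * x, to the same misclassified points in the same order, so the learned weights coincide exactly.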
Step 2: Part (ii), plotting the transformed data
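Part (ii) first asks for an empirical look. The cell below plots the Iris data before and after the transformation x~ = (a*x1 - c, b*x2 - c). The choice of features (sepal length and petal length), the restriction to the first two classes, and the values of a, b, c are assumptions for illustration; try your own constants. For any nonzero a and b, the two classes stay separated by a straight line, since each axis is only rescaled and shifted.

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

a, b, c = 2.0, 0.5, 1.0  # hypothetical constants; substitute your own

iris = load_iris()
X = iris.data[:100, [0, 2]]                  # sepal length, petal length
y = np.where(iris.target[:100] == 0, -1, 1)  # setosa vs. versicolor

# The transformation from the question: x~ = (a*x1 - c, b*x2 - c)
Xt = np.column_stack([a * X[:, 0] - c, b * X[:, 1] - c])

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, data, title in [(axes[0], X, 'original'), (axes[1], Xt, 'transformed')]:
    ax.scatter(data[y == -1, 0], data[y == -1, 1], marker='o', label='class -1')
    ax.scatter(data[y == 1, 0],  data[y == 1, 1],  marker='x', label='class +1')
    ax.set_title(title)
    ax.legend(loc='upper left')
plt.tight_layout()
plt.show()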
Step 3: Part (ii), theoretical justification
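Theoretically: for a != 0 and b != 0 the transformation is an invertible affine map, and linear separability is preserved. Suppose w0 + w1*x1 + w2*x2 = 0 is a separating line for the original data. Substituting x1 = (x1~ + c)/a and x2 = (x2~ + c)/b gives

w0 + w1*x1 + w2*x2 = (w1/a)*x1~ + (w2/b)*x2~ + (w0 + c*w1/a + c*w2/b),

so the perceptron with weights w1~ = w1/a, w2~ = w2/b and bias w0~ = w0 + c*(w1/a + w2/b) produces exactly the same net input, and hence the same label, on every transformed point. (If a = 0 or b = 0 the map collapses a dimension and separability can be lost.) Below is a quick numerical check of this identity, with hypothetical weights and constants:

In [ ]:
import numpy as np

rng = np.random.RandomState(1)
a, b, c = 2.0, 0.5, 1.0      # hypothetical constants, a, b != 0
w0, w1, w2 = 0.3, -1.2, 0.8  # hypothetical weights of a 'good' perceptron

X = rng.normal(size=(50, 2))
Xt = np.column_stack([a * X[:, 0] - c, b * X[:, 1] - c])

net  = w0 + X @ np.array([w1, w2])
# Transformed weights: w1/a, w2/b, bias w0 + c*(w1/a + w2/b)
nett = (w0 + c * (w1 / a + w2 / b)) + Xt @ np.array([w1 / a, w2 / b])

print(np.allclose(net, nett))  # True: identical net input on every point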