Question

Machine learning question, use JupyterLab, no AI please:

Import these packages:

import sys
from packaging import version
import sklearn
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import add_dummy_feature
from sklearn import metrics  # scikit-learn metrics module, e.g. for accuracy calculation
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
from sklearn.datasets import make_blobs

print("Python version", sys.version_info)
print("Sklearn version", sklearn.__version__)
assert sys.version_info >= (3, 10)
#assert version.parse(sklearn.__version__) >= version.parse("1.2.1")

plt.rc('font', size=12)
plt.rc('axes', labelsize=14, titlesize=14)
plt.rc('legend', fontsize=14)
plt.rc('xtick', labelsize=10)
plt.rc('ytick', labelsize=10)
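
The dataset X (with a bias column) and the targets y are assumed to come from an earlier notebook cell that is not shown here. As a minimal sketch, assuming the usual noisy-linear toy data (the coefficients 4 and 3 and the size m = 100 below are hypothetical), one way to build them is:

np.random.seed(42)
m = 100
x1 = 2 * np.random.rand(m, 1)            # single input feature
y = 4 + 3 * x1 + np.random.randn(m, 1)   # hypothetical linear target plus noise
X = add_dummy_feature(x1)                # prepend a column of 1s for the bias term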
-----
Questions:
5. Q3 Parameters Tuning
5.1. For the dataset in X, find good parameters that can achieve a very small loss.
5.2. For example, you could get a loss of 0.5037178103.
5.3. Use the original gradient descent without early stopping for this question.
5.3.1. Run the gradient descent function for various settings (a sketch of one possible loop is included in the tuning code below).
5.3.2. Use eta from this set: [0.00001, 0.0001, 0.001, 0.01, 0.1, 0.2, 0.3]
5.3.3. Use n_epochs from this set: [100, 500, 1000, 5000, 10000]
---
def f(x, w):
    """Prediction for a single instance x (column vector) with weights w."""
    return (x.T @ w).item()

def J_vectorized(X, y, w):
    """Mean-squared-error cost J(w) = (1/(2m)) * ||Xw - y||^2."""
    m = X.shape[0]
    V = X @ w - y
    sum_squared_error = V.T @ V
    return (sum_squared_error / (2 * m)).item()

def J_delta_vectorized(X, y, w):
    """Gradient of J: (1/m) * X^T (Xw - y)."""
    m = X.shape[0]
    sum_w = X.T @ (X @ w - y)
    return sum_w / m

def simple_gradient_vectorized(X, y, theta, n_epochs, eta):
    """Plain batch gradient descent; returns the full path of theta values."""
    theta_path = [theta]
    for epoch in range(n_epochs):
        gradients = J_delta_vectorized(X, y, theta)
        theta = theta - eta * gradients
        theta_path.append(theta)
    return theta_path
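
As a quick sanity check (not part of the original assignment), the cost and gradient functions can be verified on a tiny hand-checkable input; the values below are hypothetical:

X_tiny = np.array([[1.0, 0.0],
                   [1.0, 1.0]])
y_tiny = np.array([[1.0], [3.0]])
w_tiny = np.zeros((2, 1))
print(J_vectorized(X_tiny, y_tiny, w_tiny))        # (1 + 9) / (2*2) = 2.5
print(J_delta_vectorized(X_tiny, y_tiny, w_tiny))  # (1/2) * X^T(Xw - y) = [[-2.0], [-1.5]]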
eta = 0.01  # learning rate
n_epochs = 100
m = len(X)  # number of instances
print("X.shape", X.shape)

np.random.seed(42)
theta = np.random.randn(2, 1)  # randomly initialized model parameters
print("Loss at initial theta", J_vectorized(X, y, theta))
theta_path = simple_gradient_vectorized(X, y, theta, n_epochs, eta)
print("Loss at final theta", J_vectorized(X, y, theta_path[-1]))

np.random.seed(42)
theta = np.random.randn(2, 1)  # randomly initialized model parameters
best_theta = theta
best_eta = 0.1
best_n_epochs = 100
best_loss = J_vectorized(X, y, theta)

## Your code here..
## Iterate over the eta and n_epochs values and update the best values
## when the loss improves (i.e., you get a smaller loss than best_loss)
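# A minimal sketch of one possible tuning loop (an assumption, not the provided
# solution): sweep the eta and n_epochs grids from 5.3.2 and 5.3.3, always
# restarting from the same initial theta, and keep the setting with the
# smallest final loss.
for eta in [0.00001, 0.0001, 0.001, 0.01, 0.1, 0.2, 0.3]:
    for n_epochs in [100, 500, 1000, 5000, 10000]:
        theta_path = simple_gradient_vectorized(X, y, theta, n_epochs, eta)
        loss = J_vectorized(X, y, theta_path[-1])
        if loss < best_loss:  # smaller loss than the best seen so far
            best_loss = loss
            best_theta = theta_path[-1]
            best_eta = eta
            best_n_epochs = n_epochs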
print("Best Loss", best_loss)
print("Best theta", best_theta.ravel())
print("Best eta", best_eta)
print("Best n_epochs", best_n_epochs)
# Using the sklearn package.
# This code is provided to give you an idea of how small the loss can be.
# You should not aim to get the exact number, but you should get a close one.
from sklearn.linear_model import LinearRegression

lin_reg = LinearRegression(fit_intercept=False)  # X already carries the bias column
lin_reg.fit(X, y)
best_theta = lin_reg.coef_.reshape(2, 1)
print(J_vectorized(X, y, best_theta))
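
As an additional cross-check (not in the original code), the same least-squares optimum can be computed directly from the normal equation theta = (X^T X)^(-1) X^T y; its loss should essentially match the LinearRegression result:

theta_ne = np.linalg.inv(X.T @ X) @ X.T @ y  # closed-form least-squares solution
print(J_vectorized(X, y, theta_ne))          # should be close to the sklearn loss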
