Question
Machine learning question, use Jupyter Lab, no AI please:
Import this:
import sys
from packaging import version
import sklearn
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import add_dummy_feature
from sklearn import metrics  # Import scikit-learn metrics module for accuracy calculation
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
from sklearn.datasets import make_blobs
print("Python version:", sys.version_info)
print("Sklearn package:", sklearn.__version__)
assert sys.version_info >= (3, 7)  # minimum version assumed; the number was stripped from the original
# assert version.parse(sklearn.__version__) >= version.parse("...")
plt.rc('font', size=14)  # font sizes below are typical defaults; the original values were stripped
plt.rc('axes', labelsize=14, titlesize=14)
plt.rc('legend', fontsize=14)
plt.rc('xtick', labelsize=10)
plt.rc('ytick', labelsize=10)
Questions:
Q: Parameters Tuning
For the dataset in X, find good parameters that can achieve a very small loss.
For example, you could get a loss close to the one reported by the sklearn baseline below.
Use the original gradient descent without early stopping for this question.
Run the gradient descent function for various settings:
- Use eta from the given set.
- Use n_epochs from the given set.
def f(x, w):
    return (x.T @ w).item()

def J_vectorized(X, y, w):
    m = X.shape[0]
    V = X @ w - y
    sum_squared_error = V.T @ V
    return (sum_squared_error / m).item()

def J_delta_vectorized(X, y, w):
    m = X.shape[0]
    sum_w = 2 * X.T @ (X @ w - y)
    # return sum_w
    return sum_w / m

def simple_gradient_vectorized(X, y, theta, n_epochs, eta):
    theta_path = [theta]
    for epoch in range(n_epochs):
        gradients = J_delta_vectorized(X, y, theta)
        theta = theta - eta * gradients
        theta_path.append(theta)
    return theta_path
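A quick way to convince yourself that J_delta_vectorized really is the gradient of J_vectorized is a central finite-difference check. The sketch below uses made-up data (the assignment's X and y are not included here) and restates the two functions so it runs on its own:

```python
import numpy as np

def J_vectorized(X, y, w):
    # Mean squared error: (1/m) ||Xw - y||^2
    V = X @ w - y
    return ((V.T @ V) / X.shape[0]).item()

def J_delta_vectorized(X, y, w):
    # Analytic gradient of the MSE: (2/m) X^T (Xw - y)
    return 2 * X.T @ (X @ w - y) / X.shape[0]

# Made-up data for illustration only
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = rng.standard_normal((50, 1))
w = rng.standard_normal((3, 1))

analytic = J_delta_vectorized(X, y, w)

# Central finite differences, one coordinate at a time
eps = 1e-6
numeric = np.zeros_like(w)
for i in range(w.shape[0]):
    w_plus, w_minus = w.copy(), w.copy()
    w_plus[i] += eps
    w_minus[i] -= eps
    numeric[i] = (J_vectorized(X, y, w_plus) - J_vectorized(X, y, w_minus)) / (2 * eps)

# The difference should be tiny (J is quadratic, so central
# differences are exact up to floating-point rounding)
print(np.max(np.abs(analytic - numeric)))
```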
eta = 0.1  # learning rate (example value; the original number was stripped)
n_epochs = 1000  # example value; the original number was stripped
m = len(X)  # number of instances
print("X shape:", X.shape)
np.random.seed(42)  # seed value assumed
theta = np.random.randn(X.shape[1], 1)  # randomly initialized model parameters
print("Loss at initial theta:", J_vectorized(X, y, theta))
theta_path = simple_gradient_vectorized(X, y, theta, n_epochs, eta)
print("Loss at final theta:", J_vectorized(X, y, theta_path[-1]))
np.random.seed(42)  # seed value assumed
theta = np.random.randn(X.shape[1], 1)  # randomly initialized model parameters
best_theta = theta
best_eta = None
best_n_epochs = None
best_loss = J_vectorized(X, y, theta)
## Your code here...
## Iterate over the eta and n_epochs values and update the best values
## when the loss improves (i.e., gets smaller than best_loss)
print("Best Loss:", best_loss)
print("Best theta:", best_theta.ravel())
print("Best eta:", best_eta)
print("Best n_epochs:", best_n_epochs)
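The tuning loop asked for above can be sketched as a simple grid search: run gradient descent once per (eta, n_epochs) pair and keep the pair with the smallest final loss. The version below is self-contained; it restates the helper functions and uses made-up data and candidate sets, since the assignment's X, y, and the actual eta / n_epochs sets are not reproduced here:

```python
import numpy as np

def J_vectorized(X, y, w):
    V = X @ w - y
    return ((V.T @ V) / X.shape[0]).item()

def J_delta_vectorized(X, y, w):
    return 2 * X.T @ (X @ w - y) / X.shape[0]

def simple_gradient_vectorized(X, y, theta, n_epochs, eta):
    theta_path = [theta]
    for _ in range(n_epochs):
        theta = theta - eta * J_delta_vectorized(X, y, theta)
        theta_path.append(theta)
    return theta_path

# Made-up data standing in for the assignment's X and y
rng = np.random.default_rng(42)
X = np.c_[np.ones((100, 1)), rng.standard_normal((100, 2))]
true_w = np.array([[1.0], [2.0], [-3.0]])
y = X @ true_w + 0.01 * rng.standard_normal((100, 1))

np.random.seed(42)
theta0 = np.random.randn(X.shape[1], 1)
best_loss = J_vectorized(X, y, theta0)
best_theta, best_eta, best_n_epochs = theta0, None, None

for eta in [0.001, 0.01, 0.1, 0.3]:    # assumed candidate set
    for n_epochs in [100, 500, 1000]:  # assumed candidate set
        # Always restart from the same initial theta so runs are comparable
        theta = simple_gradient_vectorized(X, y, theta0, n_epochs, eta)[-1]
        loss = J_vectorized(X, y, theta)
        if loss < best_loss:
            best_loss, best_theta = loss, theta
            best_eta, best_n_epochs = eta, n_epochs

print("Best Loss:", best_loss)
print("Best eta:", best_eta, "Best n_epochs:", best_n_epochs)
```

Note that large learning rates can make plain gradient descent diverge, so on real data it is worth guarding against losses that blow up to inf/NaN before comparing them.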
# using sklearn package
# This code is provided to give you an idea of how small the loss can be.
# You should not aim to get the exact number, but you should get a close one.
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression(fit_intercept=False)
lin_reg.fit(X, y)
best_theta = lin_reg.coef_.reshape(-1, 1)
print(J_vectorized(X, y, best_theta))
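The sklearn baseline above is just the least-squares minimizer of the same loss, which is why a well-tuned gradient descent run should approach it. A small sketch with made-up data (the assignment's X and y are not shown) checks that LinearRegression with fit_intercept=False matches the direct least-squares solution:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up data for illustration only
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 3))
y = X @ np.array([[0.5], [-1.0], [2.0]]) + 0.1 * rng.standard_normal((80, 1))

# sklearn's fit, without an intercept, minimizes ||Xw - y||^2
lin_reg = LinearRegression(fit_intercept=False).fit(X, y)

# Direct least-squares solution to the same problem
theta_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]

print(np.allclose(lin_reg.coef_.reshape(-1, 1), theta_lstsq))  # True
```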