Question
1 Approved Answer
The train(...) function will implement batch gradient descent to train the model. Use PyTorch autograd to estimate gradients and update parameters during gradient descent.
def train(model, num_epochs, learning_rate, seed, log=False):
    torch.manual_seed(seed)
    cost = []
    for ii in range(num_epochs):
        ## Forward Propagation and loss ##
        ### BEGIN SOLUTION ###
        ### END SOLUTION ###

        ## Compute gradient and update parameters ##
        ## The grad function gets the gradients of the loss w.r.t. the parameters in a list.
        ## The output `gradients` is a list of corresponding parameter gradients.
        gradients = torch.autograd.grad(loss, list(model.parameters.values()))
        ## Whenever a parameter changes during an operation,
        ## a graph is maintained to estimate the gradient.
        ## PyTorch does not allow modifying such parameters in place
        ## (e.g. assigning an arbitrary value to the parameter),
        ## because such operations would make the gradient impossible to estimate.
        ## We therefore turn off gradient tracking during the parameter update.
        with torch.no_grad():
            for ly in range(model.num_layers):
                ## Access the gradients list to get and
                ## update parameters using learning_rate * the gradient.
                ## For e.g.:
                ## gradients[0] is the gradient for W1
                ## gradients[1] is the gradient for b1
                ## gradients[2] is the gradient for W2
                ## gradients[3] is the gradient for b2
                ## gradients[2*ly] is the gradient for W(ly+1)
                ## gradients[2*ly+1] is the gradient for b(ly+1)
                ### BEGIN SOLUTION ###
                ### END SOLUTION ###

        ## Track loss every epoch ##
        ## We do not need to track the gradient during evaluation
        cost.append(loss.detach())
        if log:
            print("Epoch: %d Loss: %f" % (ii, loss))
    return cost
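To make the pattern above concrete, here is a minimal self-contained sketch of the same idea: a toy one-layer linear model whose parameters live in a dict, trained with batch gradient descent using `torch.autograd.grad` and manual in-place updates under `torch.no_grad()`. The data, parameter names (`W1`, `b1`), and hyperparameters are illustrative assumptions, not part of the original assignment.

```python
import torch

# Hypothetical toy setup: noisy linear regression data
torch.manual_seed(0)
X = torch.randn(64, 3)                       # 64 samples, 3 features
true_W = torch.tensor([[2.0], [-1.0], [0.5]])
y = X @ true_W + 0.1 * torch.randn(64, 1)    # noisy linear targets

# Parameters stored in a dict, mirroring the model.parameters dict
# assumed by the train() template above
params = {
    "W1": torch.randn(3, 1, requires_grad=True),
    "b1": torch.zeros(1, requires_grad=True),
}

def train(num_epochs, learning_rate):
    for ii in range(num_epochs):
        # Forward propagation and mean-squared-error loss
        y_hat = X @ params["W1"] + params["b1"]
        loss = ((y_hat - y) ** 2).mean()

        # autograd.grad returns one gradient per parameter,
        # in the same order as the parameter list
        gradients = torch.autograd.grad(loss, list(params.values()))

        # Update parameters with gradient tracking disabled so the
        # in-place assignments are not recorded in the autograd graph
        with torch.no_grad():
            for g, p in zip(gradients, params.values()):
                p -= learning_rate * g
    return loss.item()

final_loss = train(num_epochs=200, learning_rate=0.1)
```

After 200 epochs the loss should approach the noise floor of the synthetic data; the key point is that `torch.autograd.grad` replaces manual backpropagation, while the `torch.no_grad()` context permits the in-place `p -= learning_rate * g` update that would otherwise be rejected on leaf tensors that require gradients.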