Question

Training loop

Let's now write the training loop (this step is similar to what was done in previous labs). Complete the code to implement the training loop:
# Hyperparameters
lr = 0.01
batch_size = 100
num_epoch = 500
hidden_size = 1
num_layers = 2

# Initialize the Vanilla RNN
# Your code here. Aim for 1 line
rnn =

# Initialize the Loss. Please, use torch.nn.BCEWithLogitsLoss
# Your code here. Aim for 1 line
loss =

# Initialize the Optimizer. Please, use torch.optim.Adam (with the lr specified above)
# Your code here. Aim for 1 line
optimizer =

# Training Loop
for epoch in range(num_epoch):
    for i in range(0, N, batch_size):
        # Read minibatches (for both X and y)
        # Your code here. Aim for 2 lines
        Xi =
        yi =

        # Run the model
        # Your code here. Aim for 1 line
        logits =

        # Compute the loss (use l as variable for the loss)
        # Your code here. Aim for 1 line
        l =

        # Update the parameters
        # Your code here. Aim for 3 lines

    # Print loss
    if (epoch + 1) % 50 == 0:
        print("Epoch %03d: Train_loss: %.4f" % (epoch + 1, l.item()))
Step by Step Solution

There are 3 steps involved: initializing the model, loss, and optimizer; reading minibatches and computing the loss; and updating the parameters.
Step: 1
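Initialize the model, the loss, and the optimizer. The lab's Vanilla RNN class is not shown in the question, so the sketch below uses a hypothetical stand-in (`VanillaRNN`, an `nn.RNN` backbone plus a linear head producing a single logit, with an assumed `input_size=1`); substitute whatever class your lab defines. The loss and optimizer lines follow the hints in the comments directly.

```python
import torch

# Hypothetical stand-in for the lab's Vanilla RNN class (the real class
# comes from the lab notebook): an nn.RNN backbone plus a linear head
# that maps the last hidden state to a single logit for BCEWithLogitsLoss.
class VanillaRNN(torch.nn.Module):
    def __init__(self, input_size, hidden_size, num_layers):
        super().__init__()
        self.rnn = torch.nn.RNN(input_size, hidden_size, num_layers,
                                batch_first=True)
        self.head = torch.nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.rnn(x)          # out: (batch, seq_len, hidden_size)
        return self.head(out[:, -1])  # logit from the last time step

# The three 1-line answers, using the hyperparameters from the question
rnn = VanillaRNN(input_size=1, hidden_size=1, num_layers=2)
loss = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(rnn.parameters(), lr=0.01)
```

Note that `BCEWithLogitsLoss` expects raw logits (no sigmoid in the model); it applies the sigmoid internally for numerical stability.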
Step: 2
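Read a minibatch and compute the loss. The two minibatch lines slice `X` and `y` with the same indices `i:i + batch_size`; the forward pass and loss are one line each. The sketch below is self-contained, so it uses hypothetical toy data and a placeholder model in place of the lab's RNN, assuming `X` has shape `(N, seq_len, input_size)` and `y` holds float targets of shape `(N, 1)`:

```python
import torch

# Toy stand-ins so the snippet runs on its own (hypothetical shapes):
# X: (N, seq_len, input_size), y: (N, 1) float targets in {0., 1.}
N, batch_size = 10, 4
X = torch.randn(N, 5, 1)
y = torch.randint(0, 2, (N, 1)).float()
rnn = torch.nn.Sequential(torch.nn.Flatten(),
                          torch.nn.Linear(5, 1))  # placeholder for the lab's RNN
loss = torch.nn.BCEWithLogitsLoss()

i = 0  # first iteration of the inner loop

# Read minibatches (for both X and y) - the 2-line answer
Xi = X[i:i + batch_size]
yi = y[i:i + batch_size]

# Run the model - the 1-line answer
logits = rnn(Xi)

# Compute the loss - the 1-line answer
l = loss(logits, yi)
```

Slicing past the end of a tensor is safe in PyTorch, so the last minibatch is simply smaller when `N` is not a multiple of `batch_size`.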
Step: 3
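Update the parameters. The three missing lines are the standard PyTorch update pattern: clear old gradients, backpropagate, then take an optimizer step. The sketch below is self-contained, with a hypothetical linear model and random data standing in for the lab's RNN and dataset:

```python
import torch

# Minimal hypothetical setup so the update step can be run in isolation
model = torch.nn.Linear(3, 1)
loss = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

Xi = torch.randn(4, 3)
yi = torch.randint(0, 2, (4, 1)).float()

logits = model(Xi)
l = loss(logits, yi)

# Update the parameters - the 3-line answer
optimizer.zero_grad()  # clear gradients accumulated from the previous step
l.backward()           # backpropagate through the loss
optimizer.step()       # apply the Adam update to all parameters
```

The order matters: calling `zero_grad()` after `backward()` would wipe the gradients before `step()` could use them, and skipping it entirely would accumulate gradients across minibatches.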