
Question


Please mention your Python version (and, ideally, the versions of all other packages). In this exercise you are going to run some experiments involving CNNs. You need to know Python and install the following libraries: PyTorch, matplotlib, and all their dependencies. You can find detailed instructions and tutorials for each of these libraries on their respective websites. For all experiments, running on a CPU is sufficient. You do not need to run the code on GPUs, although you could, for instance by using Google Colab. Before starting, we suggest you review what we learned about each layer in a CNN, and read at least this tutorial.
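
A simple way to record the versions the exercise asks for is to print them at the top of your script. This is a minimal sketch and assumes PyTorch and matplotlib are already installed:

```python
import sys
import matplotlib
import torch

print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("matplotlib:", matplotlib.__version__)
```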
1. Implement and train a VGG11 net on the MNIST dataset. VGG11 was an earlier version of VGG16 and can be found as model A in Table 1 of this paper, whose Section 2.1 also gives you all the details about each layer. The goal is to get the loss as close to 0 as possible. Note that our input dimension is different from the VGG paper: you need to resize each image in MNIST from its original size 28×28 to 32×32 [why?]. For your convenience, we list the details of the VGG11 architecture here. The convolutional layers are denoted as Conv(number of input channels, number of output channels, kernel size, stride, padding); the batch normalization layers are denoted as BatchNorm(number of channels); the max-pooling layers are denoted as MaxPool(kernel size, stride); the fully-connected layers are denoted as FC(number of input features, number of output features); the dropout layers are denoted as Dropout(dropout ratio):

- Conv(1, 64, 3, 1, 1) - BatchNorm(64) - ReLU - MaxPool(2, 2)
- Conv(64, 128, 3, 1, 1) - BatchNorm(128) - ReLU - MaxPool(2, 2)
- Conv(128, 256, 3, 1, 1) - BatchNorm(256) - ReLU
- Conv(256, 256, 3, 1, 1) - BatchNorm(256) - ReLU - MaxPool(2, 2)
- Conv(256, 512, 3, 1, 1) - BatchNorm(512) - ReLU
- Conv(512, 512, 3, 1, 1) - BatchNorm(512) - ReLU - MaxPool(2, 2)
- Conv(512, 512, 3, 1, 1) - BatchNorm(512) - ReLU
- Conv(512, 512, 3, 1, 1) - BatchNorm(512) - ReLU - MaxPool(2, 2)
- FC(512, 4096) - ReLU - Dropout(0.5)
- FC(4096, 4096) - ReLU - Dropout(0.5)
- FC(4096, 10)

You should use the cross-entropy loss torch.nn.CrossEntropyLoss at the end. [This experiment can take up to 1 hour on a CPU, so please be mindful of your time. If this running time is not bearable, you may cut the training set to 1/10, so that you have 600 images per class instead of the regular 6000.]
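
The following is a minimal PyTorch sketch of the architecture listed above together with a bare-bones training loop. The layer sizes follow the list exactly; the optimizer, learning rate, batch size, and number of epochs are illustrative assumptions, not requirements of the exercise.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


class VGG11(nn.Module):
    """VGG11 (model A) adapted to 1-channel 32x32 inputs, following the layer list above."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, 1, 1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, 1, 1), nn.BatchNorm2d(128), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(128, 256, 3, 1, 1), nn.BatchNorm2d(256), nn.ReLU(),
            nn.Conv2d(256, 256, 3, 1, 1), nn.BatchNorm2d(256), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(256, 512, 3, 1, 1), nn.BatchNorm2d(512), nn.ReLU(),
            nn.Conv2d(512, 512, 3, 1, 1), nn.BatchNorm2d(512), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(512, 512, 3, 1, 1), nn.BatchNorm2d(512), nn.ReLU(),
            nn.Conv2d(512, 512, 3, 1, 1), nn.BatchNorm2d(512), nn.ReLU(), nn.MaxPool2d(2, 2),
        )
        self.classifier = nn.Sequential(
            nn.Linear(512, 4096), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)       # (N, 512, 1, 1) after five 2x2 poolings of a 32x32 input
        x = torch.flatten(x, 1)    # (N, 512)
        return self.classifier(x)


if __name__ == "__main__":
    # Resize MNIST from 28x28 to 32x32 as the exercise requires.
    transform = transforms.Compose([transforms.Resize((32, 32)), transforms.ToTensor()])
    train_set = datasets.MNIST("data", train=True, download=True, transform=transform)
    # To shorten CPU training you could wrap train_set in torch.utils.data.Subset
    # and keep only 1/10 of the images, as the exercise suggests.
    train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

    model = VGG11()
    criterion = nn.CrossEntropyLoss()               # cross-entropy loss, as required
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

    model.train()
    for epoch in range(5):                          # epoch count is an assumption
        running_loss = 0.0
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        print(f"epoch {epoch + 1}: mean loss {running_loss / len(train_loader):.4f}")
```

If an epoch is too slow on your machine, the 1/10 subset mentioned in the exercise is the easiest way to shorten the run while keeping the same architecture and loss.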

