Question

Experiment with sampling and clamping on the MNIST data and denoising autoencoders

Steps in Assignment

X_test = mnist.test.images[:n_test_digits]

Markov chain:

initialize a state in the shape of X_test, filled with random noise

state = outputs.eval(feed_dict={X: state})

state = state + noise_level * tf.random_normal(tf.shape(X_test))

state = outputs.eval(feed_dict={X: state}) ...
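A minimal sketch of this chain, assuming the trained outputs/X graph from the code quoted below and an open session (so outputs.eval has a default session to run in). The values of n_test_digits, noise_level, and n_iterations are assumptions the assignment leaves open. NumPy noise is used instead of tf.random_normal, because a NumPy array can be fed back through feed_dict while a TensorFlow tensor cannot (and calling tf.random_normal inside the loop would keep adding nodes to the graph):

import numpy as np

n_test_digits = 8        # assumed sample count
noise_level = 0.1        # assumed; the assignment does not fix a value
n_iterations = 20        # assumed chain length

X_test = mnist.test.images[:n_test_digits]

# Start the chain from random noise in the shape of X_test.
state = np.random.normal(0.0, 1.0, X_test.shape).astype(np.float32)

for step in range(n_iterations):
    # Denoise: pass the current state through the autoencoder.
    state = outputs.eval(feed_dict={X: state})
    # Perturb: add fresh Gaussian noise before the next pass.
    state = state + noise_level * np.random.normal(size=state.shape).astype(np.float32)

# A final denoising pass yields the samples to display.
samples = outputs.eval(feed_dict={X: state})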

Clamping: the right half of X_test needs to be assigned to the state at each step
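Under the same assumptions, clamping can be sketched by re-imposing the right half of each 28x28 image (columns 14 to 27) from X_test after every step of the chain, so only the left half evolves freely:

import numpy as np

# Boolean mask selecting the right half of each 28x28 image.
clamp_mask = np.zeros((28, 28), dtype=bool)
clamp_mask[:, 14:] = True
clamp_mask = clamp_mask.reshape(-1)      # flatten to match the 784-pixel rows

state = np.random.normal(0.0, 1.0, X_test.shape).astype(np.float32)

for step in range(n_iterations):
    # Clamp: copy the right half of the test digits into the state.
    state[:, clamp_mask] = X_test[:, clamp_mask]
    state = outputs.eval(feed_dict={X: state})
    state = state + noise_level * np.random.normal(size=state.shape).astype(np.float32)

# Clamp once more before the final reconstruction.
state[:, clamp_mask] = X_test[:, clamp_mask]
clamped_samples = outputs.eval(feed_dict={X: state})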

Please modify the code below to experiment with sampling and clamping on the MNIST data and a denoising autoencoder, using Python.

# denoising autoencoder of Geron, using dropout
import tensorflow as tf

n_inputs = 28 * 28
n_hidden1 = 300
n_hidden2 = 150  # codings
n_hidden3 = n_hidden1
n_outputs = n_inputs

learning_rate = 0.01
dropout_rate = 0.3

training = tf.placeholder_with_default(False, shape=(), name='training')

X = tf.placeholder(tf.float32, shape=[None, n_inputs])
X_drop = tf.layers.dropout(X, dropout_rate, training=training)

hidden1 = tf.layers.dense(X_drop, n_hidden1, activation=tf.nn.relu, name="hidden1")
hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name="hidden2")
hidden3 = tf.layers.dense(hidden2, n_hidden3, activation=tf.nn.relu, name="hidden3")
outputs = tf.layers.dense(hidden3, n_outputs, name="outputs")

reconstruction_loss = tf.reduce_mean(tf.square(outputs - X))  # MSE

optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(reconstruction_loss)
init = tf.global_variables_initializer()

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")

n_epochs = 10
batch_size = 150

import sys

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        n_batches = mnist.train.num_examples // batch_size
        for iteration in range(n_batches):
            print("\r{}%".format(100 * iteration // n_batches), end="")
            sys.stdout.flush()
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            sess.run(training_op, feed_dict={X: X_batch, training: True})
        loss_train = reconstruction_loss.eval(feed_dict={X: X_batch})
        print("\r{}".format(epoch), "Train MSE:", loss_train)
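To inspect the results, a small plotting helper can display the flattened digits; matplotlib is an assumption, not part of the original code, and the calls would run inside the same session block, after the sampling and clamping loops sketched above:

import matplotlib.pyplot as plt

def plot_digits(images):
    # Each row of images is a flattened 28x28 MNIST digit.
    n = len(images)
    for i in range(n):
        plt.subplot(1, n, i + 1)
        plt.imshow(images[i].reshape(28, 28), cmap="binary")
        plt.axis("off")
    plt.show()

plot_digits(samples)           # free-running chain
plot_digits(clamped_samples)   # right half clamped to X_test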
