Question

When running the code below I am getting some errors (see image). The line in the code below that the error comes from is highlighted in bold. Any help fixing it would be appreciated.

[Error screenshot from the question is not reproduced in this transcription.]

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

#1) Generate the synthetic data using the following Python code snippet.

# Generate synthetic data
N = 100

# Zeros form a Gaussian centered at (-1, -1)
x_zeros = np.random.multivariate_normal(mean=np.array((-1, -1)), cov=.1*np.eye(2), size=(N//2,))
y_zeros = np.zeros((N//2,))

# Ones form a Gaussian centered at (1, 1)
x_ones = np.random.multivariate_normal(mean=np.array((1, 1)), cov=.1*np.eye(2), size=(N//2,))
y_ones = np.ones((N//2,))

x_np = np.vstack([x_zeros, x_ones])
y_np = np.concatenate([y_zeros, y_ones])

# Plot x_zeros and x_ones on the same graph
plt.scatter(x_zeros[:,0], x_zeros[:,1], label='class 0')
plt.scatter(x_ones[:,0], x_ones[:,1], label='class 1')
plt.legend()
plt.show()

#3) Generate a TensorFlow graph.

with tf.name_scope("placeholders"):
    x = tf.constant(x_np, dtype=tf.float32)
    y = tf.constant(y_np, dtype=tf.float32)

with tf.name_scope("weights"):
    W = tf.Variable(tf.random.normal((2, 1)))
    b = tf.Variable(tf.random.normal((1,)))

with tf.name_scope("prediction"):
    y_logit = tf.squeeze(tf.matmul(x, W) + b)
    # the sigmoid gives the class probability of 1
    y_one_prob = tf.sigmoid(y_logit)
    # Rounding P(y=1) will give the correct prediction.
    y_pred = tf.round(y_one_prob)

with tf.name_scope("loss"):
    # Compute the cross-entropy term for each datapoint
    entropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=y_logit, labels=y)
    # Sum all contributions
    l = tf.reduce_sum(entropy)

with tf.name_scope("optim"):
    train_op = tf.compat.v1.train.AdamOptimizer(.01).minimize(l)

with tf.name_scope("summaries"):
    tf.compat.v1.summary.scalar("loss", l)
    merged = tf.compat.v1.summary.merge_all()
    train_writer = tf.compat.v1.summary.FileWriter('logistic-train', tf.compat.v1.get_default_graph())

#4) Train the model, get the weights, and make predictions.

with tf.compat.v1.Session() as sess:
    # Initialize all variables
    sess.run(tf.compat.v1.global_variables_initializer())
    # Train the model for 100 epochs
    for epoch in range(100):
        # Run the train_op
        _, summary, loss = sess.run([train_op, merged, l])
        print("Epoch:", epoch, "Loss:", loss)
        # Write the summary for TensorBoard
        train_writer.add_summary(summary, epoch)

    # Get the weights and biases
    W_final, b_final = sess.run([W, b])

    # Get the predictions
    y_pred_np = sess.run(y_pred)

#5) Plot the predicted outputs on top of the data.

plt.scatter(x_np[y_pred_np==0,0], x_np[y_pred_np==0,1], label='class 0')
plt.scatter(x_np[y_pred_np==1,0], x_np[y_pred_np==1,1], label='class 1')
plt.legend()
plt.show()

Step by Step Solution
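
The error screenshot is not available here, but assuming the script is run under TensorFlow 2.x, where eager execution is enabled by default, the most likely failure point is the line train_op = tf.compat.v1.train.AdamOptimizer(.01).minimize(l): with eager execution on, the v1 optimizer's minimize() expects the loss to be passed as a callable rather than as an already-computed tensor. The later tf.compat.v1.summary.FileWriter and tf.compat.v1.Session calls are also incompatible with eager execution. A minimal fix sketch, keeping the graph-and-session style of the original code, is to switch TensorFlow back to graph mode immediately after the imports; the rest of the script can then stay as written.

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Run the v1-style code in graph mode so that AdamOptimizer.minimize(),
# summary.FileWriter, and Session behave as they did in TensorFlow 1.x.
tf.compat.v1.disable_eager_execution()

# ... the rest of the original script, unchanged ...

Alternatively, here is a sketch of a more idiomatic TensorFlow 2.x version of the training step (not taken from the original question): it drops the graph, session, and compat.v1 summary code and trains the same logistic regression eagerly with tf.GradientTape and tf.keras.optimizers.Adam, reusing x_np and y_np from the data-generation step above.

import tensorflow as tf

# x_np and y_np come from the synthetic-data step in the question.
x = tf.constant(x_np, dtype=tf.float32)
y = tf.constant(y_np, dtype=tf.float32)

# Model parameters: a single linear layer followed by a sigmoid.
W = tf.Variable(tf.random.normal((2, 1)))
b = tf.Variable(tf.random.normal((1,)))

optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

for epoch in range(100):
    with tf.GradientTape() as tape:
        y_logit = tf.squeeze(tf.matmul(x, W) + b)
        entropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=y_logit, labels=y)
        loss = tf.reduce_sum(entropy)
    # Gradients of the summed cross-entropy with respect to W and b.
    grads = tape.gradient(loss, [W, b])
    optimizer.apply_gradients(zip(grads, [W, b]))
    print("Epoch:", epoch, "Loss:", float(loss))

# Round P(y=1) to get hard 0/1 predictions for the plotting step.
y_pred_np = tf.round(tf.sigmoid(tf.squeeze(tf.matmul(x, W) + b))).numpy()

If TensorBoard logging is still wanted in this eager version, tf.summary.create_file_writer('logistic-train') together with tf.summary.scalar("loss", loss, step=epoch) inside a writer.as_default() block replaces the compat.v1 scalar/merge_all/FileWriter calls.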
