Question


1. Test TensorFlow with the Python code below in Colab.
import tensorflow as tf
from keras.layers import Input, Dense
from keras.models import Model
import matplotlib.pyplot as plt
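A quick, optional sanity check (not part of the assignment) is to confirm which TensorFlow build Colab loaded and whether a GPU is visible:
# optional: print the TensorFlow version and any visible GPU devices
print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))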
2. Test TensorFlow with linear regression
import numpy as np
#Linear Regression with TensorFlow
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
m, n = housing.data.shape
housing_data_plus_bias = np.c_[np.ones((m,1)), housing.data]
X = tf.constant(housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1,1), dtype=tf.float32, name="y")
XT = tf.transpose(X)
# closed-form normal equation: theta = (X^T X)^-1 X^T y
theta = tf.matmul(tf.matmul(tf.linalg.inv(tf.matmul(XT, X)), XT), y)
print(theta)
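As an optional check (not required by the question), the closed-form theta can be used to predict on the same data and report the mean squared error:
# optional: predictions and mean squared error from the closed-form theta
y_pred = tf.matmul(X, theta)
mse = tf.reduce_mean(tf.square(y_pred - y))
print("MSE:", mse.numpy())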
3. Test TensorFlow with the MNIST (handwritten digits) dataset
# Plot ad hoc mnist instances
from keras.datasets import mnist
import matplotlib.pyplot as plt
# load (downloaded if needed) the MNIST dataset
(X_train, y_train),(X_test, y_test)= mnist.load_data()
# plot 4 images as gray scale
plt.subplot(221)
plt.imshow(X_train[0], cmap=plt.get_cmap('gray'))
plt.subplot(222)
plt.imshow(X_train[1], cmap=plt.get_cmap('gray'))
plt.subplot(223)
plt.imshow(X_train[2], cmap=plt.get_cmap('gray'))
plt.subplot(224)
plt.imshow(X_train[3], cmap=plt.get_cmap('gray'))
# show the plot
plt.show()
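It can also help, purely as a check, to print the array shapes that mnist.load_data() returns (60,000 training and 10,000 test images of 28x28 pixels):
# optional: inspect the shapes of the MNIST arrays
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)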
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import to_categorical
import numpy as np
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
# load data
(X_train, y_train),(X_test, y_test)= mnist.load_data()
# flatten 28*28 images to a 784 vector for each image
num_pixels = X_train.shape[1]* X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
# normalize inputs from 0-255 to 0-1
X_train = X_train /255
X_test = X_test /255
# one hot encode outputs
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
num_classes = y_test.shape[1]
# define baseline model
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(num_pixels, input_dim=num_pixels, kernel_initializer='normal', activation='relu'))
    model.add(Dense(num_classes, kernel_initializer='normal', activation='softmax'))
    # compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# build the model
model = baseline_model()
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Baseline Error: %.2f%%"%(100-scores[1]*100))
TO DO:
4. Derive the gradient for the expression below.
Let x be the column vector x = [2, 3]^T and A the 2x2 matrix A = [[a1, a2], [a3, a4]].
For the quadratic function f(x) = 0.5 * x^T A x, prove that the gradient is grad f(x) = A x.
Note that A = A^T for a symmetric matrix.
ONLY step 4 has to be done!!
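Before writing the symbolic derivation, the claim grad f(x) = A x can be sanity-checked numerically in TensorFlow with tf.GradientTape (a sketch only; the concrete values of A below are just an assumed symmetric example, not given in the question):
# numerical check that the gradient of f(x) = 0.5 * x^T A x equals A x for a symmetric A
A = tf.constant([[2.0, 1.0],
                 [1.0, 3.0]])  # an assumed symmetric 2x2 matrix (A = A^T)
x = tf.Variable([[2.0],
                 [3.0]])       # column vector x = [2, 3]^T
with tf.GradientTape() as tape:
    f = 0.5 * tf.matmul(tf.matmul(tf.transpose(x), A), x)  # f(x) = 0.5 * x^T A x
grad = tape.gradient(f, x)
print(grad.numpy())             # gradient computed by TensorFlow
print(tf.matmul(A, x).numpy())  # A x, which should match the gradient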