
Question

1 Approved Answer

# GRADED FUNCTION: Siamese
def Siamese(text_vectorizer, vocab_size=36224, d_feature=128):

import tensorflow as tf

# GRADED FUNCTION: Siamese
def Siamese(text_vectorizer, vocab_size=36224, d_feature=128):
    """Returns a Siamese model.

    Args:
        text_vectorizer (TextVectorization): TextVectorization instance, already adapted to your training data.
        vocab_size (int, optional): Length of the vocabulary. Defaults to 36224.
        d_feature (int, optional): Depth of the model. Defaults to 128.

    Returns:
        tf.keras.models.Model: A Siamese model.
    """
    ### START CODE HERE ###
    branch = tf.keras.models.Sequential(name='sequential')
    # Add the text_vectorizer layer. This is the text_vectorizer you instantiated and adapted before.
    branch.add(text_vectorizer)
    # Add the Embedding layer. Remember to call it 'embedding' using the parameter `name`.
    branch.add(tf.keras.layers.Embedding(
        input_dim=vocab_size,
        output_dim=d_feature,
        name='embedding'
    ))
    # Add the LSTM layer. Recall from W2 that you want the LSTM layer to return sequences,
    # not just one value. Remember to call it 'LSTM' using the parameter `name`.
    branch.add(tf.keras.layers.LSTM(
        units=d_feature,
        return_sequences=True,
        name='LSTM'
    ))
    # Add the GlobalAveragePooling1D layer. Remember to call it 'mean' using the parameter `name`.
    branch.add(tf.keras.layers.GlobalAveragePooling1D(name='mean'))
    # Add the normalizing layer using the Lambda function. Remember to call it 'out' using the parameter `name`.
    branch.add(tf.keras.layers.Lambda(
        lambda x: tf.math.l2_normalize(x),
        name='out'
    ))
    # Define both inputs. Remember to call them 'input_1' and 'input_2' using the `name` parameter.
    # Be mindful of the data type and size: each example is a single string.
    input1 = tf.keras.layers.Input((1,), dtype=tf.string, name='input_1')
    input2 = tf.keras.layers.Input((1,), dtype=tf.string, name='input_2')
    # Define the output of each branch of your Siamese network. Both branches share the same
    # weights, but they each receive different inputs.
    branch1 = branch(input1)
    branch2 = branch(input2)
    # Define the Concatenate layer. You should concatenate columns, which you can set with the
    # `axis` parameter. This layer is applied over the outputs of each branch of the Siamese network.
    conc = tf.keras.layers.Concatenate(axis=1, name='conc_1_2')([branch1, branch2])
    ### END CODE HERE ###
    return tf.keras.models.Model(inputs=[input1, input2], outputs=conc, name="SiameseModel")
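As a quick sanity check (not part of the graded function itself), the same wiring can be sketched as a small standalone model. The tiny vocabulary, sequence length of 8, and feature depth of 16 below are arbitrary values chosen just for illustration: the shared branch is applied to both inputs, and the concatenated output has twice the feature depth.

```python
import tensorflow as tf

# Minimal sketch of the Siamese wiring: one shared branch, two inputs,
# outputs concatenated along the feature axis (axis=1).
vectorizer = tf.keras.layers.TextVectorization(output_sequence_length=8)
vectorizer.adapt(["how are you", "what is your name", "where do you live"])

branch = tf.keras.models.Sequential([
    vectorizer,
    tf.keras.layers.Embedding(vectorizer.vocabulary_size(), 16, name='embedding'),
    tf.keras.layers.LSTM(16, return_sequences=True, name='LSTM'),
    tf.keras.layers.GlobalAveragePooling1D(name='mean'),
    tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x), name='out'),
], name='sequential')

input1 = tf.keras.layers.Input((1,), dtype=tf.string, name='input_1')
input2 = tf.keras.layers.Input((1,), dtype=tf.string, name='input_2')
conc = tf.keras.layers.Concatenate(axis=1, name='conc_1_2')([branch(input1), branch(input2)])
model = tf.keras.models.Model(inputs=[input1, input2], outputs=conc)

q1 = tf.constant([["how are you"]])
q2 = tf.constant([["what is your name"]])
out = model([q1, q2])
print(out.shape)  # (1, 32): two 16-dim sentence embeddings side by side
```

Because both calls go through the same `branch` object, the two inputs are encoded with identical weights, which is what makes the network Siamese.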

