I want to initialize the word embedding layer from a local numpy array with the same shape, which is a pre-trained embedding from another model. Everything works as long as I don't add the partitioner param:

import pickle

import tensorflow as tf

def word_embedding(shape, dtype=tf.float32, name='word_embedding'):
    # Load the pre-trained embedding matrix (a pickled numpy array).
    with open('./cnn_embed_array', 'rb') as f:  # binary mode for pickle
        embedding_array = pickle.load(f)
    print('embedding_array loaded......')
    # Keep the embedding on CPU and freeze it.
    with tf.device('/cpu:0'), tf.variable_scope(name):
        return tf.get_variable('embedding', shape, dtype=dtype,
                               initializer=tf.constant_initializer(embedding_array),
                               trainable=False)
But if I add partitioner=tf.fixed_size_partitioner(20) to the tf.get_variable call, it gives me an error saying the param is redundant.
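Concretely, this is the call that fails; it is the same function as above with only the partitioner argument added:

    return tf.get_variable('embedding', shape, dtype=dtype,
                           initializer=tf.constant_initializer(embedding_array),
                           partitioner=tf.fixed_size_partitioner(20),
                           trainable=False)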
The partitioner param is supposed to speed up training by sharding the variable. Can I add the param in some other way?
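For example, would a custom initializer that slices the full array per shard work? A minimal sketch of what I have in mind (untested; the helper name partitioned_constant_initializer is mine, and it assumes the TF 1.x initializer signature that receives a partition_info object with a var_offset attribute):

def partitioned_constant_initializer(value):
    # Hypothetical helper: for each shard, cut the matching slice
    # out of the full pre-trained numpy array.
    def _init(shape, dtype=None, partition_info=None):
        if partition_info is not None:
            # var_offset is this shard's offset within the full variable.
            slices = tuple(slice(o, o + s)
                           for o, s in zip(partition_info.var_offset, shape))
            return tf.constant(value[slices], dtype=dtype)
        return tf.constant(value, dtype=dtype)
    return _init

and then, inside word_embedding:

    return tf.get_variable('embedding', shape, dtype=dtype,
                           initializer=partitioned_constant_initializer(embedding_array),
                           partitioner=tf.fixed_size_partitioner(20),
                           trainable=False)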