
I want to initialize the word embedding layer from a local numpy array of the same shape, which holds a pre-trained embedding from another model. It works fine as long as I leave out the partitioner parameter:

import pickle
import tensorflow as tf

def word_embedding(shape, dtype=tf.float32, name='word_embedding'):
  # Load the pickled pre-trained embedding array ('rb': it is a binary pickle).
  with open('./cnn_embed_array', 'rb') as f:
    embedding_array = pickle.load(f)
  print('embedding_array loaded......')
  with tf.device('/cpu:0'), tf.variable_scope(name):
    return tf.get_variable('embedding', shape, dtype=dtype,
                           initializer=tf.constant_initializer(embedding_array),
                           trainable=False)

But if I add partitioner=tf.fixed_size_partitioner(20) to the tf.get_variable call, I get an error saying that the parameter is redundant.
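Concretely, the change I tried looks roughly like this (only the return statement differs from the function above):

    return tf.get_variable('embedding', shape, dtype=dtype,
                           initializer=tf.constant_initializer(embedding_array),
                           partitioner=tf.fixed_size_partitioner(20),
                           trainable=False)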

The partitioner parameter is supposed to speed up training. Can I add it some other way?

1 Answer

If trainable=False, the variable will never be updated, so partitioning it won't help you. On the other hand, if you want this variable to be updated during training, you'll need to set trainable=True.
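If you do set trainable=True, one way to attach the partitioner (a sketch against the TF 1.x API, not tested with your exact shapes) is to put it on the enclosing variable scope instead of on tf.get_variable, and to pass the numpy array itself as the initializer so TensorFlow can slice it per shard; wrapping the full array in tf.constant_initializer may not split cleanly across the partitions:

import pickle
import tensorflow as tf  # assumes TF 1.x graph-mode API

def word_embedding(shape, dtype=tf.float32, name='word_embedding'):
  with open('./cnn_embed_array', 'rb') as f:
    embedding_array = pickle.load(f)
  # The partitioner sits on the scope, so every get_variable call inside
  # it is sharded and get_variable no longer needs the argument itself.
  with tf.device('/cpu:0'), tf.variable_scope(
      name, partitioner=tf.fixed_size_partitioner(20)):
    return tf.get_variable('embedding', shape, dtype=dtype,
                           initializer=embedding_array,  # sliced per shard
                           trainable=True)  # trainable, so sharding can pay off

Whether the shards actually speed things up depends on where the variable lives and how it is read (embedding lookups across shards add their own overhead), so it is worth benchmarking both versions.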
