I have built a classifier using TensorFlow. I generate proposal regions from images, and those proposals are individually classified by my classifier.

My problem is that the batch size is not constant at evaluation time: every image yields a different number of proposals, so the number of regions to classify varies from image to image.

Right now I have set the batch size to 1, but this is inefficient and limits the processing speed of my classifier.

Below is the placeholder for the input to the model:

self.image_op = tf.placeholder(tf.float32, shape=[batch_size, 48, 48, 3], name='input_image')

And this is how I feed the input to the model:

def predict(self,image):
    cls_prob = self.sess.run([self.cls_prob], feed_dict={self.image_op: image})
    return cls_prob

Is there any way of setting the batch size to a dynamic value without having to restore the model for every image?

  • Try self.image_op = tf.placeholder(tf.float32, shape=[None, 48, 48, 3], name='input_image'). It should accept variable batch sizes. – Commented Nov 22, 2018 at 4:29
  • That works. Thanks! – Commented Nov 27, 2018 at 1:51
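The fix suggested in the comments can be sketched end to end. This is a minimal example, not the asker's actual model: the single dense layer standing in for the classifier, the two-class output, and the predict helper are illustrative assumptions. The key point is the None leading dimension in the placeholder (shown here with the TF 2.x v1-compatibility imports, assuming a recent TensorFlow install):

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

# None in the leading dimension lets the placeholder accept any batch size.
image_op = tf.placeholder(tf.float32, shape=[None, 48, 48, 3], name='input_image')

# Stand-in for the real classifier: flatten + one dense layer + softmax.
flat = tf.reshape(image_op, [-1, 48 * 48 * 3])
logits = tf.layers.dense(flat, 2)
cls_prob = tf.nn.softmax(logits)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

def predict(image):
    # image may have any number of proposals in its first dimension.
    return sess.run(cls_prob, feed_dict={image_op: image})

# One restored session now handles a different proposal count per image.
for n in (1, 5, 17):
    probs = predict(np.zeros((n, 48, 48, 3), np.float32))
    assert probs.shape == (n, 2)
```

No graph rebuild or model restore is needed between images; only the feed changes.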

1 Answer

You can simply create the variable with tf.Variable(..., validate_shape=False).

This disables shape validation, so the variable can hold tensors whose batch dimension changes between iterations, which lets you use dynamic batch sizes.

Since tf.placeholder is deprecated in TF 2.x you should avoid it, but if you still want to use tf.placeholder then you need to disable the TF 2.x behaviour:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
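
For completeness, here is a minimal sketch of the validate_shape=False idea; the shapes and values are illustrative assumptions, not code from the question:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

# validate_shape=False means the variable's static shape is not fixed,
# so it can later be assigned a tensor with a different batch dimension.
batch = tf.Variable(np.zeros((1, 48, 48, 3), np.float32), validate_shape=False)
grow = tf.assign(batch, np.ones((5, 48, 48, 3), np.float32), validate_shape=False)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(grow)  # the variable now holds a batch of 5
    assert out.shape == (5, 48, 48, 3)
```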
