
I am trying to run inference with the multipose-lightning-tflite-float16 TFLite model distributed with MoveNet.

https://www.kaggle.com/models/google/movenet/tfLite

However, this model cannot be used unless you specify the height and width of the input tensor by calling resize_tensor_input before calling allocate_tensors. I would like to use a model with a fixed input shape, so is there a way to save the model after calling resize_tensor_input, or to fix the input tensor's shape without having to call resize_tensor_input at all?

    # -->> https://www.kaggle.com/models/google/movenet/tfLite
    import numpy as np
    import tensorflow as tf

    TFLITE_FILE_PATH = "/home/debian/sandbox/movenet/python/tflite/1.tflite"
    interpreter = tf.lite.Interpreter(model_path=TFLITE_FILE_PATH)

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    print(input_details)
    print(output_details)

    # Height/width are reported as -1 in shape_signature, i.e. dynamic.
    is_dynamic_shape_model = input_details[0]['shape_signature'][2] == -1
    if is_dynamic_shape_model:
        input_tensor_index = input_details[0]['index']
        input_shape = input_image.shape  # input_image: preprocessed input image tensor
        interpreter.resize_tensor_input(input_tensor_index, input_shape, strict=True)

    ## -->> I WANT TO SAVE THE MODEL HERE.

    interpreter.allocate_tensors()
    interpreter.set_tensor(input_details[0]['index'], input_image.numpy())

    interpreter.invoke()

    keypoints_with_scores = interpreter.get_tensor(output_details[0]['index'])
    keypoints_with_scores = np.squeeze(keypoints_with_scores)
The printed input and output details are:

    [{'name': 'serving_default_input:0', 'index': 0, 'shape': array([1, 1, 1, 3], dtype=int32), 'shape_signature': array([ 1, -1, -1,  3], dtype=int32), 'dtype': <class 'numpy.uint8'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
    [{'name': 'StatefulPartitionedCall:0', 'index': 536, 'shape': array([ 1,  6, 56], dtype=int32), 'shape_signature': array([ 1,  6, 56], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]

1 Answer

I found that the TFLite Interpreter API provides no function to save the model after resize_tensor_input and allocate_tensors have been called.
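Since the resized interpreter cannot be saved, a practical workaround is to perform the resize once at load time and reuse the same interpreter for every image of that size. Below is a minimal sketch of that approach; the helper name make_fixed_shape_interpreter and the (1, height, width, 3) input layout are assumptions based on the question's code, not an official API.

```python
import tensorflow as tf


def make_fixed_shape_interpreter(tflite_path, height, width):
    """Load a dynamic-shape TFLite model and pin its input to (1, height, width, 3).

    The resize happens once here; afterwards the interpreter behaves like a
    fixed-input-shape model for as long as the process is alive.
    """
    interpreter = tf.lite.Interpreter(model_path=tflite_path)
    input_details = interpreter.get_input_details()
    # Dimensions reported as -1 in shape_signature are dynamic.
    if (input_details[0]['shape_signature'] == -1).any():
        interpreter.resize_tensor_input(
            input_details[0]['index'], [1, height, width, 3], strict=True)
    interpreter.allocate_tensors()
    return interpreter
```

Call it once at startup, then run set_tensor / invoke / get_tensor in a loop; the cost of resize_tensor_input and allocate_tensors is paid only on the first call, not per frame.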
