
I want to run a custom .tflite model on Android using TensorFlow Lite (with Kotlin). Despite using the TFLite support library to create supposedly correctly shaped input and output buffers, I get the following error message every time I call my run() method.

Here is my class:

class Inference(context: Context) {
    private val tag = "Inference"
    private var interpreter: Interpreter
    private var inputBuffer: TensorBuffer
    private var outputBuffer: TensorBuffer

    init {
        val mappedByteBuffer = FileUtil.loadMappedFile(context, "CNN_ReLU.tflite")
        interpreter = Interpreter(mappedByteBuffer) // MappedByteBuffer already is a ByteBuffer; no cast needed
        interpreter.allocateTensors()

        val inputShape = interpreter.getInputTensor(0).shape()
        val outputShape = interpreter.getOutputTensor(0).shape()

        inputBuffer = TensorBuffer.createFixedSize(inputShape, DataType.FLOAT32)
        outputBuffer = TensorBuffer.createFixedSize(outputShape, DataType.FLOAT32)
    }

    fun run() {
        interpreter.run(inputBuffer.buffer, outputBuffer.buffer) // XXX: generates error message
    }
}

And this is the error Message:

W/System.err: java.nio.BufferOverflowException
W/System.err:     at java.nio.ByteBuffer.put(ByteBuffer.java:615)
W/System.err:     at org.tensorflow.lite.Tensor.copyTo(Tensor.java:264)
W/System.err:     at org.tensorflow.lite.Tensor.copyTo(Tensor.java:254)
W/System.err:     at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:170)
W/System.err:     at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:347)
W/System.err:     at org.tensorflow.lite.Interpreter.run(Interpreter.java:306)

I have only initialized the input and output buffers and have not written any data to them yet.

I'm using these gradle dependencies:

implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-support:0.0.0-nightly'

The .tflite model was built with these TensorFlow versions:

tensorflow                        2.3.0
tensorflow-cpu                    2.2.0
tensorflow-datasets               3.1.0
tensorflow-estimator              2.3.0
tensorflow-gan                    2.0.0
tensorflow-hub                    0.7.0
tensorflow-metadata               0.22.0
tensorflow-probability            0.7.0
tensorflowjs                      1.7.4.post1

Any thoughts or hints are highly appreciated, thank you.

  • what happens if you change the datatype to something smaller than float32? Commented Sep 4, 2020 at 13:45
  • @mangusta When using UINT8 as data type I get java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (input_1) with 3520 bytes from a Java Buffer with 880 bytes. which makes sense I guess. Commented Sep 4, 2020 at 13:59
  • have you tried another model? Commented Sep 5, 2020 at 4:00

1 Answer


Does adding .rewind() to your input and output buffers make it work? If not, I wonder whether your input or output tensor is a dynamic tensor; in that case the returned shape is not usable this way.
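For reference, applying that suggestion to the run() method from the question might look like this (a sketch, assuming the same field names as in the question's class):

```kotlin
fun run() {
    // Reset each buffer's position to 0 so the interpreter reads the input
    // from the start and writes the output from the start.
    inputBuffer.buffer.rewind()
    outputBuffer.buffer.rewind()
    interpreter.run(inputBuffer.buffer, outputBuffer.buffer)
}
```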


1 Comment

calling outputBuffer.buffer.rewind() before interpreter.run() indeed gets rid of the BufferOverflowException! It also seems that calling clear() instead of rewind() has the same effect. Can you give some more context why that is? I'm not familiar with the ByteBuffer class at all.
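To illustrate why rewind() (and clear()) help: a ByteBuffer keeps an internal position that advances with every relative read or write, and a bulk put() that would exceed the limit throws BufferOverflowException. rewind() resets the position to 0 without touching the contents, so the interpreter's bulk write starts at the beginning again. A minimal plain java.nio sketch of these semantics (independent of TFLite):

```kotlin
import java.nio.ByteBuffer

fun main() {
    val buf = ByteBuffer.allocate(8)   // capacity 8, position 0, limit 8
    buf.putFloat(1f).putFloat(2f)      // relative writes advance position to 8 (== limit)

    // Another putFloat() here would throw BufferOverflowException -
    // the same failure Tensor.copyTo hits internally.

    buf.rewind()                       // position back to 0, limit unchanged, data intact
    buf.putFloat(3f)                   // succeeds again

    buf.clear()                        // position 0, limit = capacity; contents are NOT erased
    println(buf.getFloat(0))           // prints 3.0 - clear() only resets the indices
}
```

This is also why clear() "works" here: for a buffer whose limit already equals its capacity, clear() and rewind() both just reset the position to 0; neither wipes the underlying bytes.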
