
I recently tried to build an object detection Android app using a TFLite model. I built my own custom model (a Keras model in HDF5 format) and successfully converted it into a custom TFLite model using the following command:

tflite_convert --keras_model_file=detect.h5 --output_file=detect.tflite --output_format=TFLITE --input_shapes=1,300,300,3 --input_arrays=normalized_input_image_tensor --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' --inference_type=QUANTIZED_UINT8 --mean_values=128 --std_dev_values=127 --change_concat_input_ranges=false --allow_custom_ops

I then added the associated metadata to this model using the following code:

import tensorflow as tf
from tflite_support import metadata as _metadata

populator = _metadata.MetadataPopulator.with_model_file("detect.tflite")
populator.load_associated_files(["labelmap.txt"])
populator.populate()

I then configured this model in the TensorFlow example Android app and made some tweaks to the build.gradle file, DetectorActivity.java, and TFLiteObjectDetectionAPIModel.java. I also made some UI changes to get the look I needed. Additionally, I had to change the 'numBytesPerChannel' value for the float model from '4' to '3', since I was getting an error like this:

Cannot convert between a TensorFlowLite buffer with XYZ bytes and a ByteBuffer with ABC bytes
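
For context, here is a minimal sketch of how the example app sizes its input buffer (the names follow TFLiteObjectDetectionAPIModel, the 300x300x3 input is from my model, and the helper class itself is illustrative, not the actual demo code):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class InputBufferSizing {
    // A QUANTIZED_UINT8 model needs 1 byte per channel (300*300*3 = 270,000 bytes);
    // a float model needs 4 bytes per channel (1,080,000 bytes). There is no valid
    // 3-byte-per-channel case, so changing numBytesPerChannel from 4 to 3 only
    // swaps one size mismatch for another.
    static ByteBuffer allocateInput(int inputSize, boolean isQuantized) {
        int numBytesPerChannel = isQuantized ? 1 : 4; // uint8 vs. float32
        ByteBuffer imgData = ByteBuffer.allocateDirect(
                inputSize * inputSize * 3 * numBytesPerChannel);
        imgData.order(ByteOrder.nativeOrder());
        return imgData;
    }
}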

The build succeeds, yet the debugger throws a fatal BufferOverflowException:

11/13 14:57:02: Launching 'app' on Physical Device.
Install successfully finished in 16 s 851 ms.
$ adb shell am start -n "org.tensorflow.lite.examples.detection/org.tensorflow.lite.examples.detection.DetectorActivity" -a android.intent.action.MAIN -c android.intent.category.LAUNCHER -D
Waiting for application to come online: org.tensorflow.lite.examples.detection.test | org.tensorflow.lite.examples.detection
Waiting for application to come online: org.tensorflow.lite.examples.detection.test | org.tensorflow.lite.examples.detection
Connected to process 22667 on device 'samsung-sm_m315f-RZ8N50B0M5K'.
Waiting for application to come online: org.tensorflow.lite.examples.detection.test | org.tensorflow.lite.examples.detection
Connecting to org.tensorflow.lite.examples.detection
Connected to the target VM, address: 'localhost:46069', transport: 'socket'
Capturing and displaying logcat messages from application. This behavior can be disabled in the "Logcat output" section of the "Debugger" settings page.
I/mples.detectio: Late-enabling -Xcheck:jni
E/mples.detectio: Unknown bits set in runtime_flags: 0x8000
D/ActivityThread: setConscryptValidator
    setConscryptValidator - put
W/ActivityThread: Application org.tensorflow.lite.examples.detection is waiting for the debugger on port 8100...
I/System.out: Sending WAIT chunk
I/System.out: Debugger has connected
    waiting for debugger to settle...
I/chatty: uid=10379(org.tensorflow.lite.examples.detection) identical 1 line
I/System.out: waiting for debugger to settle...
I/System.out: waiting for debugger to settle...
I/System.out: waiting for debugger to settle...
I/System.out: waiting for debugger to settle...
I/System.out: waiting for debugger to settle...
I/System.out: waiting for debugger to settle...
I/chatty: uid=10379(org.tensorflow.lite.examples.detection) identical 2 lines
I/System.out: waiting for debugger to settle...
I/System.out: waiting for debugger to settle...
I/System.out: debugger has settled (1478)
I/mples.detectio: Waiting for a blocking GC ClassLinker
I/mples.detectio: WaitForGcToComplete blocked ClassLinker on ClassLinker for 7.502ms
D/tensorflow: CameraActivity: onCreate org.tensorflow.lite.examples.detection.DetectorActivity@4d5b875
D/PhoneWindow: forceLight changed to true [] from com.android.internal.policy.PhoneWindow.updateForceLightNavigationBar:4274 com.android.internal.policy.DecorView.updateColorViews:1547 com.android.internal.policy.PhoneWindow.dispatchWindowAttributesChanged:3252 android.view.Window.setFlags:1153 com.android.internal.policy.PhoneWindow.generateLayout:2474
I/MultiWindowDecorSupport: [INFO] isPopOver = false
I/MultiWindowDecorSupport: updateCaptionType >> DecorView@59812d[], isFloating: false, isApplication: true, hasWindowDecorCaption: false, hasWindowControllerCallback: true
D/MultiWindowDecorSupport: setCaptionType = 0, DecorView = DecorView@59812d[]
W/mples.detectio: Accessing hidden method Landroid/view/View;->computeFitSystemWindows(Landroid/graphics/Rect;Landroid/graphics/Rect;)Z (greylist, reflection, allowed)
W/mples.detectio: Accessing hidden method Landroid/view/ViewGroup;->makeOptionalFitsSystemWindows()V (greylist, reflection, allowed)
I/CameraManagerGlobal: Connecting to camera service
D/VendorTagDescriptor: addVendorDescriptor: vendor tag id 3854507339 added
I/CameraManagerGlobal: Camera 0 facing CAMERA_FACING_BACK state now CAMERA_STATE_CLOSED for client com.snapchat.android API Level 1
I/CameraManagerGlobal: Camera 1 facing CAMERA_FACING_FRONT state now CAMERA_STATE_CLOSED for client com.dolby.dolby234 API Level 2
I/CameraManagerGlobal: Camera 2 facing CAMERA_FACING_BACK state now CAMERA_STATE_CLOSED for client com.whatsapp API Level 1
I/CameraManagerGlobal: Camera 20 facing CAMERA_FACING_BACK state now CAMERA_STATE_CLOSED for client android.system API Level 2
I/CameraManagerGlobal: Camera 23 facing CAMERA_FACING_BACK state now CAMERA_STATE_CLOSED for client android.system API Level 2
I/CameraManagerGlobal: Camera 3 facing CAMERA_FACING_FRONT state now CAMERA_STATE_CLOSED for client com.sec.android.app.camera API Level 2
I/CameraManagerGlobal: Camera 4 facing CAMERA_FACING_FRONT state now CAMERA_STATE_CLOSED for client vendor.client.pid<4503> API Level 2
I/CameraManagerGlobal: Camera 50 facing CAMERA_FACING_BACK state now CAMERA_STATE_CLOSED for client com.sec.android.app.camera API Level 2
I/CameraManagerGlobal: Camera 52 facing CAMERA_FACING_BACK state now CAMERA_STATE_CLOSED for client android.system API Level 2
I/CameraManagerGlobal: Camera 54 facing CAMERA_FACING_BACK state now CAMERA_STATE_CLOSED for client android.system API Level 2
I/tensorflow: CameraActivity: Camera API lv2?: false
D/tensorflow: CameraActivity: onStart org.tensorflow.lite.examples.detection.DetectorActivity@4d5b875
D/tensorflow: CameraActivity: onResume org.tensorflow.lite.examples.detection.DetectorActivity@4d5b875
I/ViewRootImpl@a101c3c[DetectorActivity]: setView = com.android.internal.policy.DecorView@59812d TM=true MM=false
I/ViewRootImpl@a101c3c[DetectorActivity]: Relayout returned: old=(0,0,1080,2340) new=(0,0,1080,2340) req=(1080,2340)0 dur=31 res=0x7 s={true 532883185664} ch=true
D/OpenGLRenderer: createReliableSurface : 0x7c1211ecc0(0x7c12502000)
D/OpenGLRenderer: makeCurrent EglSurface : 0x0 -> 0x0
I/mali_winsys: new_window_surface() [1080x2340] return: 0x3000
D/OpenGLRenderer: eglCreateWindowSurface : 0x7c120c3600
I/CameraManagerGlobal: Camera 0 facing CAMERA_FACING_BACK state now CAMERA_STATE_OPEN for client org.tensorflow.lite.examples.detection API Level 1
I/tensorflow: CameraConnectionFragment: Desired size: 640x480, min size: 480x480
I/tensorflow: CameraConnectionFragment: Valid preview sizes: [1920x1080, 1440x1080, 1280x720, 1088x1088, 1024x768, 960x720, 720x720, 720x480, 640x480]
I/tensorflow: CameraConnectionFragment: Rejected preview sizes: [800x450, 640x360, 352x288, 320x240, 256x144, 176x144]
    CameraConnectionFragment: Exact size match found.
W/Gralloc3: mapper 3.x is not supported
I/gralloc: Arm Module v1.0
W/Gralloc3: allocator 3.x is not supported
D/OpenGLRenderer: makeCurrent EglSurface : 0x0 -> 0x7c120c3600
I/Choreographer: Skipped 34 frames! The application may be doing too much work on its main thread.
I/ViewRootImpl@a101c3c[DetectorActivity]: MSG_WINDOW_FOCUS_CHANGED 1 1
D/InputMethodManager: prepareNavigationBarInfo() DecorView@59812d[DetectorActivity]
D/InputMethodManager: getNavigationBarColor() -855310
D/InputMethodManager: prepareNavigationBarInfo() DecorView@59812d[DetectorActivity]
D/InputMethodManager: getNavigationBarColor() -855310
V/InputMethodManager: Starting input: tba=org.tensorflow.lite.examples.detection ic=null mNaviBarColor -855310 mIsGetNaviBarColorSuccess true , NavVisible : true , NavTrans : false
D/InputMethodManager: startInputInner - Id : 0
I/InputMethodManager: startInputInner - mService.startInputOrWindowGainedFocus
I/ViewRootImpl@a101c3c[DetectorActivity]: MSG_RESIZED: frame=(0,0,1080,2340) ci=(0,83,0,39) vi=(0,83,0,39) or=1
D/InputMethodManager: prepareNavigationBarInfo() DecorView@59812d[DetectorActivity]
    getNavigationBarColor() -855310
V/InputMethodManager: Starting input: tba=org.tensorflow.lite.examples.detection ic=null mNaviBarColor -855310 mIsGetNaviBarColorSuccess true , NavVisible : true , NavTrans : false
D/InputMethodManager: startInputInner - Id : 0
I/CameraManagerGlobal: Camera 0 facing CAMERA_FACING_BACK state now CAMERA_STATE_ACTIVE for client org.tensorflow.lite.examples.detection API Level 1
W/TFLiteObjectDetectionAPIModelWithInterpreter: cow1 cow2 cow3 cow4
W/TFLiteObjectDetectionAPIModelWithInterpreter: cow5 cow6
I/tflite: Initialized TensorFlow Lite runtime.
I/tensorflow: DetectorActivity: Camera orientation relative to screen canvas: 90
I/tensorflow: DetectorActivity: Initializing at size 640x480
I/tensorflow: DetectorActivity: Preparing image 1 for detection in bg thread.
W/System: A resource failed to call close.
I/tensorflow: DetectorActivity: Running detection on image 1

E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.examples.detection, PID: 22667
java.nio.BufferOverflowException
at java.nio.Buffer.nextPutIndex(Buffer.java:542)
at java.nio.DirectByteBuffer.putFloat(DirectByteBuffer.java:809)
at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:187)
at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:183)
at android.os.Handler.handleCallback(Handler.java:883)
at android.os.Handler.dispatchMessage(Handler.java:100)
at android.os.Looper.loop(Looper.java:237)
at android.os.HandlerThread.run(HandlerThread.java:67)
I/Process: Sending signal. PID: 22667 SIG: 9
Disconnected from the target VM, address: 'localhost:46069', transport: 'socket'

The stack trace points at these lines:

In TFLiteObjectDetectionAPIModel.java:

private static final float IMAGE_MEAN = 127.5f;
private static final float IMAGE_STD = 127.5f;
//...

@Override
protected void addPixelValue(int pixelValue) {
  imgData.putFloat((((pixelValue >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
  imgData.putFloat((((pixelValue >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
  imgData.putFloat(((pixelValue & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
}

In DetectorActivity.java:

@Override
public void run() {
  LOGGER.i("Running detection on image " + currTimestamp);
  final long startTime = SystemClock.uptimeMillis();
  final List<Detector.Recognition> results = detector.recognizeImage(croppedBitmap);

  lastProcessingTimeMs = SystemClock.uptimeMillis() - startTime;
  // ...
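
Incidentally, for a QUANTIZED_UINT8 model the example's addPixelValue is supposed to take a quantized branch that writes raw bytes rather than floats. A minimal sketch of that variant (the standalone method name here is illustrative; in the demo the same logic sits behind an isModelQuantized check):

// Quantized variant: one uint8 per channel instead of a 4-byte float, so a
// buffer allocated with numBytesPerChannel = 1 is filled exactly.
protected void addPixelValueQuantized(int pixelValue) {
  imgData.put((byte) ((pixelValue >> 16) & 0xFF)); // R
  imgData.put((byte) ((pixelValue >> 8) & 0xFF));  // G
  imgData.put((byte) (pixelValue & 0xFF));         // B
}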

Please let me know if I missed any step or did anything wrong.

P.S. I used a poorly trained model before this and the app worked just fine, except that it showed all the bounding boxes at once, with negligible changes between detections. I am now using a well-trained model, which looks like this (via Netron):

[Netron screenshot of the TFLite model]

  • What's the input of your custom model? Can you give us the ABC and XYZ? Commented Nov 13, 2020 at 13:11
  • Hi @T.K The input to the model is (300,300,3), i.e. 270,000 bytes. If I change the value of 'numBytesPerChannel', it throws an error for size (300,300,4), i.e. 360,000 bytes. Commented Nov 14, 2020 at 13:29
  • Welcome to StackOverflow! Yesterday I hit the same error when I tried to write output into an already used (non-empty) buffer. If you changed the model or are using a different one, make sure the detector knows the exact input/output sizes; look at the outputBuffer dims at line 201 of the code. Commented Nov 14, 2020 at 15:00

2 Answers


I was able to solve the error. Basically, the TFLite model I used had an input size that was far too large. This happened because I used a custom model (that is, the fine-tuning was random, so the model was not compatible with Android's TFLiteObjectDetectionAPI example). Another thing: I accidentally used a mobilenet-v2 model as the reference model to train my own, whereas the default model the TFLiteObjectDetectionAPI example uses is ssd-mobilenet-v1. I don't think this had anything to do with my error, but it could throw a compatibility exception and hence lead to some ambiguous errors.

So I used this link to train my model with custom parameters and had to make some changes in the pipeline.config file. Beyond that, the model I trained works just fine, gives me apt results, and is about 70% accurate, which is enough for my purposes for now.

Thank you @Saeid, @T.K and @Alex for your help. I appreciate the structure of TensorFlow's training workflow.

I have solved this issue now, and it was a small error when I think about it. Do let me know if there's anything else! Ciao!


1 Comment

Glad to know your issue is resolved, Abhi. Sounds like your project is moving forward and exciting things are ahead. Good luck!

I have encountered this error before as well. Usually it happens when one of two things is wrong in the code that feeds data into your model:

  1. The image size is incorrect (i.e. you are feeding a buffer larger than the one allocated as the model's input buffer).

  2. The data type is incorrect (i.e. the model expects uint8 but you are feeding a buffer filled with float values).

If you converted the model via the tflite_convert tool, the inputs can sometimes change type between float and int, depending on your arguments.

Is there a way for you to check whether either of these cases is happening?
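
One way to check both at runtime is to ask the interpreter itself. A minimal sketch using the TFLite Java Tensor API (assuming interpreter is the org.tensorflow.lite.Interpreter instance your detector already holds; logInputSpec is a hypothetical helper):

import android.util.Log;
import java.util.Arrays;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.Tensor;

// Log what the model actually expects before feeding it, so a shape or
// dtype mismatch surfaces before the BufferOverflowException does.
static void logInputSpec(Interpreter interpreter) {
  Tensor input = interpreter.getInputTensor(0);
  Log.i("TFLite", "input shape=" + Arrays.toString(input.shape()) // e.g. [1, 300, 300, 3]
      + " type=" + input.dataType()                               // e.g. UINT8 or FLOAT32
      + " bytes=" + input.numBytes());                            // must match imgData.capacity()
}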

1 Comment

Hi there! This pretty much sums it all up. Firstly, I used a different model than what was configured in the demo (mobilenet-v2 instead of ssd-mobilenet-v1), which is why I may have run into this ambiguity. Secondly, the conversion was correct for the model I used, but my model still needed fine-tuning, and this caused an input-data-bytes overflow. And lastly, your second point aptly sums it up: my model clearly had compatibility issues with the TFLiteObjectDetectionAPI example on Android, and hence the buffer was overrun.
