
I need to track cars on the road from top-view video.

My application contains two main parts:

  1. Detecting cars in each frame (a TensorFlow-trained network)
  2. Tracking the detected cars (OpenCV trackers)

I am having trouble with the OpenCV trackers. Initially I tried several different trackers, but only MOSSE is fast enough. This tracker works almost perfectly for the straight-road case, but I ran into problems with rotating cars. This situation occurs at crossroads.

As I understand it, the axis-aligned bounding box of a rotated object is bigger than the bbox of a horizontally or vertically aligned object. As a result, the bbox contains a large amount of static background, and the tracker loses the target object.

Are there any alternative trackers which can track contours (not bounding boxes)? Can I improve the results of the existing OpenCV trackers via settings or by pre-processing the picture?

Schema: [image]

Real image: [image]

  • What you want is the rotated bounding box. However, I am not sure how to use that with the OpenCV trackers yet. Commented Nov 15, 2019 at 12:34
  • Perhaps a SIFT-based approach would be better. Commented Nov 15, 2019 at 13:07
  • @SlawomirOrlowski Objects are small and have low resolution, so I got errors when tracking with SIFT. Commented Nov 18, 2019 at 6:41
  • @Max: OK. My next shot would be GOTURN: Generic Object Tracking Using Regression Networks. Commented Nov 18, 2019 at 10:42
  • Do you detect in each frame? Does the detector have the same problem (axis-aligned bounding box)? Is the camera stationary? Should the size of the (rotated) box be constant for one vehicle? Is the actual box relevant for you? Maybe it's enough to consider the center point of the (too big) box? Commented Nov 20, 2019 at 14:14

5 Answers


If your camera is stationary, the following scenario is feasible:

  1. Use background subtraction methods to separate the background image from the foreground blobs.
  2. Improve the foreground mask using morphological operations.
  3. Detect car blobs and discard the other blobs.
  4. Track the foreground blobs in the video, i.e. track the binary blobs directly, or even apply a Kalman filter (KF).


---

A very basic but effective approach in this scenario might be to track the center coordinates of the bounding box. If the center coordinates only change along one axis (with a small tolerance for either axis), it's a linear motion (not a rotation). If both x and y change, the car is moving in the roundabout.

Its only weakness is that it will flag diagonal motion as well, but since you are looking at a centered roundabout, that shouldn't be an issue.

It is also very memory-efficient.
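A minimal, hypothetical sketch of this heuristic in plain Python (the function name `motion_type` and the `tol` parameter are invented for illustration):

```python
def motion_type(centers, tol=2.0):
    """Classify a short track of bbox centers as 'linear' or 'turning'.

    centers: list of (x, y) bounding-box centers over recent frames.
    tol: pixel tolerance for jitter along the unchanging axis.
    """
    xs = [c[0] for c in centers]
    ys = [c[1] for c in centers]
    dx = max(xs) - min(xs)
    dy = max(ys) - min(ys)
    # movement along only one axis (within tolerance) -> straight road
    if dx <= tol or dy <= tol:
        return "linear"
    return "turning"   # both x and y change: the car is in the roundabout
```

For example, a car driving along a horizontal road yields centers like `[(0, 50), (10, 50), (20, 51)]` (linear), while a car in the roundabout yields something like `[(0, 0), (10, 5), (15, 15)]` (turning).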


---

You could use the PCA method, which can calculate the orientation of a detected object and which way it is facing. You can also change the detection threshold to select objects that look more like the cars in your picture (based on shape and colour, e.g. after an HSV conversion; in your case red).

Link to an introduction to Principal Component Analysis (PCA)
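As a hedged illustration of the PCA idea, the orientation of a single blob can be estimated from its pixel coordinates with plain NumPy (the helper name `blob_orientation` is invented for this sketch):

```python
import numpy as np

def blob_orientation(points):
    """Return the angle (radians) of a blob's principal axis via PCA.

    points: iterable of (x, y) pixel coordinates belonging to one blob.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)          # center the point cloud
    cov = np.cov(centered.T)                   # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigendecomposition
    major = eigvecs[:, np.argmax(eigvals)]     # principal (largest) axis
    return np.arctan2(major[1], major[0])
```

Note the sign of the axis is arbitrary, so angles that differ by 180° describe the same orientation; resolving which way the car is *facing* needs extra information (e.g. motion direction).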

2 Comments

  • Objects are detected only on the first frame. After detection they should be tracked using the tracker and the input bbox. I added a real image (the cars can have any colour).
  • After an object is detected using MOSSE, apply a mask and then apply the PCA method per frame. The mask blanks out everything else in the frame, leaving only the cars in the image, which lets the PCA method find the cars' orientation.
---

Method 1:

  - Detect bounding boxes and subtract the background to get rotated rectangles from the blobs.

Method 2:

  - Implement your own version of the detector that predicts rotated boxes.

Method 3:

  - Use segmentation instead, e.g. U-Net.


---

There are no other trackers than the ones found in the library.
Your best bet is to filter the image and use cv2.findContours. Optical flow and background subtraction will help with this. You can combine optical flow with your car detector to rule out false positives. https://docs.opencv.org/3.4/d4/dee/tutorial_optical_flow.html https://docs.opencv.org/3.4/d1/dc5/tutorial_background_subtraction.html

3 Comments

  • 1. Optical flow doesn't work properly for me, because the objects are small and the points returned by goodFeaturesToTrack() are not good :/ 2. I already used background subtraction, but it doesn't work correctly with static objects.
  • You can pass in custom features as well. Background subtraction needs multiple frames to model the background. This should be no problem if you have a camera mounted above.
