
I am trying to isolate moving objects from a moving camera so that I can later apply some further processing algorithms to them, but I seem to have become a little stuck.

So far I am working with OpenCV and getting sparse optical flow from PyrLKOpticalFlow. The general idea I was working from was to find the features that move differently from the background points in the image, then cluster these differently-moving features into moving objects for further tracking/processing. My problem is that while I have found a few academic papers that use a strategy like this, thus far I haven't been able to find a simple way to accomplish it for myself.
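For reference, here is roughly the kind of setup I mean, sketched with the plain cv2 Python API rather than the GPU classes I am actually using; the file names and parameter values are just placeholders:

```python
import cv2

# Two consecutive frames, converted to grayscale for feature tracking.
prev_gray = cv2.cvtColor(cv2.imread("frame0.png"), cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(cv2.imread("frame1.png"), cv2.COLOR_BGR2GRAY)

# Detect corners in the previous frame to track.
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=8)

# Track them into the next frame with pyramidal Lucas-Kanade.
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                 prev_pts, None,
                                                 winSize=(21, 21), maxLevel=3)

# Keep only the points that were tracked successfully.
ok = status.ravel() == 1
prev_pts, next_pts = prev_pts[ok], next_pts[ok]
```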

What would be a good method for using this optical flow data to detect moving objects from a moving camera? Is this even the best approach to be taking, or is there some simpler approach that I may be overlooking?

2 Answers


I managed to find a method that more or less does what I want in OpenCV.

After finding the sparse optical flow points between two consecutive images with GoodFeaturesToTrackDetector and PyrLKOpticalFlow (giving me prevPts and nextPts), I use findHomography with RANSAC to estimate the motion due to camera movement while excluding the outliers caused by independently moving objects. I then use perspectiveTransform to warp prevPts to account for the camera motion (giving me warpedPts). Finally, I compare warpedPts to nextPts to find the moving objects.
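Roughly, in code (a simplified sketch with the plain cv2 API; it assumes prevPts and nextPts are the matched Nx1x2 float32 point sets from the LK step, and the two threshold values are things you would tune for your footage):

```python
import cv2
import numpy as np

def find_moving_points(prev_pts, next_pts, residual_thresh=3.0):
    """Return the next-frame points whose motion is not explained by the
    estimated camera motion."""
    # RANSAC fits the homography to the dominant background motion and
    # flags independently moving points as outliers (inlier_mask == 0).
    H, inlier_mask = cv2.findHomography(prev_pts, next_pts, cv2.RANSAC, 3.0)

    # Warp the previous points by the estimated camera motion.
    warped_pts = cv2.perspectiveTransform(prev_pts, H)

    # Points with a large residual after compensation sit on moving objects.
    residual = np.linalg.norm(warped_pts - next_pts, axis=2).ravel()
    return next_pts[residual > residual_thresh]
```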

The end result is that even with the camera moving, corresponding points in warpedPts and nextPts barely differ when the object is stationary, while they differ significantly when the tracked points lie on a moving object. From there it is just a matter of grouping the moving points on the basis of proximity and similarity of movement.
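For the grouping, something as simple as a greedy proximity pass is enough to get started (a sketch; the distance threshold is a value to tune, and you could additionally compare the flow vectors before merging points into a cluster):

```python
import numpy as np

def group_points(points, max_dist=40.0):
    """Greedy clustering of an (N, 2) array of moving points: each point
    joins the first cluster whose centroid is within max_dist, otherwise
    it starts a new cluster."""
    clusters, centroids = [], []
    for p in points:
        for i, c in enumerate(centroids):
            if np.linalg.norm(p - c) < max_dist:
                clusters[i].append(p)
                centroids[i] = np.mean(clusters[i], axis=0)
                break
        else:
            clusters.append([p])
            centroids.append(p.astype(float))
    return [np.array(c) for c in clusters]

# e.g. groups = group_points(moving_pts.reshape(-1, 2)) for Nx1x2 moving_pts
```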


First of all, as I remember from theory, optical flow actually works best with a moving camera (not with a still scene and moving objects). That makes sense because it assumes the same flow within a neighbourhood of pixels. A great starting point would be to read about the Lucas-Kanade method to understand what is going on.
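Concretely, the assumption is that every pixel $p_i$ in a small window around the tracked point shares the same displacement $(u, v)$, which turns the per-pixel brightness-constancy constraint into an overdetermined linear system solved by least squares (this is the standard textbook formulation, not anything specific to the OpenCV implementation):

$$
I_x(p_i)\,u + I_y(p_i)\,v = -I_t(p_i),\quad i = 1,\dots,n
\qquad\Longrightarrow\qquad
\begin{bmatrix} u \\ v \end{bmatrix} = (A^\top A)^{-1} A^\top b,
$$

where each row of $A$ is $\big[\,I_x(p_i)\;\; I_y(p_i)\,\big]$ and $b_i = -I_t(p_i)$.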

Second, your problem is not about tracking some features but about detecting moving objects in the scene. For that, instead of a sparse set of points, you may need to go for dense optical flow. If your scene were still, background subtraction would also be a great option.
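To illustrate the dense option, here is a minimal sketch using Farneback flow (one of the dense methods shipped with OpenCV); the parameter values are typical defaults, and thresholding the raw magnitude like this is only meaningful once global camera motion has been compensated, or with a static camera:

```python
import cv2
import numpy as np

prev_gray = cv2.cvtColor(cv2.imread("frame0.png"), cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(cv2.imread("frame1.png"), cv2.COLOR_BGR2GRAY)

# Dense flow: one (dx, dy) vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)

# Pixels whose flow magnitude exceeds a threshold become candidate
# moving-object pixels; connected components of the mask give blobs.
magnitude = np.linalg.norm(flow, axis=2)
moving_mask = (magnitude > 2.0).astype(np.uint8) * 255
num_labels, labels = cv2.connectedComponents(moving_mask)
```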

3 Comments

My problem is that for this application we need to keep everything running as close to real-time as possible (15 FPS for the cameras we are using). While using dense optical flow would be nice, between this and the other processing that I need to run for each frame there just isn't enough time for it even when I am doing most of the processing on the GPU.
So you need to go for the LK method with grid-wise pixel selection and find the moving parts approximately. What about dynamic background subtraction? I'm not sure if it's implemented in OpenCV, though.
What's dynamic background subtraction here?
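For what it is worth, "dynamic background subtraction" here presumably means a background model that keeps updating over time. OpenCV does ship adaptive subtractors such as MOG2 that do exactly that, although they assume an essentially static camera, so they are not a drop-in fix for the moving-camera case above. A minimal usage sketch (the video path is a placeholder):

```python
import cv2

cap = cv2.VideoCapture("input.avi")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Foreground mask: 255 = foreground, 127 = shadow, 0 = background.
    fg_mask = subtractor.apply(frame)
cap.release()
```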
