
So I have been trying to make a motion tracker to track a dog moving in a video (recorded top-down), produce a cropped video showing the dog, and ignore the rest of the background.

I first tried object tracking using the algorithms available in OpenCV 3 (BOOSTING, MIL, KCF, TLD, MEDIANFLOW, and GOTURN, which returns an error I couldn't solve yet) from this link, and I even tried a basic motion-tracking approach that subtracts the first frame, but none of them gives a good result. Link
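For reference, the OpenCV 3 single-object tracker API generally boils down to something like this (a minimal sketch only; it assumes a contrib build where cv2.TrackerKCF_create exists, and the video path, window names, and initial box are placeholders):

import cv2

video = cv2.VideoCapture("dog.avi")  # placeholder path
ok, frame = video.read()

# Draw the initial bounding box around the dog by hand
bbox = cv2.selectROI("Select dog", frame, False)
tracker = cv2.TrackerKCF_create()
tracker.init(frame, bbox)

while True:
    ok, frame = video.read()
    if not ok:
        break
    ok, bbox = tracker.update(frame)  # success flag and updated box
    if ok:
        x, y, w, h = [int(v) for v in bbox]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

video.release()
cv2.destroyAllWindows()

The trackers only follow a box they are given; they do not find the dog by themselves, which is why the initial ROI has to come from somewhere (manual selection in this sketch).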

I would prefer code with a preset rectangle box that surrounds the area of motion once it is detected, something like in this video.

I'm not very familiar with OpenCV, but I believe tracking a single moving object shouldn't be an issue, since a lot of work has already been done in this area. Should I consider other libraries/APIs, or is there better code or a tutorial I can follow to get this done? My goal is to use this later with a neural network (which is why I'm trying to solve it with Python/OpenCV).

Thanks for any help/advice

Edit:

I removed the previous code to make the post cleaner.

Also, based on the feedback I got and further research, I was able to modify some code to get close to my desired result. However, I still have an annoying problem with the tracking: the first frame seems to affect the rest of the tracking, because even after the dog moves, its first location keeps being detected. I tried to limit the tracking to only one action using a flag, but then the detection gets messed up. Here is the code, with pictures showing the results:

import imutils
import time
import cv2

previousFrame = None

def searchForMovement(cnts, frame, min_area):

    text = "Undetected"

    flag = 0

    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < min_area:
            continue

        #Use the flag to prevent the detection of other motions in the video
        if flag == 0:
            (x, y, w, h) = cv2.boundingRect(c)

            #print("x y w h")
            #print(x,y,w,h) 
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
            text = "Detected"
            flag = 1

    return frame, text

def trackMotion(ret,frame, gaussian_kernel, sensitivity_value, min_area):


    if ret:

        # Convert to grayscale and blur it for better frame difference
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (gaussian_kernel, gaussian_kernel), 0)



        global previousFrame

        if previousFrame is None:
            previousFrame = gray
            return frame, "Uninitialized", frame, frame



        frameDiff = cv2.absdiff(previousFrame, gray)
        thresh = cv2.threshold(frameDiff, sensitivity_value, 255, cv2.THRESH_BINARY)[1]

        thresh = cv2.dilate(thresh, None, iterations=2)
        _, cnts, _ = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        frame, text = searchForMovement(cnts, frame, min_area)
        #previousFrame = gray

    return frame, text, thresh, frameDiff




if __name__ == '__main__':

    video = "Track.avi"
    video0 = "Track.mp4"
    video1= "Ntest1.avi"
    video2= "Ntest2.avi"

    camera = cv2.VideoCapture(video1)
    time.sleep(0.25)
    min_area = 5000 #int(sys.argv[1])

    cv2.namedWindow("Security Camera Feed")


    while camera.isOpened():

        gaussian_kernel = 27
        sensitivity_value = 5
        min_area = 2500

        ret, frame = camera.read()

        #Check if the next camera read is not null
        if ret:
            frame, text, thresh, frameDiff = trackMotion(ret,frame, gaussian_kernel, sensitivity_value, min_area)

        else:
            print("Video Finished")
            break


        cv2.namedWindow('Thresh',cv2.WINDOW_NORMAL)
        cv2.namedWindow('Frame Difference',cv2.WINDOW_NORMAL)
        cv2.namedWindow('Security Camera Feed',cv2.WINDOW_NORMAL)

        cv2.resizeWindow('Thresh', 800,600)
        cv2.resizeWindow('Frame Difference', 800,600)
        cv2.resizeWindow('Security Camera Feed', 800,600)
        # uncomment to see the thresh and frame difference displays
        cv2.imshow("Thresh", thresh)
        cv2.imshow("Frame Difference", frameDiff)



        cv2.putText(frame, text, (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
        cv2.imshow("Security Camera Feed", frame)

        key = cv2.waitKey(3) & 0xFF
        if key == 27 or key == ord('q'):
            print("Bye")
            break

    camera.release()
    cv2.destroyAllWindows()

This picture shows how the very first frame still affects the frame-difference results, which forces the box to cover an area with no motion.

Result showing the frame difference and video display

This one shows a case where the actual motion is ignored and a no-longer-existing motion (the frame difference between the second and first frames of the video) is falsely detected. When I allow multiple tracking it tracks both, which is still wrong since it detects an empty area.


Does anyone have an idea where the code is wrong or lacking? I keep trying but cannot get it to work properly.

Thank you in advance !!

  • Do not just put the link; where is the code you tried? Commented Jan 4, 2018 at 5:50
  • @Silencer I added that in the edit. Thanks for the comment Commented Jan 4, 2018 at 6:50
  • I think you should first identify the problem correctly and then try solutions. You want to first detect motion... and maybe track this object? Or maybe only detect motion at each step? The first algorithms you mention are for tracking only, not for detection, which is why you need the ROI (this is your "object" to track). Also, what happens if you have more than one object moving? I would recommend first trying to detect motion correctly; you can try something like this (see the sketch after these comments). Commented Jan 4, 2018 at 7:59
  • @api55 Thank you for your comment. I am trying to follow the lead of your recommendation and once I get some results I will edit and mention it. Concerning your questions, it's as you said, detecting the motion and tracking that object. In my scenario, there is a dog inside a room and I want to track it (with a boundary box). So basically, dog moves -> motion is detected -> one boundary box is created and keeps tracking it (ignoring any other motion in the video). Commented Jan 6, 2018 at 8:41
  • 1
    @Lewis I didn't really get satisfying results with this kind of method and if your background is not static it will be even more complicated. I ended up using YOLO for object detection to perform tracking. Commented Oct 30, 2018 at 1:27
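Following up on api55's suggestion to get the detection right first, here is a minimal sketch of that idea using OpenCV's built-in background subtractor instead of a fixed first frame (an illustration only; cv2.createBackgroundSubtractorMOG2 is assumed to be available, and the file name, area threshold, and parameters are placeholders):

import cv2

camera = cv2.VideoCapture("Ntest1.avi")  # placeholder path
# The subtractor keeps a running background model, so the reference
# is refreshed every frame instead of being pinned to the first one.
subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=25, detectShadows=False)

while camera.isOpened():
    ret, frame = camera.read()
    if not ret:
        break

    mask = subtractor.apply(frame)               # foreground mask for this frame
    mask = cv2.dilate(mask, None, iterations=2)
    cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[1]  # OpenCV 3 return order

    # Keep only the largest moving blob instead of the first contour found
    cnts = [c for c in cnts if cv2.contourArea(c) > 2500]
    if cnts:
        x, y, w, h = cv2.boundingRect(max(cnts, key=cv2.contourArea))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("Motion", frame)
    if cv2.waitKey(30) & 0xFF in (27, ord('q')):
        break

camera.release()
cv2.destroyAllWindows()

Unlike differencing against a fixed first frame, the background model here keeps adapting, so a spot the dog left a while ago stops being flagged as motion.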

1 Answer


To handle motion detection I have created generic components on the NPM registry and Docker Hub. The client (a React app) captures webcam images, and a Python server based on OpenCV analyses these images to determine whether there is motion. The client can specify a callback function, which the server calls each time motion is detected. The server is just a Docker image which you can pull and run, and you then point the client at its URL.

NPM Registry(Client)

Registry Link:

https://www.npmjs.com/settings/kunalpimparkhede/packages

Command

npm install motion-detector-client

Docker Image (Server)

Link

https://hub.docker.com/r/kunalpimparkhede/motiondetectorwebcam

Command

docker pull kunalpimparkhede/motiondetectorwebcam

You just need to write following code to have motion detection

Usage:

import MotionDetectingClient from './MotionDetectingClient';

<MotionDetectingClient server="http://0.0.0.0:8080" callback={handleMovement}/>

function handleMovement(pixels) {
  console.log("Movement By Pixel=" + pixels)
}

On server side : just start the docker server on port 8080:

docker run --name motion-detector-server-app -p 8080:5000 kunalpimparkhede/motiondetectorwebcam
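Conceptually, what such a server does on each frame is along these lines (a rough sketch for illustration only, assuming a Flask endpoint and simple frame differencing; this is not the actual code inside the Docker image, and the route name and thresholds are made up):

import cv2
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)
previous = None

@app.route("/frame", methods=["POST"])  # hypothetical endpoint
def frame():
    global previous
    # Decode the image bytes the client posted
    img = cv2.imdecode(np.frombuffer(request.data, np.uint8), cv2.IMREAD_GRAYSCALE)
    if previous is None:
        previous = img
        return jsonify(motion=False, pixels=0)
    diff = cv2.absdiff(previous, img)
    previous = img
    moved = int(cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]))
    return jsonify(motion=moved > 500, pixels=moved)  # arbitrary threshold

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

The real image may well work differently; the point is only that the client posts frames and gets a motion result back, which is what the callback hook above reacts to.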

