I can get frames from my webcam using OpenCV in Python. The camshift example is close to what I want, but I don't want human intervention to define the object. I want to get the center point of the total pixels that have changed over the course of several frames, i.e. the center of the moving object.
4 Answers
I've got some working code that I translated from the C version found in the blog post Motion Detection using OpenCV:
#!/usr/bin/env python

import cv

class Target:

    def __init__(self):
        self.capture = cv.CaptureFromCAM(0)
        cv.NamedWindow("Target", 1)

    def run(self):
        # Capture first frame to get size
        frame = cv.QueryFrame(self.capture)
        frame_size = cv.GetSize(frame)
        color_image = cv.CreateImage(cv.GetSize(frame), 8, 3)
        grey_image = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 1)
        moving_average = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_32F, 3)

        first = True

        while True:
            closest_to_left = cv.GetSize(frame)[0]
            closest_to_right = cv.GetSize(frame)[1]

            color_image = cv.QueryFrame(self.capture)

            # Smooth to get rid of false positives
            cv.Smooth(color_image, color_image, cv.CV_GAUSSIAN, 3, 0)

            if first:
                difference = cv.CloneImage(color_image)
                temp = cv.CloneImage(color_image)
                cv.ConvertScale(color_image, moving_average, 1.0, 0.0)
                first = False
            else:
                cv.RunningAvg(color_image, moving_average, 0.020, None)

            # Convert the scale of the moving average.
            cv.ConvertScale(moving_average, temp, 1.0, 0.0)

            # Minus the current frame from the moving average.
            cv.AbsDiff(color_image, temp, difference)

            # Convert the image to grayscale.
            cv.CvtColor(difference, grey_image, cv.CV_RGB2GRAY)

            # Convert the image to black and white.
            cv.Threshold(grey_image, grey_image, 70, 255, cv.CV_THRESH_BINARY)

            # Dilate and erode to get people blobs
            cv.Dilate(grey_image, grey_image, None, 18)
            cv.Erode(grey_image, grey_image, None, 10)

            storage = cv.CreateMemStorage(0)
            contour = cv.FindContours(grey_image, storage, cv.CV_RETR_CCOMP, cv.CV_CHAIN_APPROX_SIMPLE)
            points = []

            while contour:
                bound_rect = cv.BoundingRect(list(contour))
                contour = contour.h_next()

                pt1 = (bound_rect[0], bound_rect[1])
                pt2 = (bound_rect[0] + bound_rect[2], bound_rect[1] + bound_rect[3])
                points.append(pt1)
                points.append(pt2)
                cv.Rectangle(color_image, pt1, pt2, cv.CV_RGB(255, 0, 0), 1)

            if len(points):
                center_point = reduce(lambda a, b: ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2), points)
                cv.Circle(color_image, center_point, 40, cv.CV_RGB(255, 255, 255), 1)
                cv.Circle(color_image, center_point, 30, cv.CV_RGB(255, 100, 0), 1)
                cv.Circle(color_image, center_point, 20, cv.CV_RGB(255, 255, 255), 1)
                cv.Circle(color_image, center_point, 10, cv.CV_RGB(255, 100, 0), 1)

            cv.ShowImage("Target", color_image)

            # Listen for ESC key
            c = cv.WaitKey(7) % 0x100
            if c == 27:
                break

if __name__ == "__main__":
    t = Target()
    t.run()
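Note that the reduce() call above computes a running pairwise midpoint, so rectangles found later in the list end up weighted more heavily than earlier ones. If you'd rather have the plain arithmetic mean of all the corner points, a small tweak along these lines (mine, not from the blog post) would do it:

    if points:
        center_point = (
            sum(p[0] for p in points) // len(points),
            sum(p[1] for p in points) // len(points),
        )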
5 Comments
Frank
Thank you for your code. It works, and it can detect all the moving objects. However, it cannot track a particular moving object. Is there a better way of tracking a moving object? I am thinking of calculating the center of each contour and comparing how the positions change between two frames, but the hard part is that if there are many contours in a frame and they are very close together, it's hard to tell which contour in the next frame corresponds to a given contour in the current frame.
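A rough sketch of what I mean by matching centroids between frames, in plain Python (the function name, the 50-pixel threshold and the greedy strategy are just placeholders of mine, not anything from the answer):

    import math

    def match_centroids(prev_centroids, curr_centroids, max_dist=50.0):
        """Greedily pair each centroid from the previous frame with the
        nearest unused centroid in the current frame; anything farther
        than max_dist is treated as a new or vanished object."""
        pairs = []
        used = set()
        for i, p in enumerate(prev_centroids):
            best_j, best_d = None, max_dist
            for j, c in enumerate(curr_centroids):
                if j in used:
                    continue
                d = math.hypot(p[0] - c[0], p[1] - c[1])
                if d < best_d:
                    best_j, best_d = j, d
            if best_j is not None:
                used.add(best_j)
                pairs.append((i, best_j))
        return pairs

    # e.g. the centroid of each bounding rectangle from FindContours:
    #     centroid = (x + w / 2, y + h / 2)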
Matt Williamson
It's been a while since I used OpenCV, but I recall using a demo included with the source, where you drag a square around an object with the mouse to track. It grabs the selection's color histogram and looks for that. I believe it was this piece of code: code.ros.org/trac/opencv/browser/trunk/opencv/samples/python/…
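From memory, that sample builds a hue histogram of the dragged selection and back-projects it on every frame. A minimal sketch of the same idea with the newer cv2 API (the bin count, termination criteria and hand-picked track_window are assumptions of mine, not the sample's exact values):

    import cv2
    import numpy as np

    def track_with_camshift(track_window, device=0):
        # track_window is the hand-picked (x, y, w, h) box around the object.
        cap = cv2.VideoCapture(device)
        ok, frame = cap.read()
        if not ok:
            return
        x, y, w, h = track_window
        hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
        cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
        term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
            rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
            pts = cv2.boxPoints(rot_rect).astype(np.int32)
            cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
            cv2.imshow("CamShift", frame)
            if cv2.waitKey(30) & 0xFF == 27:
                break
        cap.release()
        cv2.destroyAllWindows()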
PrestonDocks
Is it possible to use this script and serve the color_image as an MJPEG network stream at the same time? I want to be able to monitor the camera on the PC it is connected to and, at the same time, view the stream from my Android phone.
Matt Williamson
I haven't tried it, but a cursory google search turns up stuff like ariandy1.wordpress.com/2013/04/07/…
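If it helps, here's a bare-bones sketch of serving frames as MJPEG over HTTP with the standard library and cv2.imencode. The port, the shared globals and the threading scheme are assumptions of mine, and it expects numpy-style cv2 frames (the old IplImage frames would need converting first):

    import threading
    import time

    import cv2
    from http.server import BaseHTTPRequestHandler, HTTPServer

    latest_frame = None            # updated by the capture loop (BGR numpy frame)
    frame_lock = threading.Lock()

    class MJPEGHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # multipart/x-mixed-replace is the usual MJPEG-over-HTTP trick.
            self.send_response(200)
            self.send_header("Content-Type",
                             "multipart/x-mixed-replace; boundary=frame")
            self.end_headers()
            while True:
                with frame_lock:
                    frame = None if latest_frame is None else latest_frame.copy()
                if frame is None:
                    time.sleep(0.05)
                    continue
                ok, jpg = cv2.imencode(".jpg", frame)
                if not ok:
                    continue
                self.wfile.write(b"--frame\r\nContent-Type: image/jpeg\r\n\r\n")
                self.wfile.write(jpg.tobytes())
                self.wfile.write(b"\r\n")
                time.sleep(0.05)   # rough ~20 fps cap

    def start_stream_server(port=8080):
        server = HTTPServer(("", port), MJPEGHandler)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        return server

    # In the main capture loop, after each frame is grabbed:
    #     with frame_lock:
    #         latest_frame = frame   # must be a cv2-style numpy array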
kramer65
This code doesn't work for OpenCV version 3.2. I tried messing around with it to make it work, but I'm having a lot of trouble with it. Do you maybe have an updated version of this code available which you can share with us?
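For what it's worth, here's my rough, untested attempt at the same running-average approach with the cv2 API (OpenCV 3.x); treat it as a sketch rather than a drop-in replacement for the answer above:

    import cv2
    import numpy as np

    def run():
        capture = cv2.VideoCapture(0)
        moving_average = None
        while True:
            ok, color_image = capture.read()
            if not ok:
                break
            color_image = cv2.GaussianBlur(color_image, (3, 3), 0)
            if moving_average is None:
                moving_average = np.float32(color_image)
            else:
                cv2.accumulateWeighted(color_image, moving_average, 0.020)
            background = cv2.convertScaleAbs(moving_average)
            difference = cv2.absdiff(color_image, background)
            grey = cv2.cvtColor(difference, cv2.COLOR_BGR2GRAY)
            _, grey = cv2.threshold(grey, 70, 255, cv2.THRESH_BINARY)
            grey = cv2.dilate(grey, None, iterations=18)
            grey = cv2.erode(grey, None, iterations=10)
            # OpenCV 3.x returns (image, contours, hierarchy); 4.x drops the image.
            contours = cv2.findContours(grey, cv2.RETR_CCOMP,
                                        cv2.CHAIN_APPROX_SIMPLE)[-2]
            points = []
            for contour in contours:
                x, y, w, h = cv2.boundingRect(contour)
                points.extend([(x, y), (x + w, y + h)])
                cv2.rectangle(color_image, (x, y), (x + w, y + h), (0, 0, 255), 1)
            if points:
                cx = sum(p[0] for p in points) // len(points)
                cy = sum(p[1] for p in points) // len(points)
                cv2.circle(color_image, (cx, cy), 40, (255, 255, 255), 1)
            cv2.imshow("Target", color_image)
            if cv2.waitKey(7) & 0xFF == 27:
                break
        capture.release()
        cv2.destroyAllWindows()

    if __name__ == "__main__":
        run()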
See the forum post Motion tracking using OpenCV.
I believe you are capable of reading and translating the source code to Python, right?
2 Comments
Matt Williamson
I'll give it a try and let you know.
Matt Williamson
I converted it to Python, but I'm afraid I'm getting the same points every time after calling CalcOpticalFlowPyrLK. Any ideas? Here's my code: friendpaste.com/7lM9Cmiyif1fIVwrgKBJnG
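One thing worth double-checking is whether the tracked points are fed back in as the previous points of the next call, rather than reusing the original points every frame. A minimal cv2 sketch of that feedback loop (the parameter values and webcam source are my own assumptions; this is not the friendpaste code):

    import cv2

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    prev_grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    prev_pts = cv2.goodFeaturesToTrack(prev_grey, maxCorners=100,
                                       qualityLevel=0.3, minDistance=7)

    while True:
        ok, frame = cap.read()
        if not ok or prev_pts is None or len(prev_pts) == 0:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_grey, grey,
                                                         prev_pts, None)
        good = next_pts[status.flatten() == 1]
        for x, y in good.reshape(-1, 2):
            cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
        cv2.imshow("LK", frame)
        # Carry this frame's results into the next iteration -- reusing the
        # original prev_grey/prev_pts every time yields identical output points.
        prev_grey = grey
        prev_pts = good.reshape(-1, 1, 2)
        if cv2.waitKey(30) & 0xFF == 27:
            break

    cap.release()
    cv2.destroyAllWindows()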
if faces:
    for ((x, y, w, h), n) in faces:
        pt1 = (int(x * image_scale), int(y * image_scale))
        pt2 = (int((x + w) * image_scale), int((y + h) * image_scale))
        ptcx = ((pt1[0] + pt2[0]) / 2) / 128
        ptcy = ((pt1[1] + pt2[1]) / 2) / 96
        cv.Rectangle(gray, pt1, pt2, cv.RGB(255, 0, 0), 3, 8, 0)
        print ptcx
        print ptcy
        b = ('S' + str(ptcx) + str(ptcy))
This is the part of the code where I tried to get the center of the moving object while it is tracked with a rectangular boundary.
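For context, a minimal sketch of how the same center calculation might look with the cv2 API and a Haar cascade (the cascade file, the capture source and the 128/96 grid scaling are assumptions based on the snippet above):

    import cv2

    # This cascade file ships with the opencv-python package; adjust the path
    # if yours lives elsewhere.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(grey, 1.3, 5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 3)
            center = (x + w // 2, y + h // 2)
            # Coarse grid coordinates, mirroring the /128 and /96 above.
            print(center[0] // 128, center[1] // 96)
        cv2.imshow("Faces", frame)
        if cv2.waitKey(30) & 0xFF == 27:
            break

    cap.release()
    cv2.destroyAllWindows()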
The following link tracks moving vehicles as well as counting them. It is based on OpenCV and is written in Python 2.7.
OpenCV and Python