I'm trying to track motion with OpenCV in Python. If a pixel's color differs from the previous frame by more than a threshold, it gets set to black; otherwise (the pixel is static) it gets set to white. I have this working pretty well, but comparing every pixel one by one takes a big performance hit and it runs too slowly.
My code:
    import numpy as np
    import cv2

    # Sum of absolute per-channel differences between two BGR pixels.
    def distance(b1, g1, r1, b2, g2, r2):
        return abs(b2 - b1) + abs(g2 - g1) + abs(r2 - r1)

    pixelStep = 1
    lastFrame = None
    thresh = 100
    cap = cv2.VideoCapture(0)

    while True:
        flag, frame = cap.read()
        frameInst = frame.copy()
        height = np.size(frame, 0)
        width = np.size(frame, 1)
        if lastFrame is not None:  # "!= None" misbehaves on NumPy arrays; use "is not None"
            for x in range(0, height, pixelStep):
                for y in range(0, width, pixelStep):
                    b1 = lastFrame.item(x, y, 0)
                    g1 = lastFrame.item(x, y, 1)
                    r1 = lastFrame.item(x, y, 2)
                    b2 = frame.item(x, y, 0)
                    g2 = frame.item(x, y, 1)
                    r2 = frame.item(x, y, 2)
                    dist = distance(b1, g1, r1, b2, g2, r2)
                    colorValue = 255  # static pixel: white
                    if dist > thresh:
                        colorValue = 0  # pixel changed since the last frame: black
                    frame.itemset(x, y, 0, colorValue)
                    frame.itemset(x, y, 1, colorValue)
                    frame.itemset(x, y, 2, colorValue)
        lastFrame = frameInst
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
If I change pixelStep to 3 it runs fast enough and still looks right. Am I approaching this correctly, or do I need a different approach entirely?
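For what it's worth, I suspect the per-pixel loop is the bottleneck, and the same thresholding can be expressed as whole-array NumPy operations. Here is a minimal sketch of that idea (the function name `motion_mask` is mine, not from any library, and I haven't profiled it against the loop version):

```python
import numpy as np

def motion_mask(last_frame, frame, thresh=100):
    # Widen to int16 so the subtraction of uint8 channels can't wrap around,
    # then sum absolute per-channel differences for every pixel at once.
    dist = np.abs(frame.astype(np.int16) - last_frame.astype(np.int16)).sum(axis=2)
    # Changed pixels (dist > thresh) become black, static pixels white.
    out = np.where(dist > thresh, 0, 255).astype(np.uint8)
    # Replicate the single-channel result across the B, G, R channels.
    return np.repeat(out[:, :, np.newaxis], 3, axis=2)
```

Inside the capture loop this would replace the two nested `for` loops with a single call, e.g. `frame = motion_mask(lastFrame, frame)`.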