I have a 3-channel numpy array, i.e. an image, and I want to mask out some areas and then calculate the mean of the unmasked areas. Whenever I try to convert my numpy array to a masked numpy array I get the following error:
raise MaskError(msg % (nd, nm))
numpy.ma.core.MaskError: Mask and data not compatible: data size is 325080, mask size is 108360.
For reference, my array (image) shape is (301, 360, 3). I create my mask by building an array of zeros with the same height and width, then drawing a filled polygon of 1's (True) onto it.
My code is:
import numpy as np
import cv2

mask = np.zeros((src.shape[0], src.shape[1], 1), dtype='uint8')
cv2.drawContours(mask, [np.array(poly)], -1, (1,), -1)
msrc = np.ma.array(src, mask=mask, dtype='uint8') # error on this line
mean = np.ma.mean(msrc)
What am I doing wrong and how can I fix it to successfully create a masked array in numpy?
src is (301, 360, 3), but the mask is only (301, 360, 1), so the two don't line up element for element. Duplicate the mask across the channels when you build the masked array:
msrc = np.ma.array(src, mask=np.dstack((mask, mask, mask)), dtype='uint8')
numpy is an arbitrary array manipulation library. It has no concept of channels or pixels or any such thing. If you want to mask all of the "channels", you need to translate that from image-processing-speak to numpy-speak: "masking all of the channels" == "broadcasting a mask across axis 2 (the channel axis)". Sometimes numpy can handle broadcasting for you. This is not one of those times. When numpy does not broadcast for you, you simply need to duplicate your mask yourself.
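For completeness, here is a minimal sketch of the full flow with the duplicated mask, assuming src and poly are defined as in the question (src a (301, 360, 3) uint8 image, poly a list of polygon vertices); the names mask3, mean_all and mean_per_channel are just for illustration:
import numpy as np
import cv2

# Single-channel mask: 1 inside the polygon, 0 elsewhere
mask = np.zeros((src.shape[0], src.shape[1], 1), dtype='uint8')
cv2.drawContours(mask, [np.array(poly, dtype=np.int32)], -1, (1,), -1)

# Duplicate the mask across the channel axis so its shape matches src:
# (301, 360, 1) -> (301, 360, 3)
mask3 = np.dstack((mask, mask, mask))

# Note: in numpy.ma, nonzero (True) mask entries are *excluded* from
# computations, so the polygon region is the part that gets masked out.
msrc = np.ma.array(src, mask=mask3, dtype='uint8')

mean_all = msrc.mean()                                # mean over all unmasked values
mean_per_channel = msrc.reshape(-1, 3).mean(axis=0)   # per-channel means
np.repeat(mask, 3, axis=2) builds the same three-channel mask if you prefer that over np.dstack.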