import cv2
import numpy as np

img = cv2.imread('/home/user/Documents/workspace/ImageProcessing/img.JPG')
image = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# (lower, upper) boundaries for red, blue, yellow, and gray
boundaries = [
([17, 15, 100], [50, 56, 200]),
([86, 31, 4], [220, 88, 50]),
([25, 146, 190], [62, 174, 250]),
([103, 86, 65], [145, 133, 128])]


for i, (lower, upper) in enumerate(boundaries):

    lower = np.array(lower, dtype="uint8")
    upper = np.array(upper, dtype="uint8")

    mask = cv2.inRange(image, lower, upper)
    output = cv2.bitwise_and(image, image, mask=mask)

    cv2.imwrite(str(i) + 'image.jpg', output)

I am trying to isolate the colors red, blue, yellow, and gray from an image (separately). It works so far, but the "sensitivity" is way too low: the algorithm misses some smaller color spots. Is there a way to calibrate this? Thanks!

edit: [input image]

[output images 1–4, one per color mask]

  • It would help us to understand the problem better if you can also add your input/output images. Commented Jan 16, 2017 at 20:25

1 Answer


The inRange function does not have a built-in sensitivity; it only compares values, with a hard cutoff at both ends. inRange(x, 10, 20) will only keep values in {10, 11, ..., 20}.

One way to overcome this is to introduce your own sensitivity measure.

s = 5  # for example: widen each boundary by 5 (pixel values lie in [0, 255])

for i, (lower, upper) in enumerate(boundaries):

    lower = np.array([color-s if color-s>-1 else 0 for color in lower], dtype="uint8")
    upper = np.array([color+s if color+s<256 else 255 for color in upper], dtype="uint8")

    mask = cv2.inRange(image, lower, upper)
    output = cv2.bitwise_and(image, image, mask=mask)

    cv2.imwrite(str(i) + 'image.jpg', output)

Or you can smooth the image beforehand to get rid of such noisy pixels. Smoothing pulls neighboring pixel values closer together, so pixels just outside the boundary may move back into the range.


3 Comments

  • But I tried everything from sensitivity 0.01 to 200 and it doesn't get any better.
  • Everything after sensitivity = 20 is black.
  • Yes, because your boundaries go beyond 0 and/or 255 if the sensitivity is high. You can check whether the sensitivity is pushing the boundaries beyond the actual possible pixel values by adding conditionals. I am updating the code for that.
