I have a JPG picture of a face. I need to access the picture pixel by pixel (i.e. know the value at each pixel) and then use some sort of DFS/flood fill to change the background color.
from PIL import Image
import numpy as np
image = Image.open("pic.jpg")
image = np.array(image)
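To show what I am working with, I also print the shape and a single pixel (just inspecting values, nothing fancy):

print(image.shape)   # gives (473, 354, 3)
print(image[0, 0])   # gives an array of 3 numbers (one per channel) for that one pixel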
First of all, why is the shape of the array (473, 354, 3)? It doesn't make sense to me.
When I do
import matplotlib.pyplot as plt
plt.imshow(image.reshape(473, -1))
plt.show()
I get a picture that looks like the following, which consists of only red, blue, and yellow colors (and mixtures of the three?).
This means that the values in the array are not what I can reliably use to make my edge detection decisions.
Why is this happening, and what should I do?
I want the pixel values to reflect the true colors of the original image, not the distorted colors shown above.
The background in the actual picture is more or less white, and I want the background (and every other pixel) to keep its true value, so I can implement my algorithm, roughly along the lines of the sketch below.
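To show what I am aiming for, here is a rough sketch of the flood fill I plan to write once the pixel values make sense. The function name, the brightness threshold of 200, and the assumption that the top-left corner belongs to the background are all just placeholders, not working code yet:

import numpy as np

def change_background(image, new_color=(0, 255, 0), threshold=200):
    # Iterative DFS flood fill starting from the top-left corner,
    # which I assume is part of the near-white background.
    h, w, _ = image.shape
    visited = np.zeros((h, w), dtype=bool)
    stack = [(0, 0)]
    while stack:
        r, c = stack.pop()
        if r < 0 or r >= h or c < 0 or c >= w or visited[r, c]:
            continue
        visited[r, c] = True
        # Treat a pixel as background only if all three RGB channels are bright enough.
        if not np.all(image[r, c] >= threshold):
            continue
        image[r, c] = new_color
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return image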
