
I'm trying to create a mask. I have a database of images similar to this one.

INPUT IMAGE

CODE

import cv2
import numpy as np
img = cv2.imread('sample1.png', cv2.IMREAD_UNCHANGED)

gray = cv2.cvtColor(img,cv2.COLOR_RGB2GRAY)
#img_ = cv2.threshold(gray,100,225,cv2.THRESH_BINARY)
edges = cv2.Canny(gray, 250, 250)
cv2.imwrite('output.png',edges)

OUTPUT

How can I remove the inner border and fill it with white?

Result I Want

1 Answer


Well, there are many ways to do that. All of them need some tuning, depending on your image. There is, for example, a floodFill function in OpenCV.
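For reference, here is a rough sketch of how that floodFill route could look. It is not the approach developed below; the file name comes from the question, and the small dilation to close gaps is an assumption on my part:

import cv2
import numpy as np

img = cv2.imread('sample1.png', cv2.IMREAD_UNCHANGED)
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 250, 250)

# Thicken the contour a little so the flood cannot leak through small gaps
closed = cv2.dilate((edges > 0).astype(np.uint8) * 255, np.ones((3, 3), np.uint8))

# floodFill needs a mask 2 pixels larger than the image
h, w = closed.shape
mask = np.zeros((h + 2, w + 2), np.uint8)
flooded = closed.copy()
cv2.floodFill(flooded, mask, (0, 0), 255)  # fill the outside, starting from corner (0,0)

# The outside is now white in flooded; invert it and add the contour itself back
filled = cv2.bitwise_not(flooded) | closed
cv2.imwrite('floodfill_output.png', filled)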

But the easiest is probably to use some mathematical morphology and then connected components, because from the connected components it is easier to adjust the result if needed.

We can start with a binary version of your edges:

binedge=(edges>0).astype(np.uint8)

binary version (×255) of edges

Once this is done, since there are "holes" in it, we need to fill those holes so that the edges strictly separate the inside from the outside. This can be done with a dilation:

ker=np.ones((3,3))
fatedge=cv2.dilate(binedge, ker)

same, but fatter

Then, we want to find the inside. That is not easy, because there might be many separate parts inside. So the easiest way is probably to find the outside and invert it. Though there could also be several outside parts, if the character touches the border in different places. So, let's start by finding all connected black parts of this picture:

n,comp=cv2.connectedComponents((fatedge==0).astype(np.uint8))

comp here is an image whose values are the indices of the connected components, shown here with random colors for each index. Connected components: each component has a different index, hence a different color.
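(If you want to reproduce that kind of visualization, one possible way, not part of the original answer, is to map each label of comp to a random color; the palette below is arbitrary:)

rng = np.random.default_rng(0)
palette = rng.integers(0, 256, size=(n, 3), dtype=np.uint8)  # one arbitrary color per label
colored = palette[comp]                                      # (H, W, 3) image, one color per component
cv2.imwrite('components.png', colored)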

Let's assume that the outside is connected and that (0,0) is in it (it is almost always the case, and it is here; if not, you'll have to find a more complex criterion, such as "the biggest component", or even merge different parts; a sketch of that fallback is shown a bit further down). The component we are interested in is the one that contains (0,0), that is, the pixels of comp that have the same value as comp[0,0]. And in fact, what we are interested in is the opposite of that: what is inside. We compute the outside only because it is easier. The inside is what is not outside, that is, the pixels that are != comp[0,0].

filled=(comp!=comp[0,0]).astype(np.uint8)

Almost there
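As a side note, if (0,0) ever fell inside the character, a possible fallback (a sketch under that assumption, not something used in the rest of this answer) would be to take the largest black component as the outside, using connectedComponentsWithStats:

n, comp, stats, _ = cv2.connectedComponentsWithStats((fatedge == 0).astype(np.uint8))
outside = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # skip label 0, which is the edges themselves
filled = (comp != outside).astype(np.uint8)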

Last stage (not really necessary from an aesthetic point of view, but strictly speaking it is needed): since we dilated the edges at the beginning, this picture is a few pixels bigger than it should be. We can erode it back now that we have what we want:

output=cv2.erode(filled, ker)*255
cv2.imwrite('output.png',output)

Result

So, all together

import cv2
import numpy as np
img = cv2.imread('Downloads/93Lwd.png', cv2.IMREAD_UNCHANGED)

gray = cv2.cvtColor(img,cv2.COLOR_RGB2GRAY)
#img_ = cv2.threshold(gray,100,225,cv2.THRESH_BINARY)
edges = cv2.Canny(gray, 250, 250)
# Binarize edges
binedge=(edges>0).astype(np.uint8)
# Removing edges too close from left and right borders
binedge[:,:20]=0
binedge[:,-20:]=0
# Fatten them so that there is no hole
ker=np.ones((3,3))
fatedge=cv2.dilate(binedge, ker)
# Find connected black areas
n,comp=cv2.connectedComponents((fatedge==0).astype(np.uint8))
# comp is an image whose each value is the index of the connected component
# Assuming that point (0,0) is outside, not inside the character, the outside is where comp == comp[0,0]
# So the character is where it is not
# Or, variant for the new image: consider as "outside" any part that touches the left, right, or top border
# Note: that is redundant with the previous zeroing of the left and right borders
# Set of all components touching the left, right or top border
listOutside=set(comp[:,0]).union(comp[:,-1]).union(comp[0,:])
if 0 in listOutside: listOutside.remove(0) # label 0 is the edge lines themselves, i.e. what is False in fatedge==0
filled=(~np.isin(comp, list(listOutside))).astype(np.uint8) # isin needs an array or list, not a set

# Just to be extra accurate, since we dilated the edges, we can now erode the result
output=cv2.erode(filled, ker)
cv2.imwrite('output.png',output*255)

Comments

Sir @chrslg, it works perfectly with this image, but there is a whole database, and it doesn't work on all of those images. For example, I tried to test it on this image i.sstatic.net/BomOo.png but it gives a completely black output. I have something in mind but don't know how to apply it: other than filling the inside, fill the background with white and invert it, or something like image segmentation.
Well, that part is on you. I acted only after edge detection. Your edge detection contains rounded lines in the corners. The algorithm cannot guess why you would ignore those and not the others.
Plus, I don't know your database, so I can't really suggest a universal criterion for it (there isn't a universal one for all images, for sure).
So you could remove from your edge detection anything too close to the borders or corners. That is something you have to decide from your images. You could also, instead of keeping all pixels that are different from the (0,0) point (which doesn't give a blank image, by the way; you still have a black rounded corner...), keep all pixels that are different from anything in the left, right and top borders. But that also is an arbitrary decision, based on the 2 images I have seen so far.
There can't be a magic criterion that I could give you without seeing all the images. A computer has no way to guess that you want to take into account the borders of the monkey, but that the rounded corners included in your edge detection are not part of it. Even futuristic deep-learning AI couldn't: it can't guess what you want.
