I am trying to find and separate all edges in an edge-detected image using Python and OpenCV. The edges may form closed contours, but they don't have to; I just want all connected edge pixels to be grouped together. Procedurally, the algorithm would look like this:

  1. For each edge pixel, find a neighbouring (connected) edge pixel and add it to the current subdivision of the image, until no more can be found.
  2. Then move on to the next unchecked edge pixel, start a new subdivision, and repeat step 1.
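The two steps above amount to connected-component labeling of the edge pixels. A minimal sketch of that idea in plain Python/NumPy, as a BFS flood fill over 8-connected neighbours (the function name `label_edges` and the connectivity choice are illustrative, not from any OpenCV API):

```python
import numpy as np
from collections import deque

def label_edges(edge_img):
    """Group 8-connected nonzero pixels of a binary image.
    Returns a label image (0 = background, 1..n = groups) and n."""
    h, w = edge_img.shape
    labels = np.zeros((h, w), np.int32)
    n = 0
    for y in range(h):
        for x in range(w):
            if edge_img[y, x] and labels[y, x] == 0:
                # Unlabeled edge pixel: start a new group and flood-fill it
                n += 1
                labels[y, x] = n
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and edge_img[ny, nx]
                                    and labels[ny, nx] == 0):
                                labels[ny, nx] = n
                                queue.append((ny, nx))
    return labels, n
```

This is slow in pure Python for large images, but it is exactly the grouping described above and has no dependencies beyond NumPy.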

I have looked through cv2.findContours, but the results weren't satisfying, maybe because it was intended for closed contours rather than free-ended edges. Here are the results:

Original Edge Detected:

After Contour Processing:

I expected each of the five edges to be grouped into its own subdivision of the image, but apparently cv2.findContours breaks two of the edges into further subdivisions, which I don't want.

Here is the code I used to save these 2 images:

    def contourForming(imgData):
      cv2.imshow('Edge', imgData)
      cv2.imwrite('EdgeOriginal.png', imgData)
      # OpenCV 2 returns a (contours, hierarchy) tuple
      contours, hierarchy = cv2.findContours(imgData, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      cv2.imshow('Contours', imgData)
      cv2.imwrite('AfterFindContour.png', imgData)
      cv2.waitKey(0)

There are restrictions on my implementation, however: I have to use Python 2.7 and OpenCV 2, and I cannot use any other versions or languages. I mention this because I know OpenCV 2 has a connected-components function, but only in C++; I could have used that, but due to these limitations I cannot.

So, any idea how I should approach the problem?

  • you could try to cv::dilate the edges before contour extraction, but that might alter the result – Commented Nov 23, 2015 at 9:40

1 Answer

Using findContours is the correct approach, you're simply doing it wrong.

Take a closer look at the documentation:

Note: Source image is modified by this function.

Your "After Contour Processing" image is in fact the garbage left over by findContours, which modified your input in place. Because of this, if you want the original image to remain intact after the call to findContours, it's common practice to pass a cloned image to the function.

The meaningful result of findContours is in contours. You need to draw them using drawContours, usually on a new image.

This is the result I get:

(result image: each contour drawn in its own random color)

with the following C++ code:

#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char** argv)
{
    // Load the grayscale image
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Prepare the result image, 3 channel, same size as img, all black
    Mat3b res(img.rows, img.cols, Vec3b(0,0,0));

    // Call findContours
    vector<vector<Point>> contours;
    findContours(img.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);

    // Draw each contour with a random color
    for (int i = 0; i < contours.size(); ++i)
    {
        drawContours(res, contours, i, Scalar(rand() & 255, rand() & 255, rand() & 255));
    }

    // Show results
    imshow("Result", res);
    waitKey();

    return 0;
}

It should be fairly easy to port to Python (I'm sorry, but I can't give you Python code, since I cannot test it). You can also have a look at the official OpenCV-Python tutorial to check how to correctly use findContours and drawContours.

1 Comment

Thanks! I think it works now. I didn't know the documentation for OpenCV 3.0.0 that you gave (link) works for OpenCV 2 too.
