
I'm trying to determine if an image is "squared" (pixelated).

I've heard of the 2D Fourier transform with NumPy or SciPy, but it is a bit complicated.

The goal is to measure the amount of square blocking due to bad compression, like this (img a):

    This is just off the top of my head, so it's just a comment. Do you have access to the original, uncompressed image? If so you might try doing a color-count of the 2 images. If one is substantially lower than the other you almost certainly have pixelation / posterization happening. Commented Oct 17, 2012 at 20:24
  • I do not have access to the original image, sadly Commented Oct 17, 2012 at 20:33

2 Answers


I have no idea if this would work, but something you could try is to get the nearest neighbors around a pixel. The pixellated squares will show up as a visible jump in RGB values around a region.

You can find the nearest neighbors for every pixel in an image with something like

def get_neighbors(x, y, img):
    ops = [-1, 0, +1]
    pixels = []
    for opy in ops:
        for opx in ops:
            nx, ny = x + opx, y + opy
            # Bounds-check instead of a bare try/except: a negative
            # index would silently wrap to the opposite edge of the
            # image rather than raising.
            if 0 <= nx < img.shape[0] and 0 <= ny < img.shape[1]:
                pixels.append(img[nx][ny])
    return pixels

This will give you the nearest pixels in a region of your source image.

To use it, you'd do something like

import operator

import numpy as np
from scipy import misc  # misc.imread is removed in newer SciPy; use imageio.imread there


def detect_pixellated(fp):
    img = misc.imread(fp)
    width, height = np.shape(img)[0:2]

    # Pixel change to detect edge
    threshold = 20

    for x in range(width):
        for y in range(height):
            neighbors = get_neighbors(x, y, img)

            # Neighbors come in this order:
            #  6   7   8
            #  3   4   5
            #  0   1   2

            # Index 4 is the center only when the full 3x3 window
            # exists, i.e. for interior pixels.
            if len(neighbors) < 9:
                continue
            center = neighbors[4]
            del neighbors[4]

            for neighbor in neighbors:
                diffs = map(operator.abs, map(operator.sub, neighbor, center))
                possibleEdge = all(diff > threshold for diff in diffs)

After further thought though, use OpenCV and do edge detection and get contour sizes. That would be significantly easier and more robust.


2 Comments

the last paragraph "... use OpenCV and do edge detection and get contour sizes..." did help a lot! Thanks
I'm sorry for the very late question, but I found this post recently and I have the same exact problem. When you talk about contour sizes, do you mean using the contour area as mentioned here?

If you scan through lines of it, it's a bit easier, because you then deal with linear graphs instead of 2D image graphs, which is always simpler.

Solution:

Scan a line across the pixels, put the line in an array if that is faster to access for computations, and then run algorithms on the line(s) to determine the blockiness:

1/ Run through every pixel in your line and compare it to the previous pixel by subtracting the values of the two pixels; keep an array of these differences. If large jumps in pixel values occur at regular intervals, it's blocky. If there are large jumps in values combined with small jumps in values, it's blocky. You can assume that if there are many equal pixel differences, it's blocky, especially if you repeat the analysis at 2- and 4-neighbour pixel intervals, and on multiple lines.

You can also make graphs of the differences between pixels 3, 5, or 10 pixels apart, to get additional information on gradient changes along sampled lines of the pic. If the ratio of pixel differences between immediate neighbours and 5th-neighbour pixels is similar, it also indicates unsmooth colors.
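A hedged sketch of the scan-line idea above: diff consecutive pixels and check whether the large jumps land on a regular grid. The block size of 8 is my assumption (JPEG-style 8x8 blocks), not something the answer specifies, and the "large jump" threshold is a placeholder:

```python
import numpy as np


def row_blockiness(row, block=8):
    # Differences between consecutive pixels along one scan line.
    row = row.astype(np.int32)
    diffs = np.abs(np.diff(row))
    # "Large jumps": anything well above the typical difference.
    big = diffs > diffs.mean() + 2 * diffs.std()
    jump_positions = np.flatnonzero(big)
    if len(jump_positions) == 0:
        return 0.0
    # Fraction of large jumps sitting exactly on an assumed block boundary.
    on_grid = np.mod(jump_positions + 1, block) == 0
    return float(on_grid.mean())
```

A value near 1.0 means the sharp transitions all align with the assumed block grid; averaging over many rows (and columns) would make the measure more robust.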

There can be many algorithms, including a fast Fourier transform on a linear graph (same as audio) applied to line(s) from the pic, which is simpler than a 2D image algorithm.
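The 1-D Fourier idea can be sketched like this: a blocky row concentrates energy at the block frequency of its difference signal. Again, `block=8` is an assumed JPEG-style block size, and using the energy ratio at that frequency is my own way of turning the spectrum into a single score:

```python
import numpy as np


def line_spectrum_peak(row, block=8):
    # FFT of the pixel-difference signal of one scan line; a blocky
    # row puts energy at frequency 1/block.
    row = row.astype(np.float64)
    diffs = np.abs(np.diff(row))
    spectrum = np.abs(np.fft.rfft(diffs - diffs.mean()))
    freqs = np.fft.rfftfreq(len(diffs))
    # Energy at the bin nearest 1/block, relative to total energy.
    target = np.argmin(np.abs(freqs - 1.0 / block))
    return spectrum[target] / (spectrum.sum() + 1e-12)
```

Comparing this score between rows of a suspect image and rows of a known-clean image (or against a fixed threshold found empirically) would flag blocking.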

