I have computed a segmentation of an image where every superpixel (region) is defined by the value of an entry in a 2D array of the same size as the image. I am trying to obtain a list of indices for each of the regions in order to perform per-region operations at a later time. Here is my current code:
import numpy as np

index_list = []
for i in range(num_superpixels):
    indices = np.where(superpixels == i)
    index_list.append(indices)
The following is a minimal example with a 3x3 input containing 3 regions. In practice I am working with 500-1000 superpixels obtained from 640x480 images, and things get very slow.
>>> superpixels
array([[0, 0, 2],
       [0, 0, 2],
       [1, 1, 2]])
>>> index_list
[(array([0, 0, 1, 1]), array([0, 1, 0, 1])),
 (array([2, 2]), array([0, 1])),
 (array([0, 1, 2]), array([2, 2, 2]))]
Since each region is a contiguous chunk in the 2D image (though not in memory), calling np.where in a loop is really inefficient: every iteration traverses all width*height entries just to find a region of only ~500 entries.
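For comparison, here is a sketch of the kind of single-pass alternative I have in mind (untested beyond the toy input): sort all flat pixel indices by label once, split the sorted order at label boundaries, and convert each group back to (row, col) index arrays. The variable names mirror the minimal example above.

```python
import numpy as np

superpixels = np.array([[0, 0, 2],
                        [0, 0, 2],
                        [1, 1, 2]])
num_superpixels = 3

# One stable sort of all flat indices by label, instead of one full scan per label.
flat = superpixels.ravel()
order = np.argsort(flat, kind='stable')

# Positions in the sorted order where each new label starts.
boundaries = np.searchsorted(flat[order], np.arange(1, num_superpixels))

# Split the sorted flat indices into one group per label,
# then convert each group back to (row_indices, col_indices).
groups = np.split(order, boundaries)
index_list = [np.unravel_index(g, superpixels.shape) for g in groups]
```

This does O(N log N) work once rather than O(N) work per label, so with 500-1000 labels it should be a large constant-factor win, and each entry of `index_list` has the same (rows, cols) shape that `np.where` returns.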
How do I speed this up?