I'm running some simulations that involve repeatedly comparing values in 2D NumPy arrays with their 'neighbours'; e.g. the value at index (y, x) is compared to the value at index (y-1, x) in the same array.
At the moment I am using functions like this:
import numpy as np

# example of the typical size of the arrays
my_array = np.empty((500, 500))
shapey, shapex = my_array.shape
Yshape = (1, shapex)
Yzeros = np.zeros((1, shapex))

def syf(A, E=True):
    # shift A up one row; fill the vacated last row with the
    # edge value (E=True) or with zeros (E=False)
    if E:
        return np.concatenate((A[1:], A[-1].reshape(Yshape)), axis=0)
    else:
        return np.concatenate((A[1:], Yzeros), axis=0)

shifted_array = syf(my_array)
difference_in_y = shifted_array - my_array
This has the option to use either the edge values or zeros for the comparison at the edge of the array. My functions can also shift in either direction along either axis.
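To make the two edge modes concrete, here is a small demo of `syf` on a 3x3 array (a minimal sketch using the function defined above):

```python
import numpy as np

A = np.arange(9, dtype=float).reshape(3, 3)
shapey, shapex = A.shape
Yshape = (1, shapex)
Yzeros = np.zeros((1, shapex))

def syf(A, E=True):
    # shift A up one row; fill the vacated last row with the
    # edge value (E=True) or with zeros (E=False)
    if E:
        return np.concatenate((A[1:], A[-1].reshape(Yshape)), axis=0)
    else:
        return np.concatenate((A[1:], Yzeros), axis=0)

print(syf(A, E=True)[-1])   # edge mode: last row repeated -> [6. 7. 8.]
print(syf(A, E=False)[-1])  # zero mode: last row zeroed   -> [0. 0. 0.]
```

With edge padding, `shifted - A` is zero along the edge row, so the boundary contributes no difference.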
Does anybody have any suggestions for a faster way to do this?
I've tried np.roll (much slower) and this:

yf = np.ix_(list(range(1, shapey)) + [shapey - 1], range(shapex))
shifted_array = my_array[yf]

which is a little slower.
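An equivalent way to build that row index is to clamp a shifted `arange` with `np.clip` and feed it to `np.take`, which accepts an index array directly (a sketch only, not benchmarked here):

```python
import numpy as np

my_array = np.arange(12, dtype=float).reshape(4, 3)
shapey = my_array.shape[0]

# row indices shifted up by one, clamped so the last row repeats itself
idx = np.clip(np.arange(1, shapey + 1), 0, shapey - 1)
shifted_array = np.take(my_array, idx, axis=0)  # same result as my_array[idx]
```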
These functions are called ~200 times a second in a program that takes 10 hours to run, so any small speedups are more than welcome!
Thanks.
EDIT:
So if the same differencing is required every time the shift function is called, then Divakar's method below seems to offer a minor speedup; however, if just a shifted array is required, both that method and the one I use above seem to be equally fast.
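For the case where only the difference is needed, one option (a sketch, not from the original post) is to subtract slices directly into a preallocated output, which avoids materializing a full shifted copy:

```python
import numpy as np

my_array = np.arange(12, dtype=float).reshape(4, 3)

# difference_in_y[y] = my_array[y+1] - my_array[y], with the
# edge-value convention (last row compared with itself -> zeros)
difference_in_y = np.empty_like(my_array)
np.subtract(my_array[1:], my_array[:-1], out=difference_in_y[:-1])
difference_in_y[-1] = 0.0  # edge row: A[-1] - A[-1]
```

Writing into `out=` reuses the preallocated buffer across the ~200 calls per second instead of allocating a new 500x500 array each time.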
Comments:
I tried scipy.ndimage.convolve1d, but for this case (a very short filter) it's actually ~2x slower than your current approach.
As for np.roll: it generates an index like your yf, and then uses take. If roll uses np.ix_ indexing then there probably isn't a faster alternative, though.