In the code below, I have a simple for-loop that I'd like to replace with, hopefully, a faster vectorized numpy operation.
import numpy as np
b = np.array([9,8100,-60,7], dtype=np.float64)
a = np.array([584,-11,23,79,1001,0,-19], dtype=np.float64)
m = 3
n = b.shape[0]
l = n-m+1
k = a.shape[0]-m+1
QT = np.array([-85224., 181461., 580047., 8108811., 10149.])
QT_first = QT.copy()
out = [None] * l
for i in range(1, l):
    QT[1:] = QT[:k-1] - b[i-1]*a[:k-1] + b[i-1+m]*a[-(k-1):]
    QT[0] = QT_first[i]
    # Update: Use this QT to do something with the ith element of array x
    # As i updates in each iteration, QT changes
    # (b_mean, b_stddev, and a_stddev are precomputed and provided elsewhere, not shown here)
    out[i] = np.argmin((QT + b_mean[i] * m) / (b_stddev[i] * m * a_stddev))
return out  # this loop sits inside a function in the full code
I need a solution that is general enough to handle much longer arrays. Note that QT depends on m and the length of b, and it will always be provided.
How can I replace the for-loop with a numpy vectorized operation so that it is faster?
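In case it is useful context, the example QT above matches the sliding dot product of b[:m] against every length-m window of a. Here is a minimal sketch of that relationship, purely for illustration (in my use case QT is always provided, so this is not part of the code to be optimized):

import numpy as np

b = np.array([9, 8100, -60, 7], dtype=np.float64)
a = np.array([584, -11, 23, 79, 1001, 0, -19], dtype=np.float64)
m = 3
k = a.shape[0] - m + 1

# Dot product of the first length-m window of b against every length-m window of a
QT = np.array([np.dot(b[:m], a[j:j + m]) for j in range(k)])
# Equivalently: QT = np.convolve(a, b[:m][::-1], mode='valid')
# Both give [-85224., 181461., 580047., 8108811., 10149.]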
Update
I modified the original code to demonstrate more clearly why a convolution does not solve my problem. A convolution only gives me the final QT, but I actually need to use each intermediate QT for another calculation before updating it for the next iteration of the for-loop.
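To spell out that dependency, here is the same loop rewritten with a hypothetical QT_all array (illustration only, reusing the variables defined above) whose row i holds the intermediate QT used at iteration i. Each row is built from the previous row, and out[i] must be computed from row i before it is overwritten, so any approach that only produces the final row is not enough:

import numpy as np

# QT_all is hypothetical and only illustrates the data dependency; my real code
# overwrites QT in place instead of storing every row.
QT_all = np.empty((l, k))
QT_all[0] = QT_first  # row 0 is the provided QT
for i in range(1, l):
    # Row i depends on row i-1, so the rows cannot all be computed up front
    QT_all[i, 1:] = QT_all[i-1, :k-1] - b[i-1]*a[:k-1] + b[i-1+m]*a[-(k-1):]
    QT_all[i, 0] = QT_first[i]
    # out[i] is then computed from QT_all[i] (the argmin expression in the loop above)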