I've got a 2-row array called C like this:
from numpy import *
A = array([1, 2, 3, 4, 5])
B = array([50, 40, 30, 20, 10])
C = vstack((A, B))
I want to take all the columns in C where the value in the first row falls between i and i+2, and average them. I can do this with just A no problem:
i = 0
A_avg = []
while i < 6:
    selection = A[logical_and(A >= i, A < i + 2)]
    A_avg.append(mean(selection))
    i += 2
then A_avg is:
[1.0,2.5,4.5]
I want to carry out the same process with my two-row array C, but I want to take the average of each row separately, while doing it in a way that's dictated by the first row. For example, for C, I want to end up with a 2 x 3 array that looks like:
[[ 1.0,  2.5,  4.5],
 [50.0, 35.0, 15.0]]
Where the first row is A averaged in blocks between i and i+2 as before, and the second row is B averaged in the same blocks as A, regardless of the values it has. So the first entry is unchanged, the next two get averaged together, and the next two get averaged together, for each row separately. Anyone know of a clever way to do this? Many thanks!
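One straightforward way to do what's described above (a sketch, not something stated in the question itself): build the boolean mask from the first row of C only, then apply that mask to all rows at once and take one mean per row for each block.

```python
import numpy as np

A = np.array([1, 2, 3, 4, 5])
B = np.array([50, 40, 30, 20, 10])
C = np.vstack((A, B))

i = 0
cols = []
while i < 6:
    # mask comes from the first row only, but selects whole columns of C
    mask = np.logical_and(C[0] >= i, C[0] < i + 2)
    cols.append(C[:, mask].mean(axis=1))  # one mean per row
    i += 2

result = np.column_stack(cols)
print(result)  # [[ 1.   2.5  4.5]
               #  [50.  35.  15. ]]
```

Because the mask is computed from `C[0]` alone, the second row is averaged over exactly the same blocks as the first, regardless of its own values.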
Comment: What does A look like? Is it truly an array of consecutive integers, or something much more general, with different intervals, and unsorted?

Reply: A is a big 1 x 96100 array of steadily increasing floats, which increases more slowly as you go down the array. B is a 1 x 96100 array of unsorted, very small numbers (e.g. 1.2367 * 10**(-22)). Would that be cause for the matrix multiplication method?

Follow-up: In this case it seems silly to build a matrix with information you could have applied to the vectors directly. As a matter of fact, this observation is probably general. The result will always be the same, no matter how small or big your numbers are. My initial question was more about data dimensions.
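For arrays of that size (1 x 96100), a fully vectorized sketch along the lines of "apply the information to the vectors directly" (my assumption, not code from the thread): compute a bin index for every column with np.digitize on the first row, then use np.bincount to sum and count each bin, with no Python loop over bins and no intermediate matrix.

```python
import numpy as np

# small example data standing in for the 1 x 96100 arrays
A = np.array([1, 2, 3, 4, 5], dtype=float)
B = np.array([50, 40, 30, 20, 10], dtype=float)
C = np.vstack((A, B))

edges = np.arange(0, 6, 2)          # bin edges 0, 2, 4 -> bins [0,2), [2,4), [4,6)
idx = np.digitize(C[0], edges) - 1  # bin index of each column, decided by row 0
counts = np.bincount(idx)           # number of columns in each bin

# per-row sums per bin, divided by the shared counts
result = np.vstack([np.bincount(idx, weights=row) / counts for row in C])
print(result)  # [[ 1.   2.5  4.5]
               #  [50.  35.  15. ]]
```

The magnitudes of B never enter the binning (only row 0 does), so values like 1.2367e-22 are averaged the same way as any others, consistent with the observation above that the result is the same no matter how small or big the numbers are.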