10

I want to do something like this:

a = ...     # multi-dimensional numpy array
ares = ...  # multi-dim array, same shape as a
a.shape
>>> (45, 72, 37, 24)  # the relevant point is that all dimensions are different
v = ...     # 1D numpy array, i.e. a vector
v.shape
>>> (37,)  # note that v has the same length as the 3rd dimension of a
for i in range(37):
    ares[:, :, i, :] = a[:, :, i, :] * v[i]

I'm thinking there has to be a more compact way to do this with numpy, but I haven't figured it out. I could replicate v and then compute a*v, but I'm guessing there is something better than that too. In short, I need element-wise multiplication "along a given axis", so to speak. Does anyone know how I can do this? Thanks. (BTW, I did find a close duplicate question, but because of the nature of the OP's particular problem there, the discussion was very short and got sidetracked into other issues.)

4 Answers

7

Here is one more:

b = a * v.reshape(-1, 1)

IMHO, this is more readable than the transpose or einsum versions, and maybe even than v[:, None], but pick the one that suits your style.
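A quick sanity check that the reshape broadcast matches the question's explicit loop, using smaller stand-in shapes for the original (45, 72, 37, 24) array:

```python
import numpy as np

# Smaller stand-in shapes for the question's (45, 72, 37, 24) array
a = np.random.random((4, 5, 3, 6))
v = np.random.random(3)  # length matches a.shape[2]

# The explicit loop from the question
ares = np.empty_like(a)
for i in range(len(v)):
    ares[:, :, i, :] = a[:, :, i, :] * v[i]

# v.reshape(-1, 1) has shape (3, 1), which broadcasts
# against the trailing (3, 6) axes of a
b = a * v.reshape(-1, 1)

print(np.allclose(ares, b))  # True
```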



5

NumPy broadcasting automatically lines a vector up against the last axis of an array. So, you can transpose the array to move the axis you want to the end, multiply, then transpose it back:

ares = (a.transpose(0,1,3,2) * v).transpose(0,1,3,2)

5 Comments

+1 This is probably the better way to implement a function with an axis argument. I would probably let swapaxes or rollaxis figure out the whole permutation of the axes, but either one ends up calling transpose anyway.
@Jaime: I tend to only use swapaxes when I have dynamic axes in a variable, but I think that's really just prejudice on my part; np.swapaxes(a,3,2) isn't any less readable or explicit than a.transpose(0,1,3,2). As for rollaxis, it's a little less obvious how to reverse that one after you're done, but still not too hard to follow.
My own prejudice is that if I know all the axes beforehand, I would go with something like DSM's solution above. And if I don't know them, i.e. if I have to generate the whole thing dynamically, it is a pain to write out the whole transposition of the axes, so I'd use one of the other functions.
Oh, and another comment! Rather than transposing back after the operation, which leaves you with a non-contiguous array, I would do something like ares = np.empty_like(a); ares_view = ares.transpose(0, 1, 3, 2); ares_view[:] = a.transpose(0, 1, 3, 2) * v, which should leave the same result in ares, only contiguous.
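The trick from the comment above can be sketched like this (smaller shapes for brevity): writing through a transposed view of a freshly allocated array leaves the result contiguous, whereas transposing back after the multiplication would not:

```python
import numpy as np

a = np.random.random((4, 5, 3, 6))  # stand-in shape; axis 2 matches len(v)
v = np.random.random(3)

ares = np.empty_like(a)                      # C-contiguous destination
ares_view = ares.transpose(0, 1, 3, 2)       # view with axis 2 moved last
ares_view[:] = a.transpose(0, 1, 3, 2) * v   # broadcast v against the last axis

print(ares.flags['C_CONTIGUOUS'])  # True
print(np.allclose(ares, a * v[:, None]))  # True
```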
Thanks to everyone for the answers. I got many more than I expected. I need a solution for arrays with a variety of dimensions, not just 4, and this one seems to fit that bill the best. Thanks to all, though!
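Since the OP needs this for arrays with a variety of dimensions, the swapaxes idea from the comments can be wrapped in a small helper (multiply_along_axis is a made-up name, not a NumPy function):

```python
import numpy as np

def multiply_along_axis(a, v, axis):
    """Scale `a` by vector `v` along `axis` (hypothetical helper name)."""
    # Swap the target axis to the end, broadcast against v, swap back
    return np.swapaxes(np.swapaxes(a, axis, -1) * v, axis, -1)

a = np.random.random((2, 4, 3, 5, 6))  # works for any number of dimensions
v = np.random.random(3)
out = multiply_along_axis(a, v, axis=2)
print(np.allclose(out, a * v[:, None, None]))  # True
```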
5

You can do this with Einstein summation notation using numpy's einsum function:

ares = np.einsum('ijkl,k->ijkl', a, v)

1 Comment

I like this. Very elegant. But it looks like einsum has a bit of a learning curve, and I need to get my project developed post haste. Thanks much. Will keep it in mind for the future.
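If the number of dimensions isn't fixed in advance, the einsum subscripts can also be built dynamically; a sketch, where scale_axis_einsum is a made-up helper name:

```python
import numpy as np

def scale_axis_einsum(a, v, axis):
    """Build subscripts like 'ijkl,k->ijkl' for any ndim (hypothetical)."""
    letters = 'abcdefghijklmnopqrstuvwxyz'[:a.ndim]
    return np.einsum(f'{letters},{letters[axis]}->{letters}', a, v)

a = np.random.random((4, 5, 3, 6))
v = np.random.random(3)
print(np.allclose(scale_axis_einsum(a, v, 2), a * v[:, None]))  # True
```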
4

I tend to do something like

b = a * v[None, None, :, None]

where I think I'm officially supposed to write np.newaxis instead of None.

For example:

>>> import numpy as np
>>> a0 = np.random.random((45,72,37,24))
>>> a = a0.copy()
>>> v = np.random.random(37)
>>> for i in range(len(v)):
...     a[:,:,i,:] *= v[i]
...     
>>> b = a0 * v[None,None,:,None]
>>> 
>>> np.allclose(a,b)
True

1 Comment

Actually a * v[:, None] is sufficient, since broadcasting aligns shapes starting from the last axis and working backward.
