
Let's say I have the following numpy array, of 1's and 0's exclusively:

import numpy as np

example = np.array([0,1,1,0,1,0,0,1,1], dtype=np.uint8)

I want to group all elements into chunks of 3, and replace the chunks by a single value, based on a condition. Let's say I want [0,1,1] to become 5, and [0,1,0] to become 10. Thus the desired output would be:

[5,10,5]

All possible combinations of 1's and 0's in a chunk have a corresponding unique value that should replace the chunk. What's the fastest way to do this?

  • Maybe you can have a dict that maps the 8 possible values of 0s and 1s to the arbitrary numbers you want, and then you translate based on reading the array three indices at a time and getting the value from the dict Commented Jan 19, 2022 at 20:07
  • I could, but this would be too slow for my use case. I'm looking for a vectorized way to achieve this. Commented Jan 19, 2022 at 20:16
  • np.packbits does what you want Commented Jan 20, 2022 at 7:43

3 Answers


I suggest reshaping your array into an n-by-3 array. Each row can then be read as a binary number that serves as an index into a list of the values you want: convert each row to that number and index into the values.

arr = np.array([0,1,1,0,1,0,0,1,1], dtype=np.uint8).reshape(-1,3)

idx = 2**0*arr[:,0]+2**1*arr[:,1]+2**2*arr[:,2]

values = np.zeros(2**3)
values[0 *2**0+ 1 *2**1+ 1 *2**2] = 5
values[0 *2**0+ 1 *2**1+ 0 *2**2] = 10

values[idx]

this gives

array([ 5., 10.,  5.])

Or if you prefer to write the conversion more succinctly though a bit less elementary (thanks to @mozway for the idea):

def bin_vect_to_int(arr):
    bin_units = 2**np.arange(arr.shape[1])
    return np.dot(arr,bin_units)


arr = np.array([0,1,1,0,1,0,0,1,1,0,1,1], dtype=np.uint8).reshape(-1,3)
idx = bin_vect_to_int(arr)

values = np.zeros(2**3)
values[bin_vect_to_int(np.array([[0,1,1]]))] = 5
values[bin_vect_to_int(np.array([[0,1,0]]))] = 10

values[idx]
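The same pattern extends to any chunk width. Below is a sketch assuming a hypothetical width of 4; the two pattern/value pairs are made up for illustration:

```python
import numpy as np

def bin_vect_to_int(arr):
    # little-endian: column j contributes 2**j
    bin_units = 2 ** np.arange(arr.shape[1])
    return np.dot(arr, bin_units)

N = 4  # hypothetical chunk width
arr = np.array([0, 1, 1, 0, 1, 0, 0, 1], dtype=np.uint8).reshape(-1, N)

values = np.zeros(2 ** N)
values[bin_vect_to_int(np.array([[0, 1, 1, 0]]))] = 5   # made-up pair
values[bin_vect_to_int(np.array([[1, 0, 0, 1]]))] = 10  # made-up pair

print(values[bin_vect_to_int(arr)])  # [ 5. 10.]
```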

6 Comments

Nice (+1), simpler method to compute the binary: (example.reshape(3,-1)*2**np.arange(3)).sum(1)
@mozway well when you're already multiplying and summing up you might as well use np.dot no?
Yes very true, you can do np.dot(example.reshape(3,-1),2**np.arange(3)) ;)
@mozway well I am
I see that (sorry, small screen, on phone)

As the other answers indicated, you can start by reshaping your array (actually, you should probably generate it with the correct shape to begin with, but that's another issue):

example = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1], dtype=np.uint8)
data = example.reshape(-1, 3)

Now running a custom python function over the array is going to be slow, but luckily numpy has your back here. You can use np.packbits to transform each row into a number directly:

data = np.packbits(data, axis=1, bitorder='little').ravel() # [6, 2, 6]

If you wanted each chunk to map to its own little-endian value (e.g. [1, 0, 1] → 5 and [0, 1, 1] → 6), your job is done. Otherwise, you will need to come up with a mapping. Since you have three bits, you only need 8 numbers in the mapping array:

mapping = np.array([7, 4, 10, 8, 124, 1, 5, 0])

You can use data as an index directly into mapping. The output will have the type of mapping but the shape of data. With index 2 ([0, 1, 0]) mapped to 10 and index 6 ([0, 1, 1]) mapped to 5, this reproduces the question's desired output (the remaining entries are arbitrary):

result = mapping[data]  # [5, 10, 5]

You can do this in one line:

mapping[np.packbits(example.reshape(-1, 3), axis=1, bitorder='little').ravel()]
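Putting the pieces together for the question's exact mapping ([0, 1, 1] → 5 and [0, 1, 0] → 10), here is a sketch in which the six unused table entries are left at a placeholder value of 0:

```python
import numpy as np

example = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1], dtype=np.uint8)

# little-endian packbits index of each pattern: [0,1,1] -> 6, [0,1,0] -> 2
mapping = np.zeros(8, dtype=np.int64)
mapping[np.packbits([[0, 1, 1]], axis=1, bitorder='little').item()] = 5
mapping[np.packbits([[0, 1, 0]], axis=1, bitorder='little').item()] = 10

result = mapping[np.packbits(example.reshape(-1, 3), axis=1, bitorder='little').ravel()]
print(result)  # [ 5 10  5]
```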



You can use a contiguous array view of shape (-1, 3), find the locations of the unique rows, and substitute the mapped values at those locations:

def view_ascontiguous(a): # a is array
    a = np.ascontiguousarray(a)
    void_dt = np.dtype((np.void, a.dtype.itemsize * a.shape[1]))
    return a.view(void_dt).ravel()

def replace(x, args, subs, viewer):
    u, inv = np.unique(viewer(x), return_inverse=True)
    idx = np.searchsorted(viewer(args), u)
    return subs[idx][inv]

>>> replace(x=np.array([1, 0, 1, 0, 0, 1, 1, 0, 1]).reshape(-1, 3),
        args=np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]),
        subs=np.array([ 5, 57, 58, 44, 67, 17, 77,  1]),
        viewer=view_ascontiguous)
array([17, 57, 17])

Note that here idx gives the locations of the unique contiguous blocks of length N within the set {0, 1}^N of all possible binary tuples.

If viewer(args) already maps args to [0, 1, 2, 3, ...], you can replace the viewer(args) argument of np.searchsorted with np.arange(len(args)) to improve performance.
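Taking that note one step further (a sketch with a hypothetical replace_direct helper): when the viewer already produces the indices 0 .. len(subs)-1, the unique/searchsorted machinery can be bypassed entirely and the replacement collapses to direct fancy indexing:

```python
import numpy as np

def replace_direct(x, subs, viewer):
    # assumes the viewer maps every possible block straight to its
    # position 0 .. len(subs)-1, so no searchsorted step is needed
    return subs[viewer(x)]

x = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1]).reshape(-1, 3)
subs = np.array([5, 57, 58, 44, 67, 17, 77, 1])

print(replace_direct(x, subs, viewer=lambda a: np.dot(a, [4, 2, 1])))
# [17 57 17]
```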

This algorithm could be also used in more general problem:


You are given an array of dtype=np.uint8 containing M*N values that are 0 or 1. You are also given a mapping between {0, 1}^N (all possible blocks of length N of 0s and 1s) and some scalar values. Find an array of M values following these steps:

  • Split the given array into M contiguous blocks of length N
  • Replace each block with a scalar value using the given mapping

Now, the fun part: you can use your own viewer. The only requirement is that it maps the array you pass in args to some kind of ascending indices, like so:

viewer=lambda arr: np.ravel_multi_index(arr.T, (2,2,2)) #0, 1, 2, 3, 4, 5, 6, 7
viewer=lambda arr: np.sum(arr * [4, 2, 1], axis=1) #0, 1, 2, 3, 4, 5, 6, 7
viewer=lambda arr: np.dot(arr, [4, 2, 1]) #0, 1, 2, 3, 4, 5, 6, 7

Or even more fun:

viewer=lambda arr: 2*np.dot(arr, [4, 2, 1]) + 1 #1, 3, 5, 7, 9, 11, 13, 15
viewer=lambda arr: np.vectorize(chr)(97+np.dot(arr, [4, 2, 1])) #a b c d e f g h

This works because you could also map

[[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]

to any ascending sequence you can think of, like [1, 3, 5, 7, 9, 11, 13, 15] or ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'], and the result remains the same.
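As a quick sanity check, a self-contained sketch reusing the replace function from above shows that the plain dot-product viewer reproduces the result of the void-view version:

```python
import numpy as np

def replace(x, args, subs, viewer):
    u, inv = np.unique(viewer(x), return_inverse=True)
    idx = np.searchsorted(viewer(args), u)
    return subs[idx][inv]

args = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1],
                 [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
subs = np.array([5, 57, 58, 44, 67, 17, 77, 1])
x = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1]).reshape(-1, 3)

print(replace(x, args, subs, viewer=lambda a: np.dot(a, [4, 2, 1])))
# [17 57 17]
```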

Further notes

Thanks @MadPhysicist,

np.packbits(example.reshape(-1, N), axis=1, bitorder='little').ravel()

also does the trick. It's pretending to be the fastest solution since np.packbits is optimised quite well in numpy.

12 Comments

@mathfux really "It's pretending to be the fastest solution"? So it's not actually the fastest? I think this should be rephrased. But it's a cool implementation of the idea.
I think we have slightly different premises. You solve the general problem of mapping possible patterns (which really don't have to be 0-1 at that point) to some other arbitrary quantity. I assume that all combinations of 0 and 1 are possible, which A) limits N to 64 on most systems, and B) implies that the user must provide the full 2**N-sized mapping of outputs. My solution is faster because it solves a much simpler problem. You use sorting because numpy does not support hash tables; I use a couple of linear passes over the array.
As far as complexity goes, I would solve the problem you are solving using a hash table. Practically, it would always be slower in python, but complexity wise it would offer an O(n) solution where you are forced to use O(n log n) by the limitations of numpy.
@mathfux. Keep in mind that np.unique is np.sort, np.diff, np.flatnonzero wearing a trenchcoat.
@mathfux. It may not be literally that. The important part is that it's sort-based, which makes it O(n log n). The steps after that are equivalent to np.diff, np.flatnonzero, but they may be implemented in C. I never checked, because it's not important what the O(n) portions are doing.
