Using numpy.unpackbits is faster if fp is a large NumPy array:
(np.unpackbits(fp.astype('>i4').view('4,uint8'), axis=1).T.astype(bool))
astype('>i4') converts fp to an array of big-endian 32-bit ints, and
view('4,uint8') reinterprets the bytes of each 32-bit int as four unsigned
8-bit ints. This is needed because np.unpackbits only accepts arrays of dtype
uint8. Big-endian order puts the most significant byte first, so np.unpackbits
returns the bits of each value from most significant to least significant --
the desired order.
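For example, here is how the bytes of the value 1 come out in each byte order
(a quick illustration, separate from the transcript below):

import numpy as np

one = np.array([1])
print(one.astype('>i4').view(np.uint8))   # [0 0 0 1] -- most significant byte first
print(one.astype('<i4').view(np.uint8))   # [1 0 0 0] -- least significant byte first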
In [280]: fp = np.array([-15707075, -284140225])
In [281]: fp.astype('>i4').view('4,uint8')
Out[281]:
array([[255,  16,  84,  61],
       [239,  16,  93,  63]], dtype=uint8)
In [282]: np.unpackbits(fp.astype('>i4').view('4,uint8'), axis=1)
Out[282]:
array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1,
        0, 0, 0, 0, 1, 1, 1, 1, 0, 1],
       [1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1,
        0, 1, 0, 0, 1, 1, 1, 1, 1, 1]], dtype=uint8)
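If the '4,uint8' subarray dtype string looks obscure, the same (2, 4) byte
array can be produced by viewing the data as a flat uint8 array and reshaping
-- a sketch of an equivalent formulation (it relies only on astype returning a
C-contiguous array):

import numpy as np

fp = np.array([-15707075, -284140225])
bytes_be = fp.astype('>i4').view(np.uint8).reshape(-1, 4)   # same array as Out[281]
bits = np.unpackbits(bytes_be, axis=1).T.astype(bool)       # same result as before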
As a sanity check that the string-formatting method and np.unpackbits agree:

import numpy as np

fp = np.array([-15707075, -284140225])
# Reference: format each value as a binary string padded to width 32 and test each character.
expected = np.transpose(np.array([[b == '1' for b in list('{:32b}'.format(i & 0xffffffff))] for i in fp]))
# Fast version using np.unpackbits.
result = np.unpackbits(fp.astype('>i4').view('4,uint8'), axis=1).T.astype(bool)
assert (expected == result).all()
Using np.unpackbits is about 72x faster (on my machine) even for a small
array, and the speed advantage increases with the length of fp:
In [241]: fp = np.random.random(size=100).view('int32')
In [276]: %timeit expected = (np.transpose(np.array([[b == '1' for b in list('{:32b}'.format(i & 0xffffffff))] for i in fp])))
100 loops, best of 3: 2.22 ms per loop
In [277]: %timeit result = (np.unpackbits(fp.astype('>i4').view('4,uint8'), axis=1).T.astype(bool))
10000 loops, best of 3: 30.6 µs per loop
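For reuse, both directions can be wrapped in small helpers -- a sketch, with
illustrative names (int32_to_bits, bits_to_int32); the inverse relies on
np.packbits defaulting to most-significant-bit-first order:

import numpy as np

def int32_to_bits(fp):
    # (32, len(fp)) boolean array, most significant bit first.
    b = fp.astype('>i4').view(np.uint8).reshape(-1, 4)
    return np.unpackbits(b, axis=1).T.astype(bool)

def bits_to_int32(bits):
    # Pack the bits back into big-endian bytes, then reinterpret as int32.
    packed = np.packbits(bits.T.astype(np.uint8), axis=1)   # shape (len(fp), 4)
    return packed.view('>i4').ravel()

fp = np.array([-15707075, -284140225])
assert (bits_to_int32(int32_to_bits(fp)) == fp).all()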