How can I get a string of 0s and 1s, according to the bits of the IEEE 754 representation of a 32 bit float?
For example, given an input 1.00, the result should be '00111111100000000000000000000000'.
You can do that with the struct module:
import struct
def binary(num):
    return ''.join('{:0>8b}'.format(c) for c in struct.pack('!f', num))
That packs it as a network byte-ordered float, then converts each of the resulting bytes into an 8-bit binary representation and concatenates them:
>>> binary(1)
'00111111100000000000000000000000'
Edit: There was a request to expand the explanation. I'll expand it using intermediate variables and a comment for each step (Python 2 syntax here, matching the output shown below).
def binary(num):
    # Struct can provide us with the float packed into bytes. The '!' ensures that
    # it's in network byte order (big-endian) and the 'f' says that it should be
    # packed as a float. Alternatively, for double-precision, you could use 'd'.
    packed = struct.pack('!f', num)
    print 'Packed: %s' % repr(packed)

    # For each character in the returned string, we'll turn it into its corresponding
    # integer code point
    #
    # [62, 163, 215, 10] = [ord(c) for c in '>\xa3\xd7\n']
    integers = [ord(c) for c in packed]
    print 'Integers: %s' % integers

    # For each integer, we'll convert it to its binary representation.
    binaries = [bin(i) for i in integers]
    print 'Binaries: %s' % binaries

    # Now strip off the '0b' prefix from each of these
    stripped_binaries = [s.replace('0b', '') for s in binaries]
    print 'Stripped: %s' % stripped_binaries

    # Pad each byte's binary representation with 0s to make sure it has all 8 bits:
    #
    # ['00111110', '10100011', '11010111', '00001010']
    padded = [s.rjust(8, '0') for s in stripped_binaries]
    print 'Padded: %s' % padded

    # At this point, we have each of the bytes for the network byte-ordered float
    # in a list as binary strings. Now we just concatenate them to get the total
    # representation of the float:
    return ''.join(padded)
And the result for a few examples:
>>> binary(1)
Packed: '?\x80\x00\x00'
Integers: [63, 128, 0, 0]
Binaries: ['0b111111', '0b10000000', '0b0', '0b0']
Stripped: ['111111', '10000000', '0', '0']
Padded: ['00111111', '10000000', '00000000', '00000000']
'00111111100000000000000000000000'
>>> binary(0.32)
Packed: '>\xa3\xd7\n'
Integers: [62, 163, 215, 10]
Binaries: ['0b111110', '0b10100011', '0b11010111', '0b1010']
Stripped: ['111110', '10100011', '11010111', '1010']
Padded: ['00111110', '10100011', '11010111', '00001010']
'00111110101000111101011100001010'
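If you need the 64-bit double-precision pattern, the same idea works with the 'd' format, and the whole packed value can be formatted in one step rather than byte by byte. A minimal sketch of that variant (my adaptation, not part of the answer above):

import struct

def binary64(num):
    # Pack as a big-endian double ('!d'), read the 8 bytes as one integer,
    # and zero-pad the binary form to 64 digits in a single format call.
    return format(int.from_bytes(struct.pack('!d', num), 'big'), '064b')

print(binary64(1.0))  # 0011111111110000 followed by 48 zeros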
Note: for double precision, swap the 'f' format for 'd', i.e. ''.join('{:0>8b}'.format(c) for c in struct.pack('!d', num)), and you can also strip and rjust once to 32 (or 64) bits rather than once per byte.
Here's an ugly one ...
>>> import struct
>>> bin(struct.unpack('!i',struct.pack('!f',1.0))[0])
'0b111111100000000000000000000000'
Basically, I just used the struct module to convert the float to an int ...
Here's a slightly better one using ctypes:
>>> import ctypes
>>> bin(ctypes.c_uint32.from_buffer(ctypes.c_float(1.0)).value)
'0b111111100000000000000000000000'
Basically, I construct a float and use the same memory location, but I tag it as a c_uint32. The c_uint32's value is a python integer which you can use the builtin bin function on.
Note: by switching the types we can do the reverse operation as well:
>>> ctypes.c_float.from_buffer(ctypes.c_uint32(int('0b111111100000000000000000000000', 2))).value
1.0
Also, for a double-precision 64-bit float, we can use the same trick with ctypes.c_double and ctypes.c_uint64 instead.
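A quick sketch of that double-precision variant (my own illustration of the same trick, not code from the answer):

import ctypes

# Reinterpret the 8 bytes of a C double as an unsigned 64-bit integer.
bits64 = ctypes.c_uint64.from_buffer(ctypes.c_double(1.0)).value
print(bin(bits64))  # 0b1111111111 followed by 52 zeros (bin drops the leading 0 bits)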
Notes from the comments: the struct one-liner assumes sizeof(int) == sizeof(float) and relies on '!' to force a 4-byte 'i'; to support negative floats, unpack with the unsigned '!I' format instead, since bin() on a negative Python int prints a minus sign rather than the sign bit. The same caveat applies to the ctypes variant: with a signed c_int in place of c_uint32, -1.0 comes out as '-0b1000000100000000000000000000000' (the bit pattern of 4.0 with a minus sign). On Python 3.2+ there is also int.from_bytes().
Found another solution using the bitstring module.
import bitstring
f1 = bitstring.BitArray(float=1.0, length=32)
print(f1.bin)
Output:
00111111100000000000000000000000
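bitstring can also go the other way, interpreting a bit pattern as a float; a small sketch, assuming the usual BitArray float property:

import bitstring

# Build a 32-bit array from the bit string and read it back as an IEEE 754 float.
b = bitstring.BitArray(bin='00111111100000000000000000000000')
print(b.float)  # 1.0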
For the sake of completeness, you can achieve this with numpy using:
import numpy as np

f = 1.00
int32bits = np.asarray(f, dtype=np.float32).view(np.int32).item()  # item() optional
You can then print this, with padding, using the b format specifier
print('{:032b}'.format(int32bits))
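A 64-bit variant along the same lines (my sketch, not part of the answer), using an unsigned view so that negative floats also format cleanly:

import numpy as np

f = -1.00
# View the float64 bytes as an unsigned 64-bit integer, then zero-pad to 64 binary digits.
uint64bits = np.asarray(f, dtype=np.float64).view(np.uint64).item()
print('{:064b}'.format(uint64bits))  # 1011111111110000 followed by 48 zeros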
Note from the comments: np.asarray(...).view(np.int32) gives an np.int32 object; chain .item() (as above) if you truly want a Python int, e.g. int32bits = np.float32(1.0).view(np.int32).item().
With these two simple functions (Python >= 3.6) you can easily convert a float to binary and vice versa, for IEEE 754 binary64.
import struct
def bin2float(b):
    ''' Convert binary string to a float.

    Attributes:
        :b: Binary string to transform.
    '''
    h = int(b, 2).to_bytes(8, byteorder="big")
    return struct.unpack('>d', h)[0]

def float2bin(f):
    ''' Convert float to 64-bit binary string.

    Attributes:
        :f: Float number to transform.
    '''
    [d] = struct.unpack(">Q", struct.pack(">d", f))
    return f'{d:064b}'
For example:
print(float2bin(1.618033988749894))
print(float2bin(3.14159265359))
print(float2bin(5.125))
print(float2bin(13.80))
print(bin2float('0011111111111001111000110111011110011011100101111111010010100100'))
print(bin2float('0100000000001001001000011111101101010100010001000010111011101010'))
print(bin2float('0100000000010100100000000000000000000000000000000000000000000000'))
print(bin2float('0100000000101011100110011001100110011001100110011001100110011010'))
The output is:
0011111111111001111000110111011110011011100101111111010010100100
0100000000001001001000011111101101010100010001000010111011101010
0100000000010100100000000000000000000000000000000000000000000000
0100000000101011100110011001100110011001100110011001100110011010
1.618033988749894
3.14159265359
5.125
13.8
I hope you like it, it works perfectly for me.
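If you want the same pair of helpers for 32-bit single precision, a sketch of the analogous versions (my adaptation, not part of the answer) just swaps the widths and format codes:

import struct

def bin2float32(b):
    # 32-bit pattern -> 4 big-endian bytes -> single-precision float
    h = int(b, 2).to_bytes(4, byteorder="big")
    return struct.unpack('>f', h)[0]

def float2bin32(f):
    # float -> big-endian single-precision bytes -> unsigned 32-bit int -> bit string
    [d] = struct.unpack(">I", struct.pack(">f", f))
    return f'{d:032b}'

print(float2bin32(1.0))                                 # 00111111100000000000000000000000
print(bin2float32('00111111100000000000000000000000'))  # 1.0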
From the comments: float2bin(1.0) returns 0011111111110000000000000000000000000000000000000000000000000000 rather than a pattern with an unbiased exponent and an explicit leading 1, because 1.0 is normalized as 1.0*2^0, the leading 1 bit of the significand is implicit (only the following 52 bits are stored), and the exponent is stored in biased form, incremented by 1023, giving the field 01111111111. struct is used rather than int.to_bytes() because the function receives a float, which has no to_bytes() method.
This problem is more cleanly handled by breaking it into two parts.
The first is to convert the float into an int with the equivalent bit pattern:
import struct
def float32_bit_pattern(value):
    return sum(ord(b) << 8*i for i, b in enumerate(struct.pack('f', value)))
Python 3 doesn't require ord to convert the bytes to integers (iterating over bytes already yields ints), so there the above simplifies a little:
def float32_bit_pattern(value):
    return sum(b << 8*i for i, b in enumerate(struct.pack('f', value)))
Next convert the int to a string:
def int_to_binary(value, bits):
    return bin(value).replace('0b', '').rjust(bits, '0')
Now combine them:
>>> int_to_binary(float32_bit_pattern(1.0), 32)
'00111111100000000000000000000000'
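On Python 3 the byte-summing loop can also be replaced entirely; a small sketch (my variant, not the answer's code) that packs big-endian so no assumption about the machine's byte order is needed:

import struct

def float32_bit_pattern_be(value):
    # Pack as a big-endian float ('!f') and read the 4 bytes directly as an integer.
    return int.from_bytes(struct.pack('!f', value), 'big')

print(format(float32_bit_pattern_be(1.0), '032b'))  # '00111111100000000000000000000000'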
From the comments: on Python 3.2+, float32_bit_pattern can simply be lambda x: int.from_bytes(struct.pack("f", x), byteorder="little"); and don't forget to import struct.
Piggy-backing on Dan's answer, here is a colored version for Python 3:
import struct
BLUE = "\033[1;34m"
CYAN = "\033[1;36m"
GREEN = "\033[0;32m"
RESET = "\033[0;0m"
def binary(num):
    return [bin(c).replace('0b', '').rjust(8, '0') for c in struct.pack('!f', num)]

def binary_str(num):
    # fp32 layout: 1 sign bit, 8 exponent bits, 23 mantissa bits
    bits = ''.join(binary(num))
    return ''.join([BLUE, bits[:1], GREEN, bits[1:9], CYAN, bits[9:], RESET])

def binary_str_fp16(num):
    # rough fp16 view: the sign bit, the low 5 bits of the fp32 exponent, the top 10 mantissa bits
    bits = ''.join(binary(num))
    return ''.join([BLUE, bits[:1], GREEN, bits[1:9][-5:], CYAN, bits[9:][:10], RESET])
x = 0.7
print(x, "as fp32:", binary_str(0.7), "as fp16 is sort of:", binary_str_fp16(0.7))
After browsing through lots of similar questions I've written something which hopefully does what I wanted.
import struct

f = 1.00
negative = False
if f < 0:
    f = f*-1
    negative = True
s = struct.pack('>f', f)
p = struct.unpack('>l', s)[0]
hex_data = hex(p)
scale = 16
num_of_bits = 32
binrep = bin(int(hex_data, scale))[2:].zfill(num_of_bits)
if negative:
    binrep = '1' + binrep[1:]
binrep is the result.
Each part will be explained.
f = 1.00
negative = False
if f < 0:
    f = f*-1
    negative = True
This converts the number to a positive one if it is negative and records that fact in the negative flag. The reason is that the difference between the positive and negative binary representations is just the first bit, and this was simpler than figuring out what goes wrong when doing the whole process with negative numbers.
s = struct.pack('>f', f) #'?\x80\x00\x00'
p = struct.unpack('>l', s)[0] #1065353216
hex_data = hex(p) #'0x3f800000'
s holds the packed bytes of f, but not yet in the pretty form I need. That's where p comes in: it is the integer with the same bit pattern as s. Then another conversion gives a readable hex string.
scale = 16
num_of_bits = 32
binrep = bin(int(hex_data, scale))[2:].zfill(num_of_bits)
if negative:
    binrep = '1' + binrep[1:]
scale is 16, the base of the hex string. num_of_bits is 32, since a float is 32 bits; it is used to zero-fill the result up to 32 places. I got the code for binrep from another question. If the number was negative, just set the first bit to 1.
I know this is ugly, but I didn't find a nicer way and I needed it fast. Comments are welcome.
From the comments: bin(struct.unpack('!I', struct.pack('!f', -1.))[0])[2:].zfill(32) handles positive and negative floats in one line, since the unsigned '!I' format keeps the sign bit in place.
This is a little more than was asked, but it was what I needed when I found this entry. This code will give the mantissa, base and sign of the IEEE 754 32-bit float.
import ctypes
def binRep(num):
    binNum = bin(ctypes.c_uint.from_buffer(ctypes.c_float(num)).value)[2:]
    print("bits: " + binNum.rjust(32, "0"))
    mantissa = "1" + binNum[-23:]
    print("sig (bin): " + mantissa.rjust(24))
    mantInt = int(mantissa, 2)/2**23
    print("sig (float): " + str(mantInt))
    base = int(binNum[-31:-23], 2)-127
    print("base:" + str(base))
    sign = 1-2*("1" == binNum[-32:-31].rjust(1, "0"))
    print("sign:" + str(sign))
    print("recreate:" + str(sign*mantInt*(2**base)))
binRep(-0.75)
output:
bits: 10111111010000000000000000000000
sig (bin): 110000000000000000000000
sig (float): 1.5
base:-1
sign:-1
recreate:-0.75
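The same decomposition can also be done with plain shifts and masks on the 32-bit pattern; a sketch of that approach under the IEEE 754 single-precision layout (my illustration, with a hypothetical helper name ieee754_fields):

import struct

def ieee754_fields(num):
    bits = struct.unpack('!I', struct.pack('!f', num))[0]
    sign = (bits >> 31) & 0x1          # 1 sign bit
    exponent = (bits >> 23) & 0xFF     # 8 exponent bits, biased by 127
    mantissa = bits & 0x7FFFFF         # 23 mantissa bits, implicit leading 1 for normal numbers
    return sign, exponent - 127, 1 + mantissa / 2**23

print(ieee754_fields(-0.75))  # (1, -1, 1.5)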
Convert a float between 0 and 1:
def float_bin(n, places=3):
    if n < 0 or n > 1:
        return "ERROR, n must be in 0..1"
    answer = "0."
    while n > 0:
        if len(answer) - 2 == places:
            return answer
        b = n * 2
        if b >= 1:
            answer += '1'
            n = b - 1
        else:
            answer += '0'
            n = b
    return answer
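A quick usage check (my example), matching the test results in the comment below:

print(float_bin(0.625))            # 0.101
print(float_bin(0.1, places=10))   # 0.0001100110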
From the comments: the comparison must be b >= 1, not b > 1 (as shown above). Test results: 0 is "0.", 0.5 is "0.1", 0.25 is "0.01", 0.125 is "0.001", and 0.1 is "0.0001100110" (with places=10).
Several of these answers did not work as written with Python 3, or did not give the correct representation for negative floating point numbers. I found the following to work for me (though it gives the 64-bit representation, which is what I needed):
import struct

def float_to_binary_string(f):
    def int_to_8bit_binary_string(n):
        stg = bin(n).replace('0b', '')
        fillstg = '0' * (8 - len(stg))
        return fillstg + stg
    return ''.join(int_to_8bit_binary_string(int(b)) for b in struct.pack('>d', f))
>d is for big-endian doubles (8-byte numbers) and <d is for little-endian doubles.
I made a very simple one. Please check it, and if you think there is any mistake, let me know. It works fine for me.
sds = float(input("Enter the number : "))
sf = float("0." + (str(sds).split(".")[-1]))
aa = []
while len(aa) < 15:
    dd = round(sf*2, 5)
    if dd >= 1:
        aa.append(1)
        sf = dd - 1
    else:
        sf = round(dd, 5)
        aa.append(0)
des = aa[:-1]
print("\n")
AA = [str(i) for i in des]
print("So the Binary Of : %s>>>" % sds, bin(int(str(sds).split(".")[0])).replace("0b", '') + "." + "".join(AA))
Or, for an integer, just use bin(integer).replace("0b", '').
Here's a very terse version of @JavDomGum's answer, for those who want something quick and dirty that's easy to paste into a debug console.
import struct
b2f = lambda bi: struct.unpack(">d", int(bi, 2).to_bytes(8, "big"))[0]
f2b = lambda fl: f'{struct.unpack(">Q", struct.pack(">d", fl))[0]:064b}'
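A quick sanity check (my example) in a console:
>>> f2b(1.0)
'0011111111110000000000000000000000000000000000000000000000000000'
>>> b2f('0011111111110000000000000000000000000000000000000000000000000000')
1.0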
You can use .format for the easiest representation of bits, in my opinion. My code would look something like this (Python 2, since it relies on ord over the packed string):
import struct

def fto32b(flt):
    # is given a 32 bit float value and converts it to a binary string
    if isinstance(flt, float):
        # THE FOLLOWING IS AN EXPANDED REPRESENTATION OF THE ONE LINE RETURN
        # packed = struct.pack('!f', flt)           <- get the packed bytes in (!) big-endian format of a (f) float
        # integers = []
        # for c in packed:
        #     integers.append(ord(c))               <- change each entry into an int
        # binaries = []
        # for i in integers:
        #     binaries.append("{0:08b}".format(i))  <- get the 8-bit binary representation of each int (e.g. 00100101)
        # binarystring = ''.join(binaries)          <- join all the bytes together
        # return binarystring
        return ''.join(["{0:08b}".format(i) for i in [ord(c) for c in struct.pack('!f', flt)]])
    return None
Output:
>>> a = 5.0
>>> fto32b(a)
'01000000101000000000000000000000'
>>> b = 1.0
>>> fto32b(b)
'00111111100000000000000000000000'
Let's use numpy!
import numpy as np
def binary(num, string=True):
    bits = np.unpackbits(np.array([num]).view('u1'))
    if string:
        return np.array2string(bits, separator='')[1:-1]
    else:
        return bits
e.g.,
binary(np.pi)
# '0001100000101101010001000101010011111011001000010000100101000000'
binary(np.pi, string=False)
# array([0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1,
# 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0,
# 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0],
# dtype=uint8)
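Note that unpackbits here walks the bytes in the machine's native (usually little-endian) order, so the string above is byte-reversed compared with the conventional notation used in the other answers. A sketch (my addition) that forces big-endian bytes first:

import numpy as np

# Store the value as an explicitly big-endian float64, then view its bytes in order.
bits = np.unpackbits(np.array([np.pi], dtype='>f8').view(np.uint8))
print(np.array2string(bits, separator='')[1:-1])
# 0100000000001001001000011111101101010100010001000010110100011000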