I am trying to read a floating-point number stored in a specific binary format in a char array. The format is as follows, where each letter represents a binary digit:
SEEEEEEE MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM
The format is explained more clearly on this website. Basically, the exponent is in Excess-64 notation and the mantissa is normalized to values less than 1 and greater than 1/16. To get the true value of the number, the mantissa is multiplied by 16 raised to the power of the true value of the exponent.
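In other words, value = (-1)^S * M * 16^(E-64). A minimal sketch of pulling the sign and the true exponent out of the first byte (using 0x3f, the first byte of the sample array below; the names are illustrative):

#include <cstdio>

int main() {
    unsigned char first = 0x3f;      // the SEEEEEEE byte
    int s = first >> 7;              // sign bit S
    int e = (first & 0x7f) - 64;     // Excess-64: stored exponent minus the bias
    std::printf("s = %d, e = %d\n", s, e);   // prints s = 0, e = -1
}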
What I've done so far is extract the sign and the exponent values, but I'm having trouble extracting the mantissa. The implementation I'm trying is brute force and probably far from ideal, but it seemed the simplest to me. Basically, it is:
unsigned long a = 0;
// accumulate the 7 mantissa bytes, most significant first
for(int i = 0; i < 7; i++)
    a += static_cast<unsigned long>(m_bufRecord[index+1+i]) << ((6-i)*8);
It takes each 8-bit byte stored in the char array and shifts it left according to its index in the array. So if the array I have is as follows:
{0x3f, 0x28, 0xf5, 0xc2, 0x8f, 0x5c, 0x28, 0xf6}
I'm expecting a to take the value:
0x28f5c28f5c28f6
However, with the above implementation a takes the value:
0x27f4c18f5c27f6
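For completeness, here is a self-contained sketch of the same loop run on the sample array. It uses unsigned long long so that 56 bits certainly fit; note that the printed value can depend on whether plain char is signed on the platform:

#include <cstdio>

int main() {
    char buf[] = { 0x3f, 0x28, (char)0xf5, (char)0xc2, (char)0x8f, 0x5c, 0x28, (char)0xf6 };
    unsigned long long a = 0;
    for(int i = 0; i < 7; i++)
        a += static_cast<unsigned long long>(buf[1+i]) << ((6-i)*8);
    std::printf("%llx\n", a);   // prints 27f4c18f5c27f6 where char is signed
}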
Later, I convert the long integer to a floating-point number using the following code:
double m = a;
m = m*(pow(16, e-14));   // a holds the mantissa scaled by 16^14 (14 hex digits)
m = (s==1)?-m:m;         // apply the sign bit
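As a sanity check of this last step, plugging in the values the sample array should produce (assuming the mantissa were extracted correctly) gives a sensible result:

#include <cmath>
#include <cstdio>

int main() {
    unsigned long long a = 0x28f5c28f5c28f6ULL;  // the expected 56-bit mantissa
    int e = 0x3f - 64;                           // true exponent from the first byte: -1
    double m = a * std::pow(16.0, e - 14);       // a is the fraction scaled by 16^14
    std::printf("%g\n", m);                      // prints 0.01
}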
What is going wrong here? Also, I'd love to know how a conversion like this would ideally be implemented.
Does long actually have enough size for 56 bits? In any event, you could write the binary representation of an IEEE 754 double directly without the intermediate step.
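Regarding the width question, a compile-time check along these lines (a sketch) would flag a 32-bit unsigned long:

#include <climits>

static_assert(sizeof(unsigned long) * CHAR_BIT >= 56,
              "unsigned long is too narrow for the 56-bit mantissa");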