
What is the exact binary format of an unsigned long long type (at least in Visual Studio)?

I am doing some bit-wise operations with aliasing pointers: I set bits through operations on unsigned char pointers, then do some other operations through unsigned long long pointers (aliases).

The byte layout of unsigned long long looks weird, and because of this it messes up my results; the bit-shift operations also seem to behave strangely:

From a bit-mask test it seems that the data is arranged with the LSB at the lowest address and the MSB at the highest, which makes it look as though a >> shift moves the data toward the more significant bytes instead of the less significant ones. Is this correct?

The compiler is ICC 13.0 and the OS is Windows 7 64-bit.
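
A minimal sketch of the kind of setup described above (the variable names and bit positions are illustrative, not the actual code), assuming a little-endian x86 target:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long v = 0;                    /* 64-bit value operated on as a whole */
        unsigned char *bytes = (unsigned char *)&v;  /* aliasing pointer used to set bits   */

        bytes[0] |= 1u << 3;  /* set bit 3 of the byte at the lowest address */

        /* On a little-endian machine that byte is the LEAST significant one,
           so the 64-bit value becomes 0x0000000000000008, not 0x08000000...  */
        printf("v      = 0x%016llx\n", v);
        printf("v >> 1 = 0x%016llx\n", v >> 1);  /* shifts act on the value: prints 0x...04 */
        return 0;
    }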

3 Comments

  • Perhaps an explanation of endianness is in order. Commented Feb 19, 2013 at 5:32
  • Is it possible that you are doing pointer trickery (casting to char*) to extract single bytes? In this case, your confusion probably comes from endianness issues. Commented Feb 19, 2013 at 5:33
  • I use an unsigned char * to set bits and aliasing pointers to handle other math ops; it seems to mess things up. Commented Feb 19, 2013 at 5:41

1 Answer


The byte layout you describe is little-endian, which is the layout that Intel processors use in general. Bit shifts don't depend on the memory layout of the number, only on its value, so endianness should not affect the result of your shift operations. If you are using pointer aliasing, however, such as casting to a char * and using pointer arithmetic to extract pieces of the value, you will get endianness-dependent behaviour which, on little-endian Intel processors, might not be what you were expecting.
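
A minimal sketch of the point (values are illustrative), assuming a little-endian x86 machine:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned long long v = 0x0102030405060708ULL;
        unsigned char bytes[sizeof v];

        /* Inspecting the object representation through unsigned char is allowed
           by the aliasing rules; the byte order seen here is the machine's
           endianness, not part of the value. */
        memcpy(bytes, &v, sizeof v);

        for (size_t i = 0; i < sizeof v; ++i)
            printf("byte %zu = 0x%02x\n", i, bytes[i]);

        /* On a little-endian machine this prints 0x08, 0x07, ..., 0x01:
           the least significant byte sits at the lowest address. Shifts, by
           contrast, operate on the value: v >> 8 always discards the least
           significant byte regardless of memory layout. */
        printf("v >> 8 = 0x%016llx\n", v >> 8);
        return 0;
    }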


Comments

If you want to extract individual bytes without caring about endianness you can do a shift-and-mask operation, like (x >> 16) & 0xff to get the third-lowest-order byte (see the sketch after these comments).
Is there a serious performance hit involved in doing this? Thanks. The main reason there are several aliasing pointers here is for performance; I can use a 64-bit-wide data type instead (if necessary).
I highly doubt you'll notice a difference, and the compiler will probably optimize it regardless. Have you actually noticed the need for that kind of optimization?
@user0002128: "I can use 64 bit-wide data type instead"... if you care about performance, profile. I'd be surprised if it was any slower (it may well be faster) to filter out specific bits (using & and a compile time constant mask) from a 64 bit value and test for 0/non-zero than to do some filter/tests on an 8-bit slice. Same for setting bits with |=. I think you're in premature-optimisation quicksand.
@Tony D: sure, that's a last resort. The reason I stick to unsigned char is faster bit-wise set/access. I am now changing my code to account for the little-endian layout, so it is OK for now. As for why 64-bit: for bit-counting, 64-bit-wide data is faster since there are several SIMD instructions for such tasks, plus a 256-bit alias for SIMD bitwise logical operations.
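
A sketch of the shift-and-mask approach suggested in the comments above (the helper names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Extract byte i (0 = least significant) without caring about
       endianness: shift the value, then mask. */
    static unsigned byte_at(uint64_t x, unsigned i)
    {
        return (unsigned)((x >> (8u * i)) & 0xffu);
    }

    /* Set bit n directly on the 64-bit value instead of going through
       an aliasing unsigned char pointer. */
    static uint64_t set_bit(uint64_t x, unsigned n)
    {
        return x | (1ULL << n);
    }

    int main(void)
    {
        uint64_t v = 0;
        v = set_bit(v, 17);                                          /* v == 0x20000      */
        printf("third-lowest byte: 0x%02x\n", byte_at(v, 2));        /* prints 0x02       */
        return 0;
    }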
