The following derives from this question, but is distinct.
Consider the following code sample:
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    uint32_t *value = malloc(sizeof(uint32_t));
    *value = 0xAAAABBBB;
    int16_t *subset = (int16_t *)value;  // on a little-endian system this reads the low half-word, 0xBBBB
    uint32_t test = 0x00000000;
    uint32_t test2 = 0x00000000;
    // IMPORTANT!!!!
    // *subset vs *subset & 0xFFFF
    test = (*subset << 16) | *subset & 0xFFFF;
    test2 = (*subset << 16) | *subset;
    printf("test:  0x%08" PRIx32 "\n", test);   // prints 0xbbbbbbbb
    printf("test2: 0x%08" PRIx32 "\n", test2);  // prints 0xffffbbbb
    free(value);
}
As you can see, *subset is only 16 bits wide. As such (and as commenters on the previous post stated), the & 0xFFFF should be redundant, since the type is only 16 bits wide.
This is incorrect: when tested on my compilers (gcc 15.2.1 on Linux, and gcc for bare-metal ARM), the & operator acts beyond the bit length of *subset, out to the bit length of the literal 0xFFFF (which, going by the answers to the previous question, I believe is 32 bits), and so it appears to act on bits that sit 16 bits below the address of subset.
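For what it's worth, the same thing reproduces without the pointer cast at all. Here is a reduced sketch (the variable names are mine) that also prints the value after promotion, which is where the FFFF shows up:

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int16_t s = (int16_t)0xBBBB;  // -17477; sign bit set (implementation-defined conversion, wraps on gcc)
    printf("s promoted:   0x%08x\n", (unsigned)s);          // 0xffffbbbb
    uint32_t with_mask = (s << 16) | (s & 0xFFFF);
    uint32_t without   = (s << 16) | s;
    printf("with mask:    0x%08" PRIx32 "\n", with_mask);   // 0xbbbbbbbb
    printf("without mask: 0x%08" PRIx32 "\n", without);     // 0xffffbbbb
}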
Is this a bug in GCC, or is it within spec? And why are the first 16 bits FFFF?
Note: my system is little-endian.
*subset gets promoted to a 32-bit int in the left-shift operation. I'm going to stand by . . . an expert should be along shortly. :-)
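If that promotion explanation is right, then reading the half-word through an unsigned 16-bit type should remove the sign extension, and with it the need for the mask. A quick sketch of that idea (keeping the question's little-endian layout and its aliasing cast, so the same caveats apply):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    uint32_t *value = malloc(sizeof(uint32_t));
    *value = 0xAAAABBBB;
    uint16_t *usub = (uint16_t *)value;  // unsigned view of the low half-word
    // uint16_t promotes to int as 0x0000BBBB -- no sign extension, so no mask
    // is needed; the cast to uint32_t keeps the shift out of the sign bit.
    uint32_t dup = ((uint32_t)*usub << 16) | *usub;
    printf("dup: 0x%08" PRIx32 "\n", dup);  // 0xbbbbbbbb
    free(value);
}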