This happened as the result of integer promotions. In most cases where an integer type smaller than int is used in an expression, it is first promoted to type int. On most platforms, this is a 32-bit type.
This is spelled out in section 6.3.1.1p2 of the C24 (draft) standard:
The following may be used in an expression wherever an int or unsigned int may be used:
- An object or expression with an integer type (other than int or unsigned int) whose integer conversion rank is less than or equal to the rank of int and unsigned int.
- A bit-field of type bool, int, signed int, or unsigned int.

The value from a bit-field of a bit-precise integer type is converted to the corresponding bit-precise integer type. If the original type is not a bit-precise integer type (6.2.5): if an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int; otherwise, it is converted to an unsigned int. These are called the integer promotions. All other types are unchanged by the integer promotions.
So assuming an int is at least 32 bits on your platform, a uint16_t has a conversion rank no greater than that of int, and since int can represent every uint16_t value, the value is promoted to type int.
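As a quick way to see the promotion, here is a minimal sketch (my own illustration, not code from the question) that uses _Generic to report the type of the shift expression; _Generic does not evaluate its controlling expression, so no shift actually happens here:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t value = 0xBEEF;

    /* The left operand of << is promoted before the shift,
     * so the result type of the expression is int. */
    puts(_Generic(value << 16,
                  int:          "value << 16 has type int",
                  unsigned int: "value << 16 has type unsigned int",
                  default:      "some other type"));
    return 0;
}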
There is still a problem, however. In the initialization int16_t value = 0xBEEF, the value 0xBEEF (48879) doesn't fit in an int16_t, so it is converted in an implementation-defined manner, typically to a negative value. This negative value is promoted to type int in the expression value << 16, and that negative value is left-shifted. Left-shifting a negative value triggers undefined behavior.
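For illustration, a minimal sketch (assuming a typical two's-complement platform with 32-bit int) showing the implementation-defined result of that initialization:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int16_t value = 0xBEEF;   /* 48879 doesn't fit; implementation-defined result */
    printf("%d\n", value);    /* typically prints -16657 */
    /* value << 16 would then left-shift a negative int: undefined behavior */
    return 0;
}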
Changing the type of value to uint16_t isn't enough. If that's all that is done, the (now positive) value 0xBEEF, promoted to int and left-shifted by 16, yields 0xBEEF0000, which doesn't fit into a 32-bit signed int and again triggers undefined behavior.
An explicit cast to an unsigned type is required here to get well-defined behavior. Also note that the bitwise AND with 0xFFFF is unnecessary, since no bits would be stripped off anyway.
uint16_t value = 0xBEEF;
data[i] = ((uint32_t)value << 16) | value;   /* shift is performed in uint32_t, not int */
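Put in context, here is a minimal sketch of the corrected code; the buffer size and the loop are assumptions for illustration, not the question's exact code:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t value = 0xBEEF;
    uint32_t data[4];                        /* stand-in for the question's buffer */
    size_t data_size = sizeof data / sizeof data[0];

    for (size_t i = 0; i < data_size; i++) {
        /* cast before shifting so the whole expression is evaluated as uint32_t */
        data[i] = ((uint32_t)value << 16) | value;
    }

    printf("0x%08" PRIX32 "\n", data[0]);    /* prints 0xBEEFBEEF */
    return 0;
}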
Because of the integer promotions, anything narrower than int will be promoted to int. So value << 16 is essentially the same as ((int) value) << 16; the value 16 is already an int. If value == 0x0000BEEF, then value << 16 will be 0xBEEF0000, and value & 0xFFFF will be 0x0000BEEF. That leads to 0xBEEF0000 | 0x0000BEEF, which is equal to 0xBEEFBEEF.

There is no need to cast the result of malloc (or any function returning the type void *).

Even if value had a 16-bit type such as int16_t and were not promoted, the result would NOT be guaranteed zero; shifting with a count greater than or equal to the width (number of nonpadding bits) is undefined behavior. Also, treating a malloc of data_size bytes as large enough to contain data_size uint32_t elements is seriously wrong on any implementation that has int16_t (with int16_t present, a byte is at most 16 bits, so each uint32_t occupies at least two bytes), and on nearly all others as well.
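To illustrate that last point, a minimal sketch of an allocation sized in elements rather than bytes (the value of data_size here is just an assumed example):

#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    size_t data_size = 16;   /* number of uint32_t elements wanted (assumed) */

    /* allocate data_size elements, not data_size bytes; no cast of malloc's result needed */
    uint32_t *data = malloc(data_size * sizeof *data);
    if (data == NULL)
        return 1;

    free(data);
    return 0;
}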