The following code combines two bytes into one 16-bit integer.
unsigned char byteOne = 0b00000010; // 2
unsigned char byteTwo = 0b00000011; // 3
uint16_t i = 0b0000000000000000;
i = (byteOne << 8) | byteTwo; //515
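For reference, a complete, runnable version of the snippet (the includes, main, and printf call are my additions) prints the combined value:

#include <cstdint>
#include <cstdio>

int main() {
    unsigned char byteOne = 0b00000010; // 2
    unsigned char byteTwo = 0b00000011; // 3
    std::uint16_t i = (byteOne << 8) | byteTwo;
    std::printf("%u\n", static_cast<unsigned>(i)); // prints 515
}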
I'm trying to understand WHY this code works.
If we break this down and just focus on one byte, byteOne: this is an 8-bit value equal to 00000010. So, left-shifting it by 8 bits should always yield 00000000 (as the bits shifted off the end are lost), right? This seems to be the case with the following code:
uint8_t i = (byteOne << 8); // equal to 0, always, no matter what 8-bit value is assigned to byteOne
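For example, this self-contained check (the includes and the printf call are my additions) always prints 0:

#include <cstdint>
#include <cstdio>

int main() {
    unsigned char byteOne = 0b00000010;
    std::uint8_t i = byteOne << 8;                 // the stored result is 0
    std::printf("%u\n", static_cast<unsigned>(i)); // prints 0 for any 8-bit byteOne
}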
But if this way of thinking were correct, then
uint16_t i = (byteOne << 8) | byteTwo;
Should be equivalent to
uint16_t i = 0 | byteTwo; // Because 0b00000010 << 8 == 0b00000000
Or just
uint16_t i = byteTwo; // Because 0b00000000 | 0b00000011 == 0b00000011
But they're not equivalent, and this is throwing me off. Is byteOne being cast/converted to a 16-bit int before the shift operation? That would explain what's going on here, as then
0b0000000000000010 << 8 == 0b0000001000000000 // 512
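And storing the shifted value straight into a 16-bit variable (again a small self-contained check, with the includes and printf added by me) does show 512 rather than 0:

#include <cstdint>
#include <cstdio>

int main() {
    unsigned char byteOne = 0b00000010;
    std::uint16_t wide = byteOne << 8;                // nothing is truncated here
    std::printf("%u\n", static_cast<unsigned>(wide)); // prints 512
}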
If byteOne isn't being converted to a 16-bit int before the shift, then please explain why (byteOne << 8) isn't evaluating to 0 when the result is assigned to a 16-bit integer.
The operands are promoted to int from small types. Have you tried auto i = (byteOne << 8); and seen what type the compiler chooses to give i (hence what the type of the expression byteOne << 8 is)?
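For example, a minimal sketch along those lines (the static_assert lines are just one way of making the deduced type visible; this assumes C++17 for std::is_same_v):

#include <type_traits>

int main() {
    unsigned char byteOne = 0b00000010;
    auto i = (byteOne << 8); // operands undergo integer promotion, so the shift is done at int width
    static_assert(std::is_same_v<decltype(i), int>, "byteOne << 8 has type int");
    static_assert(sizeof(byteOne << 8) == sizeof(int), "the expression has int's width");
    (void)i; // silence the unused-variable warning
}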