To read bit groups, I've written a small function for myself that reads a bit group and returns its value. I actually expected the appended example to overflow in the following expression:
(BitGroupMask << BitGroupPosition)
According to the standard, intermediate results are computed as int, so shifting the 4-bit mask left by 60 positions should not fit into the intermediate result. However, the result is correct for all values I tested. Apparently the compiler takes the largest data type in the entire expression into account; at least, that is my guess.
My question: is this behavior compiler-dependent, or is it defined by the C++ standard?
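To make my expectation concrete, here is a minimal check of the promoted type (the names mask and pos are mine, not from the function below); on my understanding, both shift operands undergo integral promotion, so the expression has type int:

#include <cstdint>
#include <type_traits>

int main()
{
    uint8_t mask{0b1111};
    uint8_t pos{60};
    // Both operands of << are promoted before the shift, so the
    // expression has type int, not uint8_t:
    static_assert(std::is_same_v<decltype(mask << pos), int>);
    return 0;
}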
Code:
#include <iostream>
#include <bitset>
#include <cstdint>   // for uint64_t, uint8_t

using namespace std;

uint64_t Variable{0b1011111111111111111111111111111111111111111111111111111111111111ULL};
uint8_t GMask{0b1111};  // 4-bit group mask
uint8_t GPos{60};       // group starts at bit 60

// Extract the bit group selected by BitGroupMask at BitGroupPosition.
template <typename VarType, typename MaskType>
inline VarType readBitGroup(VarType Var, MaskType BitGroupMask, MaskType BitGroupPosition)
{
    //return (VarType)((Var & ((VarType)BitGroupMask << BitGroupPosition)) >> BitGroupPosition);
    return ((Var & (BitGroupMask << BitGroupPosition)) >> BitGroupPosition);
}

int main()
{
    cout << std::bitset<64>(Variable) << std::endl;
    cout << std::bitset<4>(readBitGroup(Variable, GMask, GPos)) << std::endl;
    return 0;
}
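For comparison, the commented-out line casts the mask to VarType before shifting, so the shift is carried out in the 64-bit type instead of the promoted int. A standalone sketch of that variant (the name readBitGroupCast is mine):

#include <iostream>
#include <bitset>
#include <cstdint>

// Same extraction, but the mask is widened to VarType *before*
// the shift, so the shift happens in the 64-bit type.
template <typename VarType, typename MaskType>
inline VarType readBitGroupCast(VarType Var, MaskType BitGroupMask, MaskType BitGroupPosition)
{
    return static_cast<VarType>(
        (Var & (static_cast<VarType>(BitGroupMask) << BitGroupPosition)) >> BitGroupPosition);
}

int main()
{
    uint64_t Variable{0b1011111111111111111111111111111111111111111111111111111111111111ULL};
    std::cout << std::bitset<4>(readBitGroupCast(Variable, uint8_t{0b1111}, uint8_t{60})) << std::endl;
    return 0;
}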