I'm reading CLR via C# by Jeffrey Richter, and on page 115 there is an example of an overflow resulting from arithmetic operations on primitives. Can someone please explain it?
    Byte b = 100;
    b = (Byte)(b + 200); // b now contains 44 (or 0x2C in hex).
I understand that an overflow should occur, since Byte is an unsigned 8-bit value, but why does the result equal exactly 44?
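To make the intermediate values visible, here is a small self-contained repro I put together; the sum variable and the Console.WriteLine calls are my additions, not from the book:

    using System;

    class OverflowDemo
    {
        static void Main()
        {
            Byte b = 100;
            // In C#, byte operands are promoted to int before arithmetic,
            // so the addition itself does not overflow.
            int sum = b + 200;
            Console.WriteLine(sum);             // prints 300
            b = (Byte)sum;                      // the cast back to Byte is where the value changes
            Console.WriteLine(b);               // prints 44
            Console.WriteLine(b.ToString("X")); // prints 2C
        }
    }

So the addition produces 300 as an int, and it is the cast back to Byte that turns it into 44, which is what I'd like explained.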