Given this code:
int x = 20000;
int y = 20000;
int z = 40000;
// Why is it printing WTF? Isn't 40,000 > 32,767?
if ((x + y) == z) Console.WriteLine("WTF?");
And knowing an int can hold −32,768 to +32,767, why doesn't this cause an overflow?
In C#, the int type is mapped to the Int32 type, which is always 32 bits, signed.
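You can verify the size and range yourself; a minimal sketch (the class and variable names are my own, for illustration):

using System;

class IntRangeDemo
{
    static void Main()
    {
        Console.WriteLine(sizeof(int));   // 4 bytes = 32 bits, on every platform
        Console.WriteLine(int.MinValue);  // -2147483648
        Console.WriteLine(int.MaxValue);  // 2147483647
        Console.WriteLine(20000 + 20000); // 40000 fits easily in that range
    }
}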
Even if you use short, it still won't overflow, because short + short returns an int by default. If you cast that int back to short, as in (short)(x + y), you'll get a wrapped-around value; you won't get an exception, though. You can use checked behavior to get an exception:
using System;

namespace TestOverflow
{
    class Program
    {
        static void Main(string[] args)
        {
            short x = 20000;
            short y = 20000;
            short z;

            Console.WriteLine("Overflowing with default behavior...");

            // x + y is computed as int (40000); the cast back to short
            // silently wraps around in the default unchecked context.
            z = (short)(x + y);

            Console.WriteLine("Okay! Value is {0}. Press any key to overflow " +
                              "with 'checked' keyword.", z);
            Console.ReadKey(true);

            // The same cast in a checked context throws OverflowException.
            z = checked((short)(x + y));
        }
    }
}
You can find information about checked (and unchecked) on MSDN. It basically boils down to performance: checking for overflow is a little bit slower than ignoring it, which is why the default behavior is usually unchecked. But I bet that in some compilers/configurations you'll get an exception on the first z assignment too.
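Note that checked and unchecked also come in block form, and the project-wide default can be flipped with the /checked compiler option. A minimal sketch (class and variable names are my own):

using System;

class CheckedDemo
{
    static void Main()
    {
        int max = int.MaxValue;

        unchecked
        {
            // Wraps silently to int.MinValue; no exception.
            Console.WriteLine(max + 1);
        }

        checked
        {
            // The same expression now throws OverflowException at run time.
            Console.WriteLine(max + 1);
        }
    }
}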
http://msdn.microsoft.com/en-us/library/5kzh1b5w.aspx
Type: int Range: -2,147,483,648 to 2,147,483,647
While everyone is correct in saying that an int on a 32-bit machine is most likely 32 bits wide, there is a glaring flaw in your methodology.
Let's assume that int were 16 bits. You're assigning a value that overflows z, so z itself is overflowed. When you calculate x + y you're also overflowing the int type, and it's very likely that both cases would overflow to the same value, meaning you'd hit your equality regardless. (This is probably compiler dependent; I'm not quite sure whether x + y would be promoted.)
The correct way to do your experiment would be for z to have a larger data type than x and y. For example (sorry for the plain C; I'm not much of a C# person, but hopefully it illustrates the methodology):
#include <stdio.h>
#include <limits.h>

int main(void)
{
    int x = INT_MAX;
    int y = INT_MAX;
    int sum = x + y;                /* overflows int */
    long long z = (long long)x + y; /* widen first so z holds the true sum */
    if (sum == z)
        printf("Why didn't sum overflow?!\n");
}
Comparing sum and z is important, as comparing x + y and z directly may still come out fine depending on how the compiler handles promotion.
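For completeness, here is the same experiment in C# (a sketch of my own, not part of the original answer; in C#, int + int stays int and simply wraps in the default unchecked context):

using System;

class PromotionDemo
{
    static void Main()
    {
        int x = int.MaxValue;
        int y = int.MaxValue;
        int sum = x + y;      // int + int stays int: wraps around to -2
        long z = (long)x + y; // widen one operand first: z holds 4294967294

        if (sum == z)
            Console.WriteLine("Why didn't sum overflow?!");
        else
            Console.WriteLine("sum wrapped to {0}; the true sum is {1}", sum, z);
    }
}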
Because an int in .NET is a signed 32-bit number with a range of -2,147,483,648 to 2,147,483,647.
Reference: http://msdn.microsoft.com/en-us/library/5kzh1b5w(VS.80).aspx
The int keyword maps to the .NET Framework Int32 type, which can hold integers in the range from -2,147,483,648 to 2,147,483,647.
First of all, your code is well within the range of int. And even if it were not, the comparison would not complain: in the default unchecked context the intermediate x + y simply wraps around, and you are never assigning the result back to a variable in your if check.
Note that even if you were doing x * y with values like 200,000, the result would not be promoted to long. In C#, int * int is still int, so the product silently wraps around, and it is the wrapped value that gets compared to z. (Implicit widening from a smaller to a larger primitive type does exist, but only when one operand is already the larger type.)
int x = 200000; // in your code it was 20000
int y = 200000; // in your code it was 20000
int z = 40000;

// x + y = 400,000, which still fits in int, so nothing overflows here.
// Since 400,000 is not equal to 40,000, "WTF?" is not printed.
if ((x + y) == z) Console.WriteLine("WTF?");

// x * y is 40,000,000,000, which does NOT fit in int. In the default
// unchecked context it wraps around to 1,345,294,336, which is >= 40,000,
// so "WTF MULTIPLY?" is printed.
if ((x * y) >= z) Console.WriteLine("WTF MULTIPLY?");

// This compiles fine (the operands are variables, not constants), but at
// run time x receives the wrapped value 1,345,294,336, not 40,000,000,000.
x = x * y;
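If you actually want the full 40,000,000,000, widen one operand before the multiply; a minimal sketch (my own addition, with fresh illustrative variables):

int a = 200000;
int b = 200000;
long product = (long)a * b;   // widen one operand: 64-bit multiply, no wraparound
Console.WriteLine(product);   // 40000000000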