I'm new to Python, and I was reading this page when I saw a weird statement:
if n+1 == n:  # catch a value like 1e300
    raise OverflowError("n too large")
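For context, the check sits inside a small factorial example. I'm paraphrasing the surrounding code from memory here, so treat it as a sketch rather than a verbatim copy of the docs:

import math

def factorial(n):
    if not n >= 0:
        raise ValueError("n must be >= 0")
    if math.floor(n) != n:
        raise ValueError("n must be exact integer")
    if n + 1 == n:  # catch a value like 1e300
        raise OverflowError("n too large")
    result = 1
    factor = 2
    while factor <= n:
        result *= factor   # the multiplication I ask about below
        factor += 1
    return result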
A number that equals a number greater than itself?! I sense a disturbance in the Force.
I know that in Python 3 integers don't have a fixed byte length, so there is no integer overflow the way there is with C's int. But of course memory can't store infinitely large numbers.
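For example, in a quick CPython 3 interpreter session (my own check, not something from the linked page), a value past the 64-bit range just keeps growing instead of wrapping around:

n = 2 ** 64                          # already past the range of C's 64-bit unsigned int
print(n + 1)                         # 18446744073709551617, no wraparound
print(n + 1 == n)                    # False: the sum really is a larger integer
print(10 ** 300 + 1 == 10 ** 300)    # False even for a 301-digit number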
I think that's why the result of n + 1 can be the same as n: Python can't allocate more memory to perform the addition, so it is skipped and the comparison is effectively n == n, which is true. Is that correct?
If so, this could silently produce wrong results. Why doesn't Python raise an error when an operation isn't possible, the way C++ throws std::bad_alloc?
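From what I can tell, Python does signal genuine allocation failure with a MemoryError rather than silently skipping the operation. A hedged sketch (don't run this casually: depending on the machine it may raise MemoryError, grind for a long time, or get killed by the OS):

try:
    huge = 1 << (1 << 40)   # the result would need on the order of 100+ GiB
except MemoryError:
    # Allocation failure surfaces as an exception, not as a silently wrong value.
    print("could not allocate an integer that large")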
Even if n is not too large and the check evaluates to false, result (because of the repeated multiplication) would need far more bytes than n itself. Could result *= factor fail for the same reason?
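For a sense of scale, ordinary big-int multiplication copes with results far larger than the inputs without any special handling (again just my own quick check):

import math

result = math.factorial(10_000)   # an integer with tens of thousands of decimal digits
print(len(str(result)))           # how many digits the result has
print(result * result > result)   # True: squaring it still works, and comparisons stay exact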
I found it in the official Python documentation. Is it really the correct way to check for big integers / possible integer "overflow"?
Comments on the question point out that the check is really about floats, not ints: 1e300 is a float, and an int would have to be seriously huge, for memory reasons, before n + 1 == n could ever hold. It does catch large floats, though, for the obvious reason that adding 1 to them no longer changes the value, so for the check to trigger n must be a float; it is not that n + 1 is skipped and Python only sees n == n. You can still use the ** operator to get arbitrarily large integers: the check rejects 1e16 but accepts 10 ** 5000000, which seems bad, because 1e16 < 10 ** 50. So that check only guards against float inputs (there is no implicit conversion).
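A quick interpreter check of that claim (my own CPython 3 session, using 10 ** 5000 instead of the commenters' 10 ** 5000000 just to keep it fast):

print(1e300 + 1 == 1e300)             # True: the float can't represent the +1
print(1e16 + 1 == 1e16)               # True: above 2**53 a float can't hold every integer
print(10 ** 16 + 1 == 10 ** 16)       # False: the int version is exact
print(10 ** 5000 + 1 == 10 ** 5000)   # False: still exact, just a much bigger int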