
I am reading Douglas Crockford's book, JavaScript: The Good Parts, and he says:

JavaScript has a single number type. Internally, it is represented as 64-bit floating point, the same as Java’s double. Unlike most other programming languages, there is no separate integer type, so 1 and 1.0 are the same value. This is a significant convenience because problems of overflow in short integers are completely avoided...

I am not overly familiar with other languages, so I would like a bit of an explanation. I can understand why 64 bits help, but his statement also seems to concern the lack of separate float and double types.

What would be an example (pseudocode, perhaps) of a short-integer overflow situation that won't occur in JS?

  • Integers in JS can be from -(2^53 - 1) to (2^53 - 1), effectively a signed 54-bit integer (not quite really, but that's not relevant here)... short integers are 16-bit, and 54 bits is bigger than 16 bits, so no overflow problems. Commented Sep 6, 2016 at 3:47
  • Example for a signed short: 32767 + 1 is 32768 in JS; in other languages it's -32768. Commented Sep 6, 2016 at 3:48
  • @JaromandaX ... should that be -(2^53 + 1)? I don't know... just merely curious. Commented Sep 6, 2016 at 3:49
  • @rnevius - no, I have it right - basically it's +/-(2^53 - 1). Commented Sep 6, 2016 at 3:49
  • @JaromandaX There are many integers outside that range that can be represented too (most representable integers are outside that range), but that is the largest range within which all integers can be represented. Commented Sep 6, 2016 at 3:52

1 Answer


Suppose you had an 8-bit unsigned number.

Here is a selection of decimal and binary representations:

  1: 00000001
  2: 00000010
 15: 00001111
255: 11111111

If you have 255 and add 1, what happens? There are no more bits, so it wraps around to

0: 00000000

Here's a demonstration in C# using uint (an unsigned 32-bit integer):

using System;

public class Program
{
    public static void Main()
    {
        uint n = 4294967294;
        for(int i = 0; i < 4; ++i)
        {
            Console.WriteLine("n = {0}", n);
            n = n + 1;
        }

    }
}

This will output:

n = 4294967294
n = 4294967295
n = 0
n = 1

This is the problem you don't get in JavaScript.


You get different problems.

For example:

var n = 9007199254740991;
var m = n + 1;
var p = m + 1;
alert('n = ' + n + ' and m = ' + m + ' and p = ' + p);

You will see:

n = 9007199254740991 and m = 9007199254740992 and p = 9007199254740992

Rather than wrapping around, your number representations will shed accuracy.
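For reference, modern JavaScript (ES2015 and later) names this boundary explicitly, so you can check it without memorizing the constant:

```javascript
// 2^53 - 1 is the largest n such that every integer in [-n, n]
// is exactly representable as a 64-bit float.
console.log(Number.MAX_SAFE_INTEGER);           // 9007199254740991
console.log(Number.isSafeInteger(2 ** 53 - 1)); // true
console.log(Number.isSafeInteger(2 ** 53));     // false: precision loss begins here
```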


Note that this 'shedding accuracy' behavior is not unique to JavaScript; it's what you'd expect from any floating-point data type. Another .NET example:

using System;

public class Program
{
    public static void Main()
    {
        float n = 16777214; // 2^24 - 2
        for(int i = 0; i < 4; ++i) 
        {
            Console.WriteLine("n = {0}", n.ToString("0"));
            Console.WriteLine("(n+1) - n = {0}", (n+1)-n);
            n = n + 1;                
        }
    }
}

This will output:

n = 16777210
(n+1) - n = 1
n = 16777220
(n+1) - n = 1
n = 16777220
(n+1) - n = 0
n = 16777220
(n+1) - n = 0
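The same single-precision cliff can be reproduced in JavaScript with Math.fround, which rounds its argument to the nearest 32-bit float (a sketch, not part of the answer's original code):

```javascript
// Math.fround rounds to the nearest single-precision (32-bit) float.
let n = Math.fround(16777216);          // 2^24: exact integers end here for floats
console.log(Math.fround(n + 1) === n);  // true: 2^24 + 1 rounds back down to 2^24
console.log(Math.fround(n - 1) === n);  // false: 2^24 - 1 is still exactly representable
```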