
I am testing different methods to initialise a large JavaScript array with zeros. So far a simple for loop with push(0) seems to outperform the other approaches by far (see http://jsperf.com/initialise-array-with-zeros), but I have doubts about the validity of this test.

In practice you would create such a large array only once and cache it, so that later when you need a large initialised array again you can simply slice it. Therefore I believe the most important evaluation is the time it takes the first time this code is executed, rather than an average over many trials.

Does anyone disagree? Or does anybody know how/where I can test the timings of only one round?

Edit: In response to some misconceptions as to the rationale of allocating an array with so many zeros I would like to clarify two things.

  1. There will be no sparsity. I need to create more than one large array and use them for computations. These copies will be filled with floats and the chance for a float to be exactly zero is negligible.
  2. Not all computations are performed sequentially over the array. I believe that a function that generates the array in the process would be inefficient compared to overwriting values in an array that is passed by reference (see e.g. gl-matrix.js).
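The pass-by-reference style mentioned in point 2 (as in gl-matrix.js) can be sketched as below; the function name and vector size are illustrative, not taken from gl-matrix itself:

```javascript
// Write results into a caller-supplied output array instead of
// allocating a new one on every call (the gl-matrix style).
function addVec(out, a, b) {
  for (var i = 0; i < a.length; i++) out[i] = a[i] + b[i];
  return out;
}

var out = [0, 0, 0];
addVec(out, [1, 2, 3], [4, 5, 6]); // out is now [5, 7, 9]
```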

My solution is therefore to create one large zero-filled array once and then take a slice() whenever a new array is needed, then pass that copy by reference to any function to work with it. Slice is super-duper-mega fast in any browser.
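The create-once-then-slice pattern described above can be sketched as follows; the pool size and helper name are made up for illustration:

```javascript
// Build the zero-filled pool once, up front.
var POOL_SIZE = 1000000; // illustrative size
var ZERO_POOL = [];
for (var i = 0; i < POOL_SIZE; i++) ZERO_POOL.push(0);

// slice() returns a fresh copy, so callers can overwrite it
// freely without touching the cached pool.
function makeZeroArray(n) {
  return ZERO_POOL.slice(0, n);
}

var buf = makeZeroArray(5); // a new, independent zero-filled array
```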

Now, although you may still have concerns about why I want to do this, what I am really interested in is whether it is at all possible to evaluate the performance of different initialisation methods on the first run. I want this timing because in my situation I will certainly only run this once.

And yes, my jsperf code likely misses some solutions. So if you have an approach that I didn't think of, please feel free to add it! Thanks!

  • Do you really need a million zeros or are you really using a sparse array with zero as the default value? Commented Jun 9, 2012 at 18:39
  • I don't understand why people always ask "why" when the question is clearly concerned with the situation as it is, but anyway, I have very big matrices because I work with trimeshes with quite a few vertices. Many operations work quicker if you pass the result array by reference. This approach is quicker than returning a newly created array from each function separately. The exact number of one million is arbitrary though. It might have been 100,000, but then again, what's the difference? Commented Jun 9, 2012 at 18:52
  • @Paul it's not because I don't want you to do it; often it helps to understand the specifics of the question when the larger context is known. It also may help find a good solution to the real problem rather than a perceived problem. No offense meant. Commented Jun 9, 2012 at 18:56
  • Passing an array by reference has nothing to do with filling it with data. You can still pass a reference to a completely empty or sparse array and treat any missing value as 0. You're not solving your problem; you're trying to solve a problem that you've created with what you perceive as a "solution". Commented Jun 9, 2012 at 18:58
  • @OlegV.Volkov Pointy. Thanks for your input. However, regarding "you're trying to solve a problem that you've created with what you perceive as a solution", one could equally reason that "you are not answering the question but rather trying to answer a question that you've created with what you perceive as an answer". Seriously, I am asking for a performance evaluation at first run and you come up with stuff about empty arrays, which has nothing to do with this. Anyway, updating values in an already-filled array IS quicker than filling an empty array. Just so you know. (Sorry to sound harsh; too few chars.) Commented Jun 9, 2012 at 20:56

3 Answers


Testing the operation only once is very complicated, as the performance varies a lot depending on what else the computer is doing. You would have to run that single test a lot of times, and reset to the same conditions between each test. The reason that jsperf runs the test a lot of times is to get a good average to weed out the anomalies.
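If you still want a rough number for a single run, a minimal sketch looks like the following (using Date.now(); variable names are illustrative), with the caveat above that one sample is mostly noise:

```javascript
// One-shot timing sketch. A single sample is dominated by whatever
// else the machine and the JS engine happen to be doing, so treat
// the result as an anecdote, not a benchmark.
var numzeros = 1000000;

var start = Date.now();
var zeros = [];
for (var i = 0; i < numzeros; i++) zeros.push(0);
var elapsed = Date.now() - start; // milliseconds, coarse resolution
```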

You should test this in different browsers to see which method is best overall. You will see that the results vary widely.

In Internet Explorer, the fastest method is actually neither of the ones you tested, but a simple loop that assigns the zeros:

for (var i = 0; i < numzeros; i++) zeros[i] = 0;

3 Comments

Thanks for the explanation. I just would have thought that because jsperf aggregates over all visitors, testing the first run would still make sense. Apparently not. It's funny you mention IE. I didn't care to check at first, but now that I do, IE8 (I only have 8 to test) runs out of memory :-) and it is overall slow. So it is good to know I shouldn't be wanting to run this on IE. @jahroy if you look at the benchmark you can see that other approaches are much much faster.
I can imagine another way: for (var i = 0; i <= max_iter; arr[i] = i, i++);
@Alexander: You can make many variations, but it's basically the same loop. For example you don't need to do the assignment and increment in separate statements: for (var i=0; i<=max_iter; arr[i++] = i);

Starting with ES6, you can use fill like:

var totals = [].fill.call({ length: 5 }, 0);

1 Comment

This doesn't create an Array; it creates a plain object whose keys are the indexes of the array.
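For comparison, a variant that does yield a real Array (assuming an ES6 environment with Array.prototype.fill) would be:

```javascript
// new Array(5) creates a real (initially sparse) Array of length 5;
// fill(0) then writes 0 into every slot.
var totals = new Array(5).fill(0);
```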

There's no practical task that would amount to "initialise javascript array with zeros", especially a big one. You should rethink why you need 0's there. Is this a sparse array and you need 0 as default value? Then just add a conditional on access to set retrieved value to 0 instead of wasting memory and initialization time.

4 Comments

The answer is memory allocation. Overwriting values in an array is quicker than adding values to an array (and incredibly much quicker than the extra "conditional" you suggest). Also, overwriting an existing array saves memory compared to creating a new one to return.
I can confirm that overwriting values is quicker: stackoverflow.com/questions/14867295/…. In this test, prefilling gave an average 50% performance boost. The relevant tests are named test 3 and 4 on that SO page. Also, with normal variables, overwriting existing variables is about 20% quicker, so before a loop it's wise to set all variables. To get a 500% boost compared to a prefilled array, use normal variables: variable = 1 is 5x faster than variable[0] = 1 even if both are zero-filled first...
... So access arrays only when normal variables are not suitable, and if you have to access an array member many times, use a normal variable as a temp var (jsperf.com/read-write-array-vs-variable).
@Timo, as already answered in the question you've linked, both your tests and your approach are flawed. Please just stop trying to outwit JS engines and let their engineers optimize for you.
