107

When I run my code, Node.js throws a "RangeError: Maximum call stack size exceeded" exception caused by too many recursive calls. I tried to increase the Node.js stack size with sudo node --stack-size=16000 app, but Node.js crashes without any error message. When I run this again without sudo, Node.js prints 'Segmentation fault: 11'. Is there a way to solve this without removing my recursive calls?

5
  • 4
    Why do you need such deep recursion in the first place? Commented Jan 5, 2014 at 22:36
  • 2
    Please, can you post some code? Segmentation fault: 11 usually means a bug in node. Commented Jan 6, 2014 at 6:33
  • 2
    @Dan Abramov: Why deep recursion? This can be a problem if you wish to iterate over an array or list and perform an async operation on each (e.g. some database operation). If you use the callback from the async operation to move on to the next item, then there will be at least one extra level of recursion for each item in the list. The anti-pattern provided by heinob below stops the stack from blowing out. Commented Dec 11, 2014 at 15:58
  • 1
    @PhilipCallender I didn't realize you were doing async stuff, thanks for clarification! Commented Dec 11, 2014 at 20:18
  • 1
    @DanAbramov Doesn't have to be deep either to crash. V8 doesn't get the chance to clean out stuff allocated on the stack. Functions called earlier which have long since stopped executing might have created variables on the stack which are not referenced anymore but still held in memory. If you are doing any intensive time consuming operation in a synchronous fashion and allocating variables on the stack while you're at it, you're still going to crash with the same error. I got my synchronous JSON parser to crash at a callstack depth of 9. kikobeats.com/synchronously-asynchronous Commented Sep 29, 2016 at 13:50

12 Answers

136

You should wrap your recursive function call into a

  • setTimeout,
  • setImmediate or
  • process.nextTick

call to give Node.js the chance to clear the stack. If you don't do that, and there are many iterations without any real async call in between (or you do not wait for the callback), your RangeError: Maximum call stack size exceeded will be inevitable.

There are many articles concerning "Potential Async Loop". Here is one.

Now some more example code:

// ANTI-PATTERN
// THIS WILL CRASH

var condition = false, // potential means "maybe never"
    max = 1000000;

function potAsyncLoop( i, resume ) {
    if( i < max ) {
        if( condition ) { 
            someAsyncFunc( function( err, result ) { 
                potAsyncLoop( i+1, resume );
            });
        } else {
            // this will crash after some rounds with
            // "stack exceeded", because control is never given back
            // to the event loop
            // -> no GC, so the process appears "dead" ... "VERY BAD"
            potAsyncLoop( i+1, resume ); 
        }
    } else {
        resume();
    }
}
potAsyncLoop( 0, function() {
    // code after the loop
    ...
});

This is right:

var condition = false, // potential means "maybe never"
    max = 1000000;

function potAsyncLoop( i, resume ) {
    if( i < max ) {
        if( condition ) { 
            someAsyncFunc( function( err, result ) { 
                potAsyncLoop( i+1, resume );
            });
        } else {
            // Now the event loop gets the chance to clear the stack
            // after every round because control is handed back.
            // Afterwards the loop continues.
            setTimeout( function() {
                potAsyncLoop( i+1, resume ); 
            }, 0 );
        }
    } else {
        resume();
    }
}
potAsyncLoop( 0, function() {
    // code after the loop
    ...
});

Now your loop may become too slow, because we lose a little time (one event-loop round trip) per iteration. But you do not have to call setTimeout in every round. Normally it is fine to do it every 1000th time. But this may differ depending on your stack size:

var condition = false, // potential means "maybe never"
    max = 1000000;

function potAsyncLoop( i, resume ) {
    if( i < max ) {
        if( condition ) { 
            someAsyncFunc( function( err, result ) { 
                potAsyncLoop( i+1, resume );
            });
        } else {
            if( i % 1000 === 0 ) {
                setTimeout( function() {
                    potAsyncLoop( i+1, resume ); 
                }, 0 );
            } else {
                potAsyncLoop( i+1, resume ); 
            }
        }
    } else {
        resume();
    }
}
potAsyncLoop( 0, function() {
    // code after the loop
    ...
});

14 Comments

There were some good and bad points in your answer. I really liked that you mentioned setTimeout() et al. But there is no need to use setTimeout(fn, 1), since setTimeout(fn, 0) is perfectly fine (so we don't need the setTimeout(fn, 1) every % 1000 hack). It allows the JavaScript VM to clear the stack, and immediately resume execution. In node.js the process.nextTick() is slightly better because it allows node.js to do some other stuff (I/O IIRC) also before letting your callback resume.
I would say it's better to use setImmediate instead of setTimeout in these cases.
@joonas.fi: My "hack" with %1000 is necessary. Doing a setImmediate/setTimeout (even with 0) on every loop is dramatically slower.
Care to update your in-code German comments with English translation...?:) I do understand but others might not be so lucky.
35

I found a dirty solution:

/bin/bash -c "ulimit -s 65500; exec /usr/local/bin/node --stack-size=65500 /path/to/app.js"

It just increases the call stack limit. I don't think this is suitable for production code, but I only needed it for a script that runs once.

2 Comments

Cool trick, although personally I would suggest using correct practices to avoid mistakes and create a more well rounded solution.
For me this was an unblocking solution. I had a scenario where I was running a third party upgrade script of a database and was getting the range error. I was not going to rewrite the third party package but needed to upgrade the database → this fixed it.
10

In some languages this can be solved with tail call optimization, where the recursion call is transformed under the hood into a loop so no maximum stack size reached error exists.

But current JavaScript engines don't support this; it is planned for the next version of the language, ECMAScript 6.

Node.js has some flags to enable ES6 features, but tail calls are not yet available.

So you can refactor your code to use a technique called trampolining, or refactor it to transform the recursion into a loop.
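
For illustration, here is a minimal trampoline sketch (the helper names trampoline, step and sumBelow are made up for this example, not part of any library):

// A trampoline: the "recursive" function returns a thunk (a zero-argument
// function) instead of calling itself directly, and the trampoline keeps
// invoking thunks in a plain loop, so the call stack never grows.
function trampoline(fn) {
    return function (...args) {
        let result = fn(...args);
        while (typeof result === 'function') {
            result = result();
        }
        return result;
    };
}

// "Recursive" sum written in trampolined style.
const sumBelow = trampoline(function step(n, acc = 0) {
    return n === 0 ? acc : () => step(n - 1, acc + n);
});

console.log(sumBelow(1000000)); // prints 500000500000, no RangeError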

2 Comments

Thank you. My recursive call does not return a value, so is there any way to call the function and not wait for the result?
And does the function alter some data, like an array? What does the function do, and what are its inputs/outputs?
5

I had a similar issue. I was using multiple Array.map() calls in a row (around 8 maps at once) and was getting a Maximum call stack size exceeded error. I solved this by changing the maps into for loops.

So if you are using a lot of map calls, changing them to for loops may fix the problem.

Edit

Just for clarity (and probably-not-needed-but-good-to-know info): using .map() causes the array to be prepped (resolving getters, etc.) and the callback to be cached, and it also internally keeps an index into the array (so the callback is provided with the correct index/value). This stacks up with each nested call, and caution is advised even when the calls are not nested, because the next .map() could be called before the first array is garbage collected (if it is collected at all).

Take this example:

var cb;                    // some callback function
var arr1, arr2, arr3;      // each holds some large data set
arr1.map(v => {
    // do something
});
cb(arr1);
arr2.map(v => {
    // do something; even though v is overwritten, and the first array
    // has been passed through, it is still in memory because of the
    // cached calls to the callback function
});

If we change this to:

for (const v of arr1) {    // var/let and for...in work here as well
    // do something
}
cb(arr1);
for (const v of arr2) {
    // do something; here there is no callback function to
    // store a reference for, and the array has already been
    // passed on (gone out of scope), so the garbage collector
    // has an opportunity to remove the array if it runs low on memory
}

I hope this makes some sense (I don't have the best way with words) and helps a few people avoid the head scratching I went through.

If anyone is interested, here is also a performance test comparing map and for loops (not my work).

https://github.com/dg92/Performance-Analysis-JS

For loops are usually better than map, but not than reduce, filter, or find.

3 Comments

A couple of months ago when I read your response, I had no idea the gold you had in your answer. I recently discovered this very same thing for myself, and it's really made me want to unlearn everything I know; it's just hard to think in terms of iterators sometimes. Hope this helps: I wrote an additional example which includes promises as part of the loop and shows how to wait for the response before moving on. Example: gist.github.com/gngenius02/…
I love what you did there (and hope you don't mind if I grab that snippet for my toolbox). I mostly use synchronous code, which is why I usually prefer loops. But that is a gem you've got there as well, and it will most likely find its way onto the next server I work on.
Wow... words can't explain how good this is! I had no idea how intensive map functions were; I just took them for granted because they make your code cleaner and "simpler", but didn't realise the overhead. Since implementing this and (as a result) doing some Googling, I have sped up my process at least 2 to 3 fold. I also used to get some random errors (like JSON circular references) from time to time, but now those errors have gone. I assume somehow they were related, but anyway... thanks again!
1

If you don't want to implement your own wrapper, you can use a queue system, e.g. async.queue, queue.
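
For instance, here is a minimal sketch using async.queue (assuming the async npm package, v3; the per-item work is a placeholder):

const async = require('async');

// Each task is scheduled by the queue, so the call stack is cleared
// between items instead of growing with every recursive callback.
const q = async.queue((item, done) => {
    // ... do the per-item (possibly async) work here ...
    setImmediate(done); // hand control back to the event loop
}, 1); // concurrency 1 processes items strictly in order

for (let i = 0; i < 1000000; i++) {
    q.push(i);
}

q.drain(() => {
    // code after the loop
});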

Comments

1

Regarding increasing the max stack size: on 32-bit and 64-bit machines, V8's memory allocation defaults are, respectively, 700 MB and 1400 MB. In newer versions of V8, memory limits on 64-bit systems are no longer set by V8, theoretically indicating no limit. However, the OS (operating system) on which Node is running can always limit the amount of memory V8 can take, so the true limit of any given process cannot be stated generally.

V8 does, however, make the --max_old_space_size option available, which allows control over the amount of memory available to a process and accepts a value in MB. Should you need to increase the memory allocation, simply pass this option the desired value when spawning a Node process.

It is often an excellent strategy to reduce the available memory allocation for a given Node instance, especially when running many instances. As with stack limits, consider whether massive memory needs are better delegated to a dedicated storage layer, such as an in-memory database or similar.
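
For example, to start a Node process with roughly 4 GB of heap available (the value is in MB; app.js stands in for your entry point):

node --max_old_space_size=4096 app.js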

Comments

1

I thought of another approach using function references that limits call stack size without using setTimeout() (Node.js, v10.16.0):

testLoop.js

let counter = 0;
const max = 1000000000n;  // 'n' signifies a BigInt literal
Error.stackTraceLimit = 100;

const A = () => {
  fp = B;
}

const B = () => {
  fp = A;
}

let fp = B;
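// fp flips between A and B on every call; each call returns immediately, so the stack stays shallow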

const then = process.hrtime.bigint();

for(;;) {
  counter++;
  if (counter > max) {
    const now = process.hrtime.bigint();
    const nanos = now - then;

    console.log({ "runtime(sec)": Number(nanos) / (1000000000.0) })
    throw Error('exit')
  }
  fp()
  continue;
}

output:

$ node testLoop.js
{ 'runtime(sec)': 18.947094799 }
C:\Users\jlowe\Documents\Projects\clearStack\testLoop.js:25
    throw Error('exit')
    ^

Error: exit
    at Object.<anonymous> (C:\Users\jlowe\Documents\Projects\clearStack\testLoop.js:25:11)
    at Module._compile (internal/modules/cjs/loader.js:776:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:787:10)
    at Module.load (internal/modules/cjs/loader.js:653:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
    at Function.Module._load (internal/modules/cjs/loader.js:585:3)
    at Function.Module.runMain (internal/modules/cjs/loader.js:829:12)
    at startup (internal/bootstrap/node.js:283:19)
    at bootstrapNodeJSCore (internal/bootstrap/node.js:622:3)

Comments

1

Preface: for me, the max call stack error wasn't caused by my code. It ended up being a different issue which caused congestion in the flow of the application: because I was trying to add too many items to MongoDB without any configuration changes, the call stack error kept popping up, and it took me a few days to figure out what was going on. That said:

Following up on what @Jeff Lowery answered: I enjoyed this answer so much, and it sped up what I was doing by at least 10x.

I'm new at programming, but I attempted to modularize his answer. Also, I didn't like the error being thrown, so I wrapped it in a do...while loop instead. If anything I did is incorrect, please feel free to correct me.

module.exports = function(object) {
    const { max = 1000000000n, fn } = object;
    let counter = 0;
    let running = true;
    Error.stackTraceLimit = 100;
    const A = (fn) => {
        fn();
        flipper = B;
    };
    const B = (fn) => {
        fn();
        flipper = A;
    };
    let flipper = B;
    const then = process.hrtime.bigint();
    do {
        counter++;
        if (counter > max) {
            const now = process.hrtime.bigint();
            const nanos = now - then;
            console.log({ 'runtime(sec)': Number(nanos) / 1000000000.0 });
            running = false;
        }
        flipper(fn);
        continue;
    } while (running);
};

Check out this gist to see my files and how to call the loop. https://gist.github.com/gngenius02/3c842e5f46d151f730b012037ecd596c

Comments

0

Check that the function you are importing and the one you have declared in the same file do not have the same name.

I will give you an example of this error. In Express (using ES6), consider the following scenario:

import { getAllCall } from '../../services/calls';

let getAllCall = () => {
    return getAllCall().then(res => {
        // do something here
    });
};

module.exports = {
    getAllCall
};

The above scenario will cause the infamous RangeError: Maximum call stack size exceeded error, because the function keeps calling itself until it runs out of call stack.
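
A minimal sketch of one possible fix (the name fetchAllCalls is made up here for illustration): rename the local wrapper so it no longer shadows the import and therefore calls the imported function instead of itself:

import { getAllCall } from '../../services/calls';

// renamed so it no longer shadows the imported getAllCall
let fetchAllCalls = () => {
    return getAllCall().then(res => {
        // do something here
    });
};

module.exports = {
    fetchAllCalls
};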

Most of the time the error is in the code (like the one above). Another way of resolving it is to manually increase the call stack size. This works for certain extreme cases, but it is not recommended.

Hope my answer helped you.

Comments

0

Even if you increase the maximum stack size when starting Node, it may still give you the error. So make sure you stop the existing Node process and start it again with the newly set stack size.

Also make sure the Node instances are started by the same user that was used when upgrading the Node version.

Comments

0

Try not using recursive calls and use a while loop instead.
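
For example, a minimal sketch (countDownRecursive and countDownIterative are made-up names for illustration) of turning a recursive function into an equivalent loop:

// Recursive version: blows the stack for large n.
function countDownRecursive(n) {
    if (n === 0) return;
    countDownRecursive(n - 1);
}

// Iterative version: the loop reuses a single stack frame,
// so there is no "Maximum call stack size exceeded" error.
function countDownIterative(n) {
    while (n > 0) {
        n -= 1;
    }
}

countDownIterative(10000000); // fine even for very large n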

Comments

-8

You can use a for loop.

var items = {1, 2, 3}
for(var i = 0; i < items.length; i++) {
  if(i == items.length - 1) {
    res.ok(i);
  }
}

2 Comments

var items = {1, 2, 3} is not valid JS syntax. How is this related to the question at all?
I think he meant [1,2,3]
