
I have a JSON file containing 914 MB of JSON data. I am loading the file with fs-extra and parsing it, but when I parse it I get this error:

cannot create a string longer than 0x1fffffe8 characters

Below is my code:

        const fs = require('fs-extra');
        const rawdata = fs.readFileSync('src/raw.json');
        const data = JSON.parse(rawdata);

I am running the project with npm, and to run it I have the following command in package.json:

"scripts": {
   
    "start:dev": "cross-env NODE_OPTIONS='--max-old-space-size=4096' ts-node -r tsconfig-paths/register ./src --env=development",
  
  }
  • Your server process doesn't have enough memory. Operating systems impose limits on the resources a single process can consume. There are usually ways to instruct the OS that your process should be given more memory, but without knowing anything else about what you're doing it's impossible to provide more information. Commented Jul 2, 2021 at 18:48
  • I've worked with very large databases and processing a 900MB JSON file is never something I've had to do. For one thing, JSON is a really inefficient storage format. Commented Jul 2, 2021 at 20:17
  • See this Node changelist. Node has a maximum string length of about 512MB, and that cannot be changed. It is part of the Node architecture. Commented Jul 4, 2021 at 19:13
  • Try using a streaming JSON parser instead. Commented Jul 4, 2021 at 19:19
  • Yes, a streaming parser would be good, though I would not be surprised if another memory limit might get involved once the code starts assembling the data structure itself. Again, knowing the application details might allow people to provide suggestions for an architectural change. Commented Jul 4, 2021 at 19:28

2 Answers


0x1fffffe8 is almost exactly 512MB.

The commenters are correct: you are bumping up against a system limit. I agree with @Pointy that it is most likely the Node string length limit; fs-extra has nothing to do with it.

In any case, you're going to have to process that JSON in chunks. Below are different ways to do this.

A: Use a SAX-style JSON parser

You have many parser options. To get you started, here are a couple I found on NPM (a minimal sketch follows the list):

  • BFJ has a walk function that does SAX-style parsing. BFJ is archived, but still has millions of weekly downloads.

  • stream-json
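
Here is a minimal sketch of SAX-style parsing using stream-json's low-level token parser. The token names come from its documented token stream; what you do inside the data handler is entirely up to you:

const fs = require('fs');
const { parser } = require('stream-json');

// Token-level (SAX-style) pass over the file: nothing is ever
// assembled into one giant string or object.
const tokens = fs.createReadStream('src/raw.json').pipe(parser());

tokens.on('data', (token) => {
  // token.name is e.g. 'startObject', 'keyValue', 'stringValue', 'numberValue'
  if (token.name === 'keyValue') {
    // react to the keys and values you care about here
  }
});

tokens.on('end', () => console.log('done'));
tokens.on('error', (err) => console.error(err));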

B: Implement a Node Streams pipeline

Almost certainly your massive JSON data is an array at the root level. This approach uses a parser that can asynchronously process each element in that array individually, or in batches, whichever makes sense. It is based on the very powerful and flexible Node Streams API. A sketch using one of the libraries below follows the list.

ℹ️ If your data isn't a JSON array, but a stream of concatenated JSON objects, then it probably conforms to the JSON Streaming protocol. See option D below.

  • JSONStream lets you filter by path or pattern in its streaming parse. It is archived, but still has millions of weekly downloads.

  • BFJ - in addition to the SAX-style walk function mentioned above, it supports selective object-level streaming:

    match returns a readable, object-mode stream and asynchronously parses individual matching items from an input JSON stream.

  • stream-json has a Pick pipeline operator that can pick desired items out of a stream, ignoring the rest, along with many other operators.

  • jsonparse
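
As a concrete sketch of option B using stream-json (plus its companion package stream-chain): the top-level "data" key and the per-element handling here are assumptions about your file's shape, matching the example in option C below.

const fs = require('fs');
const { chain } = require('stream-chain');
const { parser } = require('stream-json');
const { pick } = require('stream-json/filters/Pick');
const { streamArray } = require('stream-json/streamers/StreamArray');

// Parse { "data": [ ... ] } without ever holding the whole file in memory:
// pick the "data" array, then emit one fully parsed element at a time.
const pipeline = chain([
  fs.createReadStream('src/raw.json'),
  parser(),
  pick({ filter: 'data' }),   // assumes the array lives under a top-level "data" key
  streamArray(),
]);

pipeline.on('data', ({ key, value }) => {
  // `value` is one parsed array element; process it or collect it into batches here
});

pipeline.on('end', () => console.log('done'));
pipeline.on('error', (err) => console.error(err));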

C: Manual chunking

🚩 This will likely be the most efficient if your data supports it.

This option is like B, except instead of employing a streaming parser, you do the chunking yourself. This is easy to do if the elements of the JSON data array are very regular, e.g. each element occupies exactly N lines, so you can extract them without parsing.

For example, if your data looked like this:

{
  data: [
    { name: ...,
      address: ... },
    { name: ...,
      address: ... },
    { name: ...,
      address: ... },
    { name: ...,
      address: ... }
  ]
}

Your process would be something like this:

  1. Use a buffered reader to read the file. (DO NOT synchronously read it all into memory)
  2. Discard the first two lines
  3. Read the file in chunks, two lines at a time
  4. If a chunk starts with {, remove any trailing comma and parse each individual {name:..., address:...} record.
  5. If it doesn't, you have reached the end of the array. Discard the rest of the file or hand it off to some other process if you expect some other data there.

The details will depend on your data.
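
A rough sketch of that process, assuming the layout above is real JSON (quoted keys), each record spans exactly two lines, and handleRecord is a hypothetical per-record callback you supply:

const fs = require('fs');
const readline = require('readline');

async function processFile(path) {
  // Buffered, line-by-line read; the whole file is never held in memory.
  const rl = readline.createInterface({
    input: fs.createReadStream(path),
    crlfDelay: Infinity,
  });

  let skipped = 0;
  let pending = [];

  for await (const line of rl) {
    if (skipped < 2) { skipped++; continue; }   // discard `{` and `"data": [`

    pending.push(line);
    if (pending.length < 2) continue;           // each record occupies two lines

    const chunk = pending.join('\n').trim();
    pending = [];

    if (!chunk.startsWith('{')) break;          // reached `]` - end of the array

    // Strip the trailing comma between records, then parse only this record.
    handleRecord(JSON.parse(chunk.replace(/,\s*$/, '')));
  }
}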

D: Use a JSON Streaming protocol parser

The JSON Streaming protocol is simply multiple JSON objects concatenated in a single stream. If that's what you have, you should use a parser that supports this protocol.
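
For example, stream-json can consume that format too: its parser has a jsonStreaming option that accepts a sequence of top-level values, which you can pair with its StreamValues streamer. A sketch, assuming the file really is concatenated standalone JSON objects:

const fs = require('fs');
const { chain } = require('stream-chain');
const { parser } = require('stream-json');
const { streamValues } = require('stream-json/streamers/StreamValues');

const pipeline = chain([
  fs.createReadStream('src/raw.json'),
  parser({ jsonStreaming: true }),   // accept many top-level JSON values, not just one
  streamValues(),
]);

pipeline.on('data', ({ value }) => {
  // `value` is one complete top-level JSON object from the stream
});

pipeline.on('end', () => console.log('done'));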


7 Comments

What do you mean they don't work? Please explain in detail. I can't help you with a comment like that.
Your case is a very common programming problem. It's not special at all. You are definitely not doing something right. Are you STILL trying to load it into memory as ONE string? If you are, then you aren't understanding what we've all said, nor do you understand my solution for you, and you will keep getting the same error no matter what you try. You need to PROCESS it in chunks, NOT load it into a string using an alternate JSON parser.
Tried "most" of them? stream-json linked in this answer works with huge JSON files.
0x1fffffe8 bytes is neither 512 MB nor 512 MiB. It’s some bytes shy of 512 MiB.
And also, it’s about the length of a string, which is counted in UTF-16 units, not the byte count.

The V8 string size limit

import NodeBuffer from "node:buffer";

NodeBuffer.constants.MAX_STRING_LENGTH

Represents the largest length that a string primitive can have, counted in UTF-16 code units.

This value may depend on the JS engine that is being used.

The limitation imposed on a string is not about the byte count.


Node.js uses the V8 engine. A JavaScript string consists of 2-octet (uint16_t) code units, and in the V8 engine the maximum number of such units in a string (string.length) is

  • (2²⁸-16) (0x0FFFFFF0 2-octet code units / about 0.5 GiB) on a 32-bit system
  • and (2²⁹-24) (0x1FFFFFE8 2-octet code units / about 1 GiB) on a 64-bit system.
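
A quick way to observe the limit on a 64-bit build (a sketch; the first repeat call needs roughly 0.5 GiB of memory, since V8 can store this ASCII string with one byte per character):

import NodeBuffer from "node:buffer";

const max = NodeBuffer.constants.MAX_STRING_LENGTH;   // 536870888, i.e. 2 ** 29 - 24

'a'.repeat(max);        // succeeds, given enough memory
'a'.repeat(max + 1);    // throws: RangeError: Invalid string length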

node/lib/buffer.js

const constants = ObjectDefineProperties({}, {
  MAX_LENGTH: {
    __proto__: null,
    value: kMaxLength,
    writable: false,
    enumerable: true,
  },
  MAX_STRING_LENGTH: {
    __proto__: null,
    value: kStringMaxLength,
    writable: false,
    enumerable: true,
  },
});

node/src/node_buffer.cc

  target
      ->Set(context,
            FIXED_ONE_BYTE_STRING(isolate, "kStringMaxLength"),
            Integer::New(isolate, String::kMaxLength))
      .Check();

v8/include/v8-primitive.h

/**
 * A JavaScript string value (ECMA-262, 4.3.17).
 */
class V8_EXPORT String : public Name {
 public:
  static constexpr int kMaxLength =
      internal::kApiSystemPointerSize == 4 ? (1 << 28) - 16 : (1 << 29) - 24;

v8/include/v8-internal.h

/**
 * Configuration of tagging scheme.
 */
const int kApiSystemPointerSize = sizeof(void*);

