
I'm writing my first application in Node.js. I am trying to read some data from a file where the data is stored in the JSON format.

I get this error:

SyntaxError: Unexpected token  in JSON at position 0

at Object.parse (native)

Here is this part of the code:

//read saved addresses of all users from a JSON file
fs.readFile('addresses.json', function (err, data) {
    if (data) {
        console.log("Read JSON file: " + data);
        storage = JSON.parse(data);
    }
});

Here is the console.log output (and I checked the .json file itself, it's the same):

Read JSON file: {

    "addresses": []

}

That seems like correct JSON to me. Why does JSON.parse() fail, then?

  • Line breaks are not enabled in JSON.parse argument. Commented May 25, 2017 at 8:51
  • @MysterX But the syntax error is at position 0? And JSON.parse() doesn't seem to have an argument to enable line breaks. Commented May 25, 2017 at 9:05
  • You need to set an encoding; it's because of the BOM. Commented May 25, 2017 at 9:20

7 Answers

29

You have a strange character at the beginning of the file:

data.charCodeAt(0) === 65279

That character code is U+FEFF, the byte order mark (BOM).

I would recommend:

fs.readFile('addresses.json', function (err, data) {
    if (data) {
        console.log("Read JSON file: " + data);
        // data is a Buffer here; convert it to a string and strip the stray character
        storage = JSON.parse(data.toString().trim());
    }
});

4 Comments

I was having a similar issue and adding data.trim() before running JSON.parse() fixed it.
@PrashantTapase trim is not necessarily needed, but if you expect that the input for JSON.parse may be corrupted by characters that trim filters out, you can apply it. One could even do it only when required: try { json_data = JSON.parse(data); } catch (ex) { json_data = JSON.parse(data.trim()); }
why trim is needed! :) +1 my problem solved. thank you! @bluehipy
I get TypeError: data.trim is not a function. Works without it though, after copying the JSON data into a TextEdit (MacOS) file and saving it again as a file.json with utf-8
10

JSON.parse() does not allow trailing commas, so you need to get rid of them:

JSON.parse(JSON.stringify(data));

You can find more about it here.
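The trailing-comma restriction itself is easy to confirm (a quick sketch, unrelated to the asker's actual file):

```javascript
JSON.parse('{"addresses": []}');       // parses fine

try {
    JSON.parse('{"addresses": [],}');  // trailing comma: throws SyntaxError
} catch (e) {
    console.log(e.name); // SyntaxError
}
```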

3 Comments

Weird, I don't have a single comma in the file?
It means this: "addresses": []
This allows my UTF-16 Little Endian file to be loaded. My program no longer crashes. But it is pure garbage. And I realize I am the one that caused it. By using stringify instead of converting to UTF8. Looping, it had 3,600 key/value pairs. Which is about the same number of characters in my file.
5

It might be the BOM[1]. I have done a test by saving a file with content {"name":"test"} with UTF-8 + BOM, and it generated the same error.

> JSON.parse(fs.readFileSync("a.json"))
SyntaxError: Unexpected token  in JSON at position 0

And based on a suggestion here [2], you can replace it or drop it before you call JSON.parse().

For example:

var storage = {};

fs.readFile('a.json', 'utf8', function (err, data) {
    if (data) {
        console.log("Read JSON file: " + data);
        console.log(typeof(data))
        storage = JSON.parse(data.trim());
    }
});

or

var storage = {};
fs.readFile('a.json', function (err, data) {
    if (data) {
        console.log("Read JSON file: " + data);
        console.log(typeof(data))
        storage = JSON.parse(data.toString().trim());
    }
})

You can also remove the first 3 bytes (for UTF-8) by using Buffer.slice().

3 Comments

This left a space after every character in each of my keys. And a space after every character in each of my string values.
@TamusJRoyce BOM is not of length 3 if another encoding is used. You can search Wikipedia for "byte order mark" to see all possible BOMs.
My answer below shows a sample of the UTF-16 output. Removing the question mark icon (how Notepad++ shows the encoding you are mentioning) / byte marker in code doesn't solve the encoding issue.
4

Try it like this:

fs.readFile('addresses.json', 'utf-8', function (err, data) {
    if (data) {
        console.log("Read JSON file: " + data);
        storage = JSON.parse(data);
    }
});

It's because of the BOM, which requires an encoding to be set when reading the file. This has been raised as an issue in the Node.js repository on GitHub:

https://github.com/nodejs/node-v0.x-archive/issues/186

1 Comment

That was my problem. Opening and saving the file as UTF-8 only, worked.
2

To further explain @Luillyfe's answer:

Ah-ha! fs.readFileSync("data.json") returns a JavaScript object!

Edit: Below is incorrect...But summarizes what one might think at first!

I had thought that was a string. So if the file was saved as UTF-8/ASCII, it would probably not have an issue? The JavaScript object returned from readFileSync would convert to a string JSON.parse can parse? No need to call JSON.stringify?

I am using powershell to save the file. Which seems to save the file as UTF-16 (too busy right now to check). So I get "SyntaxError: Unexpected token � in JSON at position 0."

However, JSON.stringify(fs.readFileSync("data.json")) does convert that returned file object into a string that JSON.parse can parse.

The clue for me was my JSON file contents looking like the below (after logging it to the console):

�{ " R o o m I D _ L o o k u p " :   [
         {
                 " I D " :     1 0 ,
                 " L o c a t i o n " :     " f r o n t " ,
                 " H o u s e " :     " f r o n t r o o m "
         }
}

That doesn't seem like something a file would load into a string...

Incorrect being (this does not crash... but instead converts the JSON file to gibberish!):

const jsonFileContents = JSON.parse(JSON.stringify(fs.readFileSync("data.json")));

I can't seem to find this anywhere. But it makes sense!

Edit: Um... That object is just a buffer. Apologies for the above!

Solution:

const fs = require("fs");

function GetFileEncodingHeader(filePath) {
    const fd = fs.openSync(filePath, 'r');
    const bufferSize = 2;
    const buffer = Buffer.alloc(bufferSize); // new Buffer() is deprecated
    const readBytes = fs.readSync(fd, buffer, 0, bufferSize, 0);
    fs.closeSync(fd);

    return readBytes ? buffer.slice(0, readBytes).toString("hex") : "";
}

function ReadFileSyncUtf8(filePath) {
    const fileEncoding = GetFileEncodingHeader(filePath);
    let content = null;

    if (fileEncoding === "fffe") {
        // UTF-16 little endian BOM
        content = fs.readFileSync(filePath, "ucs2");
    } else if (fileEncoding === "feff") {
        // UTF-16 big endian BOM: read the raw bytes, swap each pair, then decode
        content = fs.readFileSync(filePath).swap16().toString("ucs2");
    } else {
        content = fs.readFileSync(filePath, "utf8");
    }

    // trimStart also removes the decoded BOM character (U+FEFF)
    return content.trimStart();
}

function GetJson(filePath) {
    const jsonContents = ReadFileSyncUtf8(filePath);
    console.log(GetFileEncodingHeader(filePath));

    return JSON.parse(jsonContents);
}

Usage:

GetJson("data.json");

Note: I don't have a need for this to be async yet. Add another answer if you can make this async!

2 Comments

I have added this as a bug to node: github.com/nodejs/node/issues/23033. Give it an upvote there if you believe this is worth doing.
0

As mentioned by TamusJRoyce, I ended up using the util.TextDecoder class to come up with a robust way to read both UTF-8 (without BOM) and UTF-8 (with BOM). Here's the snippet, assuming that the file input.json is UTF-8 (with or without BOM) and contains valid JSON.

const fs = require('fs');
const util = require('util');

const rawdata = fs.readFileSync('input.json');
const textDecoder = new util.TextDecoder('utf-8');
const stringData = textDecoder.decode(rawdata);
const objects = JSON.parse(stringData);

Comments

0
const fs = require('fs');
const myConsole = new console.Console(fs.createWriteStream('./output.json'));
myConsole.log(object);

This will create an output file with all the output that was triggered through console.log(object).

This is the easiest way to convert console.log() output into a file.

Comments
