I am working on a project comparing query processing times with and without caching on different DB systems. Right now I am using Node.js and MongoDB.
I have a txt file with queries (or rather, query conditions) that looks like this:
{'c1' : { $regex: /^A/ }, 'c2' : 'something', 'c3' : '8'}
{'c1' : { $regex: /^B/ }, 'c2' : 'somethingElse', 'c3' : '12'}
{'c1' : { $regex: /^C/ }, 'c2' : 'somethingDifferent', 'c3' : '16'}
...
And I need to read all of these strings/objects from the file, build a query from each of them, run them against the database, and measure the time it takes to finish all of the queries.
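For the measurement itself I was planning something simple around the whole batch, roughly like this (just a sketch, the timer label is arbitrary):

// start a timer before the first query and stop it once the last one is done
console.time('all queries');
// ... run all the queries from the file here ...
console.timeEnd('all queries'); // prints e.g. "all queries: 1234.567ms"

The tricky part is that console.timeEnd() must only run after the last query has actually finished.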
So my idea is to read the file line by line using a lineReader and convert each line to a query immediately, e.g.:
var lineReader = require('readline').createInterface({
    input: require('fs').createReadStream('file.txt')
});

lineReader.on('line', function (line) {
    var query = line;
    // I only get this output
    console.log(query);
    query = JSON.stringify(query);
    dbo.collection('myCol').find(JSON.parse(query)).toArray(function (err, result) {
        // This code is never reached
        if (err) throw err;
        console.log(result);
    });
});

db.close();
But this approach is wrong: I never get any result from find(query).toArray(), and the program crashes with
MongoError: pool destroyed
every time.
I tried several different solutions, but I always ended up with this error, with MongoNetworkError: connection destroyed, not possible to instantiate cursor, or with the process running out of memory.
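For clarity, this is roughly the flow I am trying to achieve: run every query, close the connection only after the last one has finished, and time the whole batch. This is just a sketch assuming the promise-based MongoClient API; the connection string, the database name myDb, and the synchronous file read are placeholders, not my actual code:

const fs = require('fs');
const { MongoClient } = require('mongodb');

async function runAll() {
    // placeholders: connection string, database and collection names
    const client = await MongoClient.connect('mongodb://localhost:27017');
    const dbo = client.db('myDb');

    // read every condition up front and skip empty lines
    const lines = fs.readFileSync('file.txt', 'utf8')
        .split('\n')
        .filter(l => l.trim().length > 0);

    const start = Date.now();
    for (const line of lines) {
        const filter = JSON.parse(line); // assumes the line is valid JSON
        const result = await dbo.collection('myCol').find(filter).toArray();
        console.log(result.length);
    }
    console.log('All queries took ' + (Date.now() - start) + ' ms');

    // close only after the last query has finished
    await client.close();
}

runAll().catch(console.error);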
EDIT: The JSON parsing is not the problem here; my question is about the MongoError: pool destroyed error.