Is it possible to write non-blocking response.write? I've written a simple test to see if other clients can connect while one downloads a file:

var connect = require('connect');

var longString = 'a';
for (var i = 0; i < 29; i++) { // 512 MiB
   longString += longString;
}
console.log(longString.length);

function download(request, response) {
    response.setHeader("Content-Length", longString.length);
    response.setHeader("Content-Type", "application/force-download");
    response.setHeader("Content-Disposition", 'attachment; filename="file"');
    response.write(longString);
    response.end();
}

var app = connect().use(download);
connect.createServer(app).listen(80);

And it seems like write is blocking!

Am I doing something wrong?

Update: So, it doesn't block and it blocks at the same time. It doesn't block in the sense that two files can be downloaded simultaneously. But it does block in the sense that creating the buffer is a long, synchronous operation.

2 Answers


Any processing done strictly in JavaScript will block. response.write(), at least as of v0.8, is no exception to this:

The first time response.write() is called, it will send the buffered header information and the first body to the client. The second time response.write() is called, Node assumes you're going to be streaming data, and sends that separately. That is, the response is buffered up to the first chunk of body.

Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is again free.

What may save some time is to convert longString to a Buffer before attempting to write() it, since the conversion will happen anyway:

var longString = 'a';
for (...) { ... }
longString = new Buffer(longString);
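To see why the conversion matters, here is a rough timing sketch (not from the original answer; the sizes are illustrative). Converting a large string to a Buffer is synchronous CPU work, so it occupies the event loop for its entire duration:

```javascript
// Rough timing sketch: string-to-Buffer conversion is synchronous,
// so it blocks the event loop while it runs.
var longString = 'a';
for (var i = 0; i < 20; i++) { // 2^20 = 1 MiB; small enough for a quick test
    longString += longString;
}

var start = Date.now();
// The v0.8-era API was `new Buffer(longString)`; modern Node spells it:
var buf = Buffer.from(longString);
var elapsed = Date.now() - start;

console.log('converted ' + buf.length + ' bytes in ' + elapsed + ' ms');
```

Doing this once at startup, as above, means the cost is not paid again on every request.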

But it would probably be better to stream the various chunks of longString rather than writing it all at once (note: streams are changing in v0.10):

var longString = 'a',
    chunkCount = Math.pow(2, 29),
    bufferSize = Buffer.byteLength(longString),
    longBuffer = new Buffer(longString);

function download(request, response) {
    var current = 0;

    response.setHeader("Content-Length", bufferSize * chunkCount);
    response.setHeader("Content-Type", "application/force-download");
    response.setHeader("Content-Disposition", 'attachment; filename="file"');

    function writeChunk() {
        if (current < chunkCount) {
            current++;

            if (response.write(longBuffer)) {
                process.nextTick(writeChunk);
            } else {
                response.once('drain', writeChunk);
            }
        } else {
            response.end();
        }
    }

    writeChunk();
}

And, if the eventual goal is to stream a file from disk, this can be even easier with fs.createReadStream() and stream.pipe():

var fs = require('fs');

function download(request, response) {
    // response.setHeader(...)
    // ...

    fs.createReadStream('./file-on-disk').pipe(response);
}

4 Comments

Can you elaborate on what process.nextTick is for? Is it the same as setTimeout?
@Vanuan Corrected for 2^29. And, I used process.nextTick() so each chunk is written in a different tick of the event loop.
And what is response.one?
@Vanuan Should be once rather than one. But it's EventEmitter.once() as http.ServerResponse inherits from EventEmitter.
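The difference asked about in the comments can be seen with a small ordering sketch (my own illustration, not from the thread): process.nextTick() is not the same as setTimeout(fn, 0), because nextTick callbacks run as soon as the current operation completes, before any timers fire.

```javascript
// Ordering sketch: nextTick callbacks run before timers.
var order = [];

setTimeout(function () {
    order.push('setTimeout');
    console.log(order.join(', ')); // start, end, nextTick, setTimeout
}, 0);

process.nextTick(function () {
    order.push('nextTick');
});

order.push('start');
order.push('end');
```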

Nope, it does not block. I tried one download from IE and another from Firefox. I started the IE download first but could still finish the Firefox download first. I tried it with 1 MiB (i < 20) and it works the same, just faster. You should know that whatever longString you create requires memory allocation. Try it with i < 30 (on Windows 7) and it will throw FATAL ERROR: JS Allocation failed - process out of memory.

It takes time for memory allocation/copying, nothing else. Since it is a huge file, the response takes a while, and your download merely looks like it is blocking. Try it yourself with smaller values (i < 20 or so).

8 Comments

Isn't that string created only once, on server startup?
Yes it is, but a response is constructed for every request, so the string gets copied for each response. The response is I/O intensive; also consider its huge size.
For me, 1 MB is loaded in less than a second, how did you manage to click so fast?
I tried it for i < 30, 28 (256 MB), 26 (128 MB) and so on. For 1 MB, I just open the link in two different browsers but don't save; if it were blocking, the one opened afterwards couldn't start and finish before the first.
Are you aware that even if you don't click save, the browser still downloads the file to a temporary location?
