
I have the following code (the server is Tomcat on Linux).

// Send the local file over the current HTTP connection
FileInputStream fin = new FileInputStream(sendFile);
int readBlockSize;
int totalBytes = 0;
while ((readBlockSize = fin.available()) > 0) {
    byte[] buffer = new byte[readBlockSize];
    fin.read(buffer, 0, readBlockSize);
    outStream.write(buffer, 0, readBlockSize);
    totalBytes += readBlockSize;
}

With some files of type 3gp, when I attach the debugger, at the line:

outStream.write(buffer, 0, readBlockSize);

it breaks out of the while loop with the following error: ApplicationFilterChain.internalDoFilter(ServletRequest, ServletResponse) line: 299, and the file is not served.

Any clues? Thanks A.K.

  • Can you post the full exception? The filter chain is most likely not your problem. Commented Oct 20, 2010 at 16:29
  • The exception is just that, shown in the debugger inside Eclipse (in a new tab). Commented Oct 20, 2010 at 17:01

3 Answers


You can't guarantee that InputStream.read(byte[], int, int) will actually read the desired number of bytes: it may read less. Even your call to available() will not provide that guarantee. You should use the return value from fin.read to find out how many bytes were actually read and only write that many to the output.

I would guess that the problem you see could be related to this. If the block read is less than the available size then your buffer will be partially filled and that will cause problems when you write too many bytes to the output.

Also, don't allocate a new array every time through the loop! That will result in a huge number of needless memory allocations that will slow your code down, and will potentially cause an OutOfMemoryError if available() returns a large number.

Try this:

int size;
int totalBytes = 0;
byte[] buffer = new byte[BUFFER_SIZE];
while ((size = fin.read(buffer, 0, BUFFER_SIZE)) != -1) {
    outStream.write(buffer, 0, size);
    totalBytes += size;
}
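The loop above can be packaged as a small reusable helper. This is just a sketch; the class name `StreamCopy` and the 8KB buffer size are my own choices, not anything from the question:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative helper wrapping the corrected read/write loop.
public class StreamCopy {
    static final int BUFFER_SIZE = 8192; // 8KB is a common default

    // Copies everything from in to out; returns the total byte count.
    static int copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[BUFFER_SIZE]; // allocated once, reused
        int size;
        int totalBytes = 0;
        while ((size = in.read(buffer, 0, BUFFER_SIZE)) != -1) {
            // write only the bytes actually read, never the whole buffer
            out.write(buffer, 0, size);
            totalBytes += size;
        }
        return totalBytes;
    }
}
```

In the servlet this would be called as `copy(fin, outStream)`, ideally with `fin` opened in a try-with-resources block so it is closed even if the client disconnects mid-transfer.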

2 Comments

I'd pick 8KB (8192). Useful blog: nadeausoftware.com/articles/2008/02/…
You can choose a suitable buffer size depending on how big you expect the streams to be and how much memory you are willing to soak up. The suggestion of 8KB is probably fine, but you can experiment and tune it to your system if you have some time.

Avoiding these types of problems is why I start with Commons IO. If that's an option, your code would be as follows.

FileInputStream fin = new FileInputStream(sendFile);
int totalBytes = IOUtils.copy(fin, outStream);

No need to reinvent the wheel.
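If pulling in Commons IO is not an option, note that the JDK gained the same one-liner in Java 9: `InputStream.transferTo(OutputStream)`, which reads until EOF and returns the byte count. A sketch (the wrapper class and method names are mine):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// JDK-only equivalent of IOUtils.copy (requires Java 9+).
public class TransferDemo {
    static long copy(InputStream in, OutputStream out) throws IOException {
        // transferTo reads the stream to EOF and returns the bytes copied
        return in.transferTo(out);
    }
}
```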



It is possible that the .read() call returns fewer bytes than you requested. This means you need to use the return value of .read() as the argument to the .write() call:

        int bytesRead = fin.read(buffer, 0, readBlockSize);
        outStream.write(buffer, 0, bytesRead);

Apart from this, it is better to pre-allocate a buffer and reuse it (otherwise your code could try to allocate a 2GB buffer if your file is large :-)):

byte[] buffer = new byte[4096]; // define a constant for this max length

while ((readBlockSize = fin.available()) > 0) {
    if (readBlockSize > 4096) {
        readBlockSize = 4096;
    }
    int bytesRead = fin.read(buffer, 0, readBlockSize);
    outStream.write(buffer, 0, bytesRead);
}

2 Comments

No. @Antonis, your major mistake is expecting that fin.available() returns the number of remaining bytes of the entire stream. This is wrong. It returns the number of bytes which can currently be read without blocking. It can just as well return zero while there are still more bytes in the stream. The answer of Cameron contains the correct idiom to stream bytes.
Well, the files that I test with the servlet are 30KB to 2MB.
