
I am developing an application to communicate with a third-party application through a socket. Basically, my application needs to send some requests to the server, and the server returns some data. When the server sends me back a small amount of data, everything works well. But when I request a large amount of data from the server, my application sometimes doesn't receive the complete data, and it happens intermittently.

I've done some research on the internet and followed the socket programming examples that I found, but still I couldn't solve the issue. Below is my current implementation.

    BufferedInputStream is = new BufferedInputStream(socket.getInputStream());

    // I know the size of data that I am expecting from the server
    byte[] buffer = new byte[length];

    int count = 0;
    int current = 0;

    while (count < length) {
        current = is.read(buffer, count, length - count);

        if (current == -1) {
            break;
        } else {
            count += current;
        }
    }

I know the size of data that I am expecting from the server. When the problem occurs, the read() method returns -1 before the data is fully received from the server. I don't have any access to the server-side implementation. Please advise if I am missing anything in my code or if there is a better way to do it.

3 Answers


If the read method returns -1, the server has closed the socket after sending whatever it sent. Period. If you were expecting more data, you were mistaken.

Possibly the server is incorrectly coded, e.g. closes its socket instead of its outermost output stream, thereby losing data. Or possibly your end is incorrect, for example by creating multiple BufferedInputStreams per socket, instead of one for the life of the socket.

Your code just replicates DataInputStream.readFully(). However, what you should really be doing is processing every buffer load as received, not trying to build an arbitrarily large buffer in memory. That just wastes space and adds latency.
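For reference, the whole loop in the question collapses to a single readFully() call. A minimal sketch (the ByteArrayInputStream here just stands in for socket.getInputStream()):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ReadFullyDemo {
    public static void main(String[] args) throws IOException {
        byte[] payload = "hello from the server".getBytes(StandardCharsets.UTF_8);

        // In the real application this would wrap socket.getInputStream()
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(payload));

        byte[] buffer = new byte[payload.length];

        // readFully() loops internally until the buffer is full,
        // and throws EOFException if the stream ends early --
        // which makes a premature close visible instead of silent
        in.readFully(buffer);

        System.out.println(new String(buffer, StandardCharsets.UTF_8));
    }
}
```

Unlike the hand-rolled loop, readFully() turns the "server closed early" case into an EOFException rather than a silently short buffer.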


5 Comments

Thanks! I am using only one BufferedInputStream for the whole socket's life cycle. So I'll probably have to check with the server side.
I am not very clear about the point you mentioned in the last paragraph regarding "processing every buffer load as received". Can you explain a bit more on it? Thanks a lot EJP.
The BufferedInputStream is useful when you are reading small amounts of data, e.g. one byte at a time, when a single read is capable of getting much more data at once. i.e. it can turn thousands of small reads into one big read from the OS. However, in your case the maximum read size is larger than the buffer and larger than the typical packet size you will read. i.e. it's just overhead.
Hi Peter, so what is the best way to go for my case?
So the best way is to process every buffer load as received. That means sending it wherever it's going as soon as you get it, instead of trying to build it all up in memory, which doesn't scale, adds latency, and isn't working. What that means in your case depends on what you are doing with the data, which we don't know.
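The per-chunk approach described in the comments above can be sketched like this (the in-memory streams are stand-ins for the socket stream and whatever destination the data is going to):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopyDemo {
    // Forward each chunk to its destination as soon as it arrives,
    // instead of accumulating the whole payload in one byte[]
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] chunk = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(chunk)) != -1) {
            out.write(chunk, 0, n);   // process/forward this chunk now
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Stands in for the server's response; could be any size
        byte[] payload = new byte[100_000];
        InputStream in = new ByteArrayInputStream(payload);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();

        long copied = copy(in, sink);
        System.out.println("copied " + copied + " bytes");
    }
}
```

Because only one 8 KB chunk is held at a time, heap usage stays constant regardless of how much the server sends.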

Your code looks basically sound. (Using a BufferedInputStream does not improve the performance in this case. If anything, it makes it slower. However, this is unlikely to cause this problem.)

It is also possible that buffering huge amounts of stuff in memory on the client side is causing the client side to go unresponsive for long enough to cause a socket timeout. But if your file size is only a few megabytes, you can probably discount this possibility.

However, I suspect that the real problem is a socket timeout that is too small on either the client or server end. But it is very difficult to diagnose this if you have no way of accessing the server-side configs and logs.
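One way to probe the timeout theory from the client side is to set a read timeout explicitly and see whether reads fail with SocketTimeoutException rather than -1. A minimal sketch (the local ServerSocket just simulates a peer that accepts the connection but never sends anything; the 500 ms value is illustrative):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutDemo {
    public static void main(String[] args) throws IOException {
        boolean timedOut = false;

        // A local server that accepts the connection but never writes,
        // standing in for a stalled peer
        try (ServerSocket server = new ServerSocket(0);
             Socket socket = new Socket("localhost", server.getLocalPort())) {

            socket.setSoTimeout(500);   // read() fails after 500 ms of silence

            try {
                socket.getInputStream().read();
            } catch (SocketTimeoutException e) {
                // The connection is still open; the application decides
                // whether to retry the read or give up
                timedOut = true;
            }
        }
        System.out.println("read timed out: " + timedOut);
    }
}
```

A timeout on the client shows up as an exception, not as -1, so if you are genuinely seeing -1 the close is happening on the server side.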

1 Comment

Thanks Stephen. I think socket timeout could be a possible issue!

There are some serious problems within your code:

Assuming length = 8GB

You will get a lot of problems creating a byte[] of this size.

int count
int current

this should be

long count
long current

but if you do this, the code won't compile anymore. So you have a maximum file size of Integer.MAX_VALUE if you store your download in a byte array. (Thanks to EJP)

You have to write that data to a file or some other storage than RAM. With a 32-bit JVM, you'll never create a byte[] larger than ~1.3 GB.
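Streaming the download straight to a file avoids the byte[] size limit entirely. A minimal sketch (the ByteArrayInputStream stands in for the socket stream, and a temp file stands in for the real destination):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class DownloadToFileDemo {
    public static void main(String[] args) throws IOException {
        // Stands in for the socket's input stream
        InputStream in = new ByteArrayInputStream(new byte[1_000_000]);

        Path target = Files.createTempFile("download", ".bin");

        // Write the download to disk in fixed-size chunks, so heap
        // usage stays constant no matter how large the file is
        try (OutputStream out = Files.newOutputStream(target)) {
            byte[] chunk = new byte[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                out.write(chunk, 0, n);
            }
        }

        long size = Files.size(target);
        System.out.println("wrote " + size + " bytes");
        Files.delete(target);
    }
}
```

This works the same on a 32-bit JVM, since no buffer larger than the 8 KB chunk is ever allocated on the heap.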

4 Comments

The code won't compile if those variables are longs.
I know. It should explain why it is a bad idea to create a byte[] that large. But it seems to be unclear; I will edit it.
It's a good point ckuetbach. I'll fix it later. But right now, the data size that I am talking about is around 100K so I don't think that is the cause. Thanks anyway.
100K is not the size I'm thinking about when I read "large data" :-)
