
I'm currently writing an app in Java that opens a socket and should receive and send data over it.

It is my understanding that BufferedReader.readLine() returns null once the buffer is empty. However, my code doesn't exit from the loop that reads the lines from the BufferedReader. The idea is that I receive a list of songs and then send a value to the server, where a song then starts playing.

This is the code:

package me.frankvanbever.MediaServerClient;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.net.UnknownHostException;

public class MediaServerClient {

    /**
     * @param args
     */
    public static void main(String[] args) {

        Socket server;
        try {
            server = new Socket("10.42.0.41", 2626);

            InputStream in = server.getInputStream();
            OutputStream out = server.getOutputStream();

            BufferedReader bin = new BufferedReader(new InputStreamReader(in), 4096);

            String inputline;
            while ((inputline = bin.readLine()) != null) {
                System.out.println(inputline);
            }

            System.out.println("exited loop");
            out.write('1');
        } catch (UnknownHostException e) {
        } catch (IOException e) {
        }
    }
}

Is this normal behaviour? The server is written in python and I can change the source code.

1 Answer


It is my understanding that BufferedReader.readLine() returns null once the buffer is empty.

No. It will return null once the underlying stream has been closed. If the server you're connecting to doesn't close the connection, BufferedReader will just hang, waiting for the next line of text.

Don't forget that TCP is stream-oriented (as is BufferedReader). There's no indication of "a complete message" unless you put it in your protocol. That's why protocols often include things like message terminators, or they specify how much data is in a message before sending it.
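To make "specify how much data is in a message" concrete, here is a minimal sketch of length-prefixed framing in Java. This is not the asker's protocol: the `writeMessage`/`readMessage` helpers and the song names are invented for illustration, and in-memory streams stand in for the socket so the example is self-contained.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class LengthPrefixDemo {

    // Write one message: a 4-byte length header followed by the payload.
    static void writeMessage(DataOutputStream out, String msg) throws IOException {
        byte[] payload = msg.getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length);
        out.write(payload);
    }

    // Read one message: the header tells us exactly how many bytes to expect,
    // so we never need the server to close the connection to find the end.
    static String readMessage(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] payload = new byte[len];
        in.readFully(payload); // blocks until all len bytes have arrived
        return new String(payload, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        // Simulate the wire with in-memory streams instead of a real socket.
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(wire);
        writeMessage(out, "song1.mp3");
        writeMessage(out, "song2.mp3");

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(wire.toByteArray()));
        System.out.println(readMessage(in)); // prints song1.mp3
        System.out.println(readMessage(in)); // prints song2.mp3
    }
}
```

With framing like this, the client knows after every `readMessage` whether it has a complete message, independently of when (or whether) the connection closes.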


3 Comments

So I'd have to hard-code a way to let my client know the server has stopped sending stuff?
@FrankVanbever: You'd need to build it into the protocol itself - both the server and the client would need to agree about this. You could effectively have something along the lines of "if I haven't received anything for 5 seconds, I'll assume we're done" but that's very fragile in the face of slow networks, and slow in the face of speedy networks, as well as being relatively hard to code unless your platform supports timeouts.
@FrankVanbever Or you could close the connection when there is nothing more to send, or if you need to keep the connection open you could send an empty line or a line like [End of Message] or something of your choice. BTW, SMTP email signals the end of a message by sending a . on a line by itself.
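The sentinel-line approach from this comment could look something like the following in the client's read loop. The `[End of Message]` marker and the `readUntilSentinel` helper are just an illustration of the idea, and a `StringReader` stands in for the socket's input stream so the sketch is self-contained.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class SentinelDemo {

    // Read lines until the agreed terminator line appears, so the loop
    // ends without the server having to close the connection.
    static List<String> readUntilSentinel(BufferedReader in, String sentinel)
            throws IOException {
        List<String> lines = new ArrayList<>();
        String line;
        while ((line = in.readLine()) != null) {
            if (line.equals(sentinel)) {
                break; // end of this message; the connection can stay open
            }
            lines.add(line);
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // Simulated server payload: a song list followed by the terminator.
        String payload = "song1.mp3\nsong2.mp3\n[End of Message]\n";
        BufferedReader in = new BufferedReader(new StringReader(payload));
        System.out.println(readUntilSentinel(in, "[End of Message]"));
        // prints [song1.mp3, song2.mp3]
    }
}
```

Whatever marker you pick, the server (Python, in this case) just has to send that exact line after the song list, and both sides have to agree that it never appears inside a real message.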
