
I have two variations of the same pair of Java programs: [Server.java and Client.java] and [ServerTest.java and ClientTest.java]. Both do the same thing: the client connects to the server and sends pairs of integers to be multiplied, and the result is returned to the client, where it is printed. This is performed 100 times in each version.

However, in the Test version, I create and close a new socket for each integer pair and its multiplication (100 multiplications are performed). In the normal version, I open a single persistent socket, perform all interaction with the client over it, and close it afterward.

Intuitively, I expected the single persistent socket to be a little faster than creating, accepting, and closing a socket each time. In reality, the opposite is true: the approach that creates, accepts, and closes a new socket every time is noticeably faster. On average, the persistent-socket approach takes around 8 seconds, whereas the new-socket-every-time approach takes around 0.4 seconds.

I checked the system call activity of both and noticed nothing different between the two. I then tested the same programs on another computer (macOS Sierra) and there was a negligible difference between the two. So it seems the problem doesn't lie with the application code itself but with how it interacts with the OS (I'm running Ubuntu LTS 16.04).

Does anyone know why there is such a difference in performance here, or how the issue could be investigated further? I've also checked system-wide metrics (memory usage and CPU usage) while executing the programs, and there is plenty of free memory and the CPUs have plenty of idle time.


See the code snippet of how both approaches differ below:

Creating new socket every time approach:

// this is called one hundred times
public void listen() {
    try {
        while (true) {
            // Listens for a connection to be made to this socket.                                        
            Socket socket = my_serverSocket.accept();

            DataInputStream in = new DataInputStream(socket.getInputStream());

            // Read in the numbers
            int numberOne = in.readInt();
            int numberTwo = in.readInt();

            int result = numberOne * numberTwo;
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            out.writeInt(result);
            // tidy up
            socket.close();
        }
    } catch (IOException ioe) {
        ioe.printStackTrace();
    } catch (SecurityException se) {
        se.printStackTrace();
    }
}

Persistent socket approach:

public void listen() {
    try {
        while (true) {
            // Listens for a connection to be made to this socket.
            Socket socket = my_serverSocket.accept();
            for (int i = 0; i < 100; i++) {
                DataInputStream in = new DataInputStream(socket.getInputStream());

                // Read in the numbers
                int numberOne = in.readInt();
                int numberTwo = in.readInt();

                int result = numberOne * numberTwo;
                DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                out.writeInt(result);
            }

            // tidy up
            socket.close();
        }
    } catch (IOException ioe) {
        ioe.printStackTrace();
    } catch (SecurityException se) {
        se.printStackTrace();
    }
}
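The client side was not posted, so here is a hypothetical sketch of what the persistent-approach client loop presumably looks like: one connection, then 100 write-pair/read-result round trips. An in-process stand-in for Server.java (on an ephemeral port) is included so the sketch runs standalone; the real host, port, and class names are assumptions.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class PersistentClientSketch {
    public static void main(String[] args) throws Exception {
        // In-process stand-in for Server.java so the sketch runs standalone.
        ServerSocket serverSocket = new ServerSocket(0); // ephemeral port
        Thread server = new Thread(() -> {
            try (Socket s = serverSocket.accept()) {
                DataInputStream in = new DataInputStream(s.getInputStream());
                DataOutputStream out = new DataOutputStream(s.getOutputStream());
                for (int i = 0; i < 100; i++) {
                    // Reads two ints, writes back their product.
                    out.writeInt(in.readInt() * in.readInt());
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        server.start();

        // Client side: one persistent connection, 100 request/response pairs.
        try (Socket socket = new Socket("localhost", serverSocket.getLocalPort())) {
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            DataInputStream in = new DataInputStream(socket.getInputStream());
            for (int i = 0; i < 100; i++) {
                out.writeInt(i);       // first operand
                out.writeInt(i + 1);   // second operand
                out.flush();
                int product = in.readInt();
                if (product != i * (i + 1)) {
                    throw new AssertionError("unexpected product: " + product);
                }
            }
        }
        server.join();
        serverSocket.close();
        System.out.println("100 multiplications verified");
    }
}
```

Each iteration writes two small ints and then blocks on the reply, which is exactly the interleaved small-write/small-read pattern where Nagle's algorithm can bite.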

2 Answers


You didn't show us the code that sends the integers for multiplication. Do you happen to have a loop in which each iteration sends a pair and receives the result? If so, make sure to turn off Nagle's algorithm.

Nagle's algorithm tries to overcome the "small-packet problem", which arises when an application repeatedly emits data in small chunks. This leads to huge overhead, since the packet header is often much larger than the data itself. The algorithm essentially combines a number of small outgoing messages and sends them all at once. If not enough data has been gathered, the algorithm may still send the message, but only after a timeout has elapsed.

In your case, you were writing small chunks of data into the socket on both the client and the server side. The data wasn't transmitted immediately; instead, the socket waited for more data to arrive (which never did), so each time a timeout had to elapse.
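In Java, Nagle's algorithm is disabled per socket via Socket.setTcpNoDelay(true) (note the inverted naming: true turns TCP_NODELAY on, which turns Nagle off). A minimal sketch, using a loopback connection on an assumed ephemeral port, of where the call goes on each side:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpNoDelayExample {
    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(0)) { // ephemeral port
            try (Socket client = new Socket("localhost", serverSocket.getLocalPort());
                 Socket serverSide = serverSocket.accept()) {
                // Disable Nagle on BOTH ends, before the request/response loop starts:
                client.setTcpNoDelay(true);      // client side (in Client.java)
                serverSide.setTcpNoDelay(true);  // accepted socket (in Server.java's listen())
                System.out.println("client noDelay=" + client.getTcpNoDelay()
                        + " server noDelay=" + serverSide.getTcpNoDelay());
            }
        }
    }
}
```

The call must be made on the accepted socket on the server side (the one returned by accept()), not on the ServerSocket itself, which has no such option.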


1 Comment

After disabling Nagle's algorithm on both the server side and the client side, the persistent-socket approach's run-time dropped to 0.01 seconds (faster than the new-socket-every-time approach's 0.4 s). Disabling it on just the server side or just the client side reduced the run-time to around 4 s, and disabling it on both reduced it to 0.01 s. Could you expand on what was occurring / what Nagle's algorithm is?

Actually, the difference between these two pieces of code is NOT only how they handle incoming connections (one persistent socket or not). The difference is that in the one you call "persistent", 100 pairs of numbers are multiplied per call, whereas in the other one, only one pair of numbers is multiplied and returned per call. This could explain the difference in time.

2 Comments

In both programs one hundred operations are performed. listen() is called one hundred times in the approach that doesn't have the for loop inside it.
Understood. The first thing I would do is move the initialisation of 'in' and 'out' out of the for loop. That doesn't answer your question, though.
