I have two processes running on different machines that communicate via TCP sockets.
Both processes have code that acts as both a server and a client.
I.e. ProcessA has opened a server socket bound to portX, and ProcessB has opened a server socket bound to portY.
ProcessA opens a client socket to connect to ProcessB and starts sending messages as a client
and receiving responses (over the same TCP connection, of course).
Once ProcessB receives a message and processes it, it sends the response, but it could also send a message over the second TCP connection, i.e. the one where ProcessB has opened a client socket to portX of ProcessA.
So the messages flow over two different TCP connections.
My problem is the following, taking for granted that this "architecture" cannot change and must stay as is:
Intermittently, the messages sent from ProcessB to ProcessA over the TCP connection where ProcessB opened the client socket arrive at ProcessA before the responses sent from ProcessB to ProcessA over the TCP connection where ProcessA connected as a client.
I.e. both flows occur:
(1)
ProcessA ---->(msg)----> ProcessB(PortY) (TCP1)
ProcessB does processing
ProcessB(portY)--->(response)----->ProcessA (TCP1)
ProcessB--->(msg)----->ProcessA(portX) (TCP2)
(2)
ProcessA ---->(msg)----> ProcessB(PortY) (TCP1)
ProcessB does processing
ProcessB--->(msg)----->ProcessA(portX) (TCP2)
ProcessB(portY)--->(response)----->ProcessA (TCP1)
EDIT (after ejp's request)
How can I ensure that ProcessB does not send a message over the connection where ProcessB has a client socket open to server portX of ProcessA before the reply sent from server portY of ProcessB arrives at ProcessA? I.e. so that only flow (1) above occurs.
Note that processB is multithreaded and the processing is non-trivial.
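One way to get only flow (1) is an application-level acknowledgment: ProcessB holds the TCP2 message until ProcessA confirms, over TCP1, that it has received the response. Here is a minimal loopback sketch of that idea (in Python for brevity; the names `process_a`, `process_b`, the `b"ACK"` token and the port handling are all illustrative, not part of the original setup):

```python
import socket
import threading

HOST = "127.0.0.1"
ports = {}   # actual ephemeral ports, filled in once each server binds
order = []   # arrival order observed by ProcessA

def process_b(a_ready, b_ready):
    srv = socket.socket()
    srv.bind((HOST, 0))                  # server socket on "portY"
    srv.listen(1)
    ports["Y"] = srv.getsockname()[1]
    b_ready.set()
    tcp1, _ = srv.accept()               # ProcessA's client connection (TCP1)
    a_ready.wait()
    tcp2 = socket.create_connection((HOST, ports["X"]))  # B's client socket (TCP2)
    tcp1.recv(16)                        # request from A over TCP1
    tcp1.sendall(b"response")            # reply over TCP1
    # Key step: wait for A's application-level ACK before touching TCP2,
    # so the TCP2 message can never overtake the TCP1 response.
    assert tcp1.recv(16) == b"ACK"
    tcp2.sendall(b"tcp2-msg")
    tcp1.close(); tcp2.close(); srv.close()

def process_a(a_ready, b_ready):
    srv = socket.socket()
    srv.bind((HOST, 0))                  # server socket on "portX"
    srv.listen(1)
    ports["X"] = srv.getsockname()[1]
    a_ready.set()
    b_ready.wait()
    tcp1 = socket.create_connection((HOST, ports["Y"]))
    tcp2, _ = srv.accept()               # B's TCP2 connection
    tcp1.sendall(b"msg")
    order.append(("TCP1", tcp1.recv(16)))  # the response arrives first...
    tcp1.sendall(b"ACK")                   # ...A confirms receipt...
    order.append(("TCP2", tcp2.recv(16)))  # ...and only then B sends on TCP2
    tcp1.close(); tcp2.close(); srv.close()

a_ready, b_ready = threading.Event(), threading.Event()
tb = threading.Thread(target=process_b, args=(a_ready, b_ready))
ta = threading.Thread(target=process_a, args=(a_ready, b_ready))
tb.start(); ta.start()
tb.join(); ta.join()
print(order)
```

The extra round-trip costs latency, but it is the only way to be sure ProcessA has the response in hand before anything is sent on TCP2; since ProcessB is multithreaded, the thread that sends on TCP2 would have to block on (or be signalled by) this ACK.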
UPDATE: Maybe it is my misconception, but when a process sends data over a socket and control returns to the application, this does not mean that the receiving side has received the data. So if a process sends data over two sockets, is there a race condition in the OS?
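That is not a misconception: a successful send only means the data was copied into the local kernel's socket buffer, not that the peer has read it. A small loopback sketch illustrates this (the delay value and helper names are just for the demo):

```python
import socket
import threading
import time

HOST = "127.0.0.1"

srv = socket.socket()
srv.bind((HOST, 0))
srv.listen(1)
port = srv.getsockname()[1]

read_at = []  # when the "receiver" actually reads the data

def slow_receiver():
    conn, _ = srv.accept()
    time.sleep(0.5)          # receiver is busy and reads much later
    conn.recv(1024)
    read_at.append(time.monotonic())
    conn.close()

t = threading.Thread(target=slow_receiver)
t.start()

cli = socket.create_connection((HOST, port))
t0 = time.monotonic()
cli.sendall(b"x" * 100)      # small payload fits in the socket buffer,
sent_at = time.monotonic()   # so sendall() returns almost immediately
t.join()
cli.close(); srv.close()

print("send returned after %.3fs; receiver read after %.3fs"
      % (sent_at - t0, read_at[0] - t0))
```

Here `sendall()` returns long before the receiver ever calls `recv()`. So yes: if two threads send on two different connections, the order in which the two peers see the data is a race as far as the application is concerned.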
UPDATE2
After the answer I got from Vijay Mathew:
If I did the locking as suggested, is there a guarantee that the OS (i.e. the IP layer) will send the data in order? I.e. finish one transmission, then send the next? Or would they be multiplexed and have the same issue?
Thanks