
I'm making a C client and a Python server. Every time my C client tries to connect to my Python server, the server sends an RST. Here is the C client:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(){

  char *ip = "127.0.0.1";
  int port = 12345;

  int sock;
  struct sockaddr_in addr;
  socklen_t addr_size;
  char buffer[1024];
  int n;

  sock = socket(AF_INET, SOCK_STREAM, 0);
  printf("%i", sock);
  if (sock < 0){
    perror("[-]Socket error");
    exit(1);
  }
  printf("[+]TCP server socket created.\n");

  memset(&addr, '\0', sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_port = port;
  addr.sin_addr.s_addr = inet_addr(ip);


  printf("%i", connect(sock, (struct sockaddr*)&addr, sizeof(addr)));

  if (connect(sock, (struct sockaddr*)&addr, sizeof(addr)) < 0)
  {
    puts("connect error");
    exit(1);
  }

  printf("Connected to the server.\n");

  bzero(buffer, 1024);
  strcpy(buffer, "HELLO, THIS IS CLIENT.");
  printf("Client: %s\n", buffer);
  send(sock, buffer, strlen(buffer), 0);

  bzero(buffer, 1024);
  recv(sock, buffer, sizeof(buffer), 0);
  printf("Server: %s\n", buffer);

  close(sock);
  printf("Disconnected from the server.\n");

  return 0;

}

I get a connect error when trying to connect because the server sends an RST. Here is the top of my Python server:

import socket, sys, cmd, os, select, queue

serverSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  

serverSocket.setblocking(0)

HOST = "10.1.1.228"
PORT = 12345

serverSocket.bind((HOST, PORT))

serverSocket.listen(5)  

inputs = [serverSocket]
outputs = []

message_queues = {}

while inputs:
   # Wait for at least one of the sockets to be ready for processing
   print("waiting for next event")
   readable, writable, exceptional = select.select(inputs, outputs, inputs)
   # Handle inputs
   for s in readable:
      if s is serverSocket:
         # A "readable" server socket is ready to accept a connection
         connection, client_address = s.accept()
         print("new connection from " + str(client_address))
         connection.setblocking(0)
         inputs.append(connection)
         # Give the connection a queue for data we want to send
         message_queues[connection] = queue.Queue()

My C client works with a C server I made, and my Python server works with a Python client. Not sure why the Python server keeps sending an RST when the C client is trying to connect.

1 Answer

addr.sin_port = port;

You're not converting the port from host byte order to network byte order, so you're probably connecting to port 14640: 12345 is 0x3039, and with the two bytes swapped that becomes 0x3930, which is 14640.
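A minimal sketch of the fix, assuming the same port variable and addr structure as in the client above: pass the value through htons() when storing it into sin_port, and convert back with ntohs() when reading a port out of a sockaddr_in.

  memset(&addr, '\0', sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_port = htons(port);               /* host byte order -> network byte order */
  addr.sin_addr.s_addr = inet_addr(ip);

  /* When printing a port taken from a sockaddr_in, convert the other way. */
  printf("port = %d\n", ntohs(addr.sin_port));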

Assuming you made the same mistake in the C server, the two C programs are able to rendezvous just fine (since they use the same port, just not the one which actually appears in the source code), but not with Python, which does the conversion internally.

You can probably see if that's it by running the C server and using something like lsof to see what port it's listening on.
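As a small sketch of an alternative check (my own suggestion, not part of the answer above): the C server can report the port it actually bound by calling getsockname() after bind() and converting the result with ntohs(). If htons() was forgotten there too, this would print 14640 rather than 12345. Here listen_sock is a stand-in name for the server's listening socket descriptor.

  struct sockaddr_in bound;
  socklen_t len = sizeof(bound);
  /* Ask the kernel which address and port the socket is actually bound to. */
  if (getsockname(listen_sock, (struct sockaddr*)&bound, &len) == 0)
    printf("listening on port %d\n", ntohs(bound.sin_port));
  else
    perror("getsockname");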

I also assume the Python server never sends anything; you just misinterpreted something as an RST. On my machine the client gets a connection refused error.


5 Comments

A common mistake, and a dumb way they designed sockets back in the day. There is no good reason why they would design it this way.
@user253751 - although the idea of byte order was likely recognized soon after sockets were first being created in 1971, the terms big-endian and little-endian did not exist until 1980. So the gap in time between when these two ideas were emerging might explain why things were designed as they were. Almost two decades of socket tech had evolved by the time endianness was finally coalescing into a conversation. Any dumbness that exists relating to these concepts lies with the fact that tech has not evolved them beyond the point of requiring questions like this. :)
@ryyker the dumbness is the fact the operating system doesn't call htons for you.
@user253751 the original sin is really that all the sockaddr_* structs are just a big union over a type tag (*_family) and a bunch of bytes. So when you set sin_port you're just setting 2 bytes into that array at a fixed offset, with no further processing or information. Doing the conversion cleanly would likely have required something like an initialiser or constructor function for each socket family, and that's at least a few cycles you lose there, so clearly unacceptable to the old graybeards.
@Masklinn nah, the OS has to parse them differently for each type family already. It's not like they just go out on the network as blobs.
