
I need to create multiple servers (at least three) doing different tasks, and ideally a central server that can communicate with them. I have had a look at sockets and they seem beyond me. I am more comfortable with using a database that each of the servers can write to, so they can exchange information between themselves, setting appropriate flags when a message has been actioned.

This is for a project that will be run on the Raspberry Pi and I want to keep it as lightweight as possible.

I am thinking of using sqlite3 but am unsure whether it is a suitable form of communication between apps/processes. Can a Python script "listen" for changes to the database, or does it need to block in a loop until a message is found?

2 Answers


SQLite is quite suitable for sharing information between processes, although I suspect its locking is quite coarse-grained, so if the processes make frequent updates (many per second) you might find that this affects performance. However, if your processes are running on different machines then you'd need to arrange some shared storage for them, and since SQLite relies on file locking, shared filesystems like NFS may have issues; see the SQLite FAQ for details.
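As a rough sketch of the flag-based approach from the question (the table and column names here are just placeholders), each process opens its own connection to a shared database file; writers insert rows and readers mark them actioned:

```python
import os
import sqlite3
import tempfile

def init_db(path):
    """Open a connection and create the message table if needed."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS messages (
               id       INTEGER PRIMARY KEY AUTOINCREMENT,
               sender   TEXT NOT NULL,
               body     TEXT NOT NULL,
               actioned INTEGER NOT NULL DEFAULT 0
           )"""
    )
    conn.commit()
    return conn

def send_message(conn, sender, body):
    """Insert a message for other processes to pick up."""
    conn.execute(
        "INSERT INTO messages (sender, body) VALUES (?, ?)", (sender, body)
    )
    conn.commit()

def fetch_unactioned(conn):
    """Return pending messages and flag them as actioned."""
    rows = conn.execute(
        "SELECT id, sender, body FROM messages WHERE actioned = 0"
    ).fetchall()
    for msg_id, _, _ in rows:
        conn.execute("UPDATE messages SET actioned = 1 WHERE id = ?", (msg_id,))
    conn.commit()
    return rows

# Two connections standing in for two separate processes.
db_path = os.path.join(tempfile.mkdtemp(), "bus.db")
writer = init_db(db_path)
reader = init_db(db_path)

send_message(writer, "sensor-server", "temperature high")
pending = fetch_unactioned(reader)
```

Note that the reader still has to call `fetch_unactioned` periodically; nothing here notifies it that a new row has appeared, which is the polling problem discussed below.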

MySQL and PostgreSQL are both sensible options which allow connections over a network and should make it easier for multiple machines to access the database at once, although they require considerably more setup than SQLite.

All that said, it sounds like what you really want is for processes to be woken up by each other, and that sort of thing is harder with a database. You generally end up polling a particular value at frequent intervals, which is quite an inefficient way of doing things. If you're after a shared data store, databases are great, but they're not really a replacement for proper IPC. If you don't need to send much data, just a "wake up call", you might also consider UDP instead of TCP. Bear in mind that UDP makes no guarantees about delivery, however, and you'll find many more tutorials and documentation about TCP sockets than UDP ones.
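A UDP "wake up call" really is only a few lines. This sketch keeps both ends in one process and uses the loopback interface, but the sending side would look the same from another machine, just with a different address:

```python
import socket

# Receiving side: bind a datagram socket to an ephemeral loopback port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
port = rx.getsockname()[1]

# Sending side: fire a datagram at the receiver; no connection needed.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"wake", ("127.0.0.1", port))

data, addr = rx.recvfrom(1024)   # blocks until a datagram arrives
tx.close()
rx.close()
```

In a real deployment the receiver would sit in a loop around `recvfrom` and do its work each time a datagram arrives, instead of polling a database.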

If you're not comfortable using raw sockets, have you looked at 0MQ? It's a message-passing system which can run over, among other things, sockets. It has Python bindings and it performs really well. Perhaps that abstraction might work better for you than dealing with raw sockets?
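For example, assuming the pyzmq bindings are installed, a one-way PUSH/PULL pair takes only a few lines. Here both ends live in the same process via the `inproc` transport, but swapping the address for a `tcp://...` one gives you messaging between machines with the same code:

```python
import zmq

ctx = zmq.Context.instance()

# The "central server" end pulls messages in.
pull = ctx.socket(zmq.PULL)
pull.bind("inproc://tasks")

# A worker end pushes messages out.
push = ctx.socket(zmq.PUSH)
push.connect("inproc://tasks")

push.send_string("status: done")
msg = pull.recv_string()   # blocks until a message arrives
push.close()
pull.close()
```

0MQ handles reconnection and message framing for you, which is most of the fiddly part of raw socket code.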

As a final note, socket programming isn't really all that tricky once you get your head around the concepts, but it does have quite a learning curve until you're comfortable with what's going on. If you want to give it a try, I suggest starting with Python's Socket Programming HOWTO.

The main complexity you're going to have is dealing with multiple connections at once. If you're happy using threads, you can spawn a thread per connection (i.e. one thread per other machine), or you can keep things in the same thread and wait for something to happen on multiple connections with the functionality of the select module.
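A minimal single-threaded sketch of the select approach: the listening socket plus every accepted connection all go into `select.select`, and whichever is ready gets handled. The "clients" here are just loopback connections made from the same process, standing in for the other machines:

```python
import select
import socket

# Central server: listen on an ephemeral loopback port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))
server.listen(5)
port = server.getsockname()[1]

# Two clients standing in for the other machines.
clients = [socket.create_connection(("127.0.0.1", port)) for _ in range(2)]
clients[0].sendall(b"pi-one ready")
clients[1].sendall(b"pi-two ready")

received = []
conns = []
for _ in range(100):             # bounded loop, just for the demo
    if len(received) == 2:
        break
    # Wait until the listening socket or any connection is readable.
    readable, _, _ = select.select([server] + conns, [], [], 1)
    for sock in readable:
        if sock is server:
            conn, _ = sock.accept()      # new machine connecting
            conns.append(conn)
        else:
            data = sock.recv(1024)       # message from an existing machine
            if data:
                received.append(data)

for s in clients + conns + [server]:
    s.close()
```

The same loop extends naturally to more machines: each new connection just gets appended to the list passed to `select.select`.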


If the information being shared does not require persistence and your data is not highly structured, then you might consider using Redis. It's a key/value store that keeps everything in memory (although on a Raspberry Pi you're probably I/O bound) and is super easy to use.

Another option is zerorpc. It is built on top of 0MQ and makes it easy to communicate between services; it abstracts sockets and serialization for you, amongst other things.

