I use Python multiprocessing to establish multiple connections to a PostgreSQL database via psycopg2.
Every process establishes a connection, creates a cursor, fetches an object from an mp.Queue and does some work on the database. If everything works fine, the changes are committed and the connection is closed.
If, however, one of the processes raises an error (e.g. an ADD COLUMN statement fails because the column is already present), all of the processes seem to stop working.
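For illustration, the kind of statement that fails inside a worker looks roughly like this (the table and column names below are placeholders, not my actual schema):

# hypothetical example of the per-element work; "mytable" and "mycolumn"
# are placeholders for my real schema
cur.execute("ALTER TABLE mytable ADD COLUMN mycolumn integer")
# if mycolumn already exists, psycopg2 raises a ProgrammingError
# in that worker process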
import psycopg2
import multiprocessing as mp
import Queue

def connect():
    C = psycopg2.connect(host="myhost", user="myuser", password="supersafe", port=62013, database="db")
    cur = C.cursor()
    return C, cur

def commit_and_close(C, cur):
    C.commit()
    cur.close()
    C.close()

def commit(C):
    C.commit()

def sub(queue):
    C, cur = connect()
    while not queue.empty():
        work_element = queue.get(timeout=1)
        # do something with the work element, that might produce an SQL error
    commit_and_close(C, cur)
    return 0

if __name__ == '__main__':
    job_queue = mp.Queue()
    # Fill job_queue
    print 'Run'
    for i in range(20):
        p = mp.Process(target=sub, args=(job_queue,))
        p.start()
I can see that the processes are still alive (because the job_queue is still full), but no network traffic / SQL actions are happening. Is it possible that an SQL error in one process blocks communication from the other subprocesses? How can I prevent that from happening?