I am creating some scripts to help with my database import while using Docker. I currently have a directory filled with data, and I want to import it as quickly as possible.
The import work is single-threaded, so I want to speed things up by handing off multiple jobs at once, one per core on my server.
Here is the code I've written to do that:
#!/usr/bin/python
import subprocess

cities = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"]

for x in cities:
    # Launch one import job per city; Popen returns immediately,
    # so all ten jobs end up running at the same time.
    dockerscript = "docker exec -it worker_1 perl import.pl ./public/%s %s mariadb" % (x, x)
    p = subprocess.Popen(dockerscript, shell=True, stderr=subprocess.PIPE)
This works fine if I have 10 or more cores, since each job gets its own. What I want is for the script to adapt to the core count: with 4 cores, the first 4 jobs (1 to 4) run while jobs 5 to 10 wait.
As soon as any of the first 4 completes, job 5 starts, and so on until all of them have finished.
I'm just having a hard time figuring out how to do this in Python.
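Here is a rough sketch of the direction I'm considering: a worker pool sized to the core count, using concurrent.futures and subprocess.run (this assumes Python 3; worker_1, import.pl, and the paths are from my setup above). I'm not sure if this is the idiomatic way to do it:

#!/usr/bin/python
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

cities = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"]

def run_import(city):
    # Each job is an external docker process, so threads are fine here;
    # the thread just blocks while it waits for the subprocess to exit.
    cmd = "docker exec -it worker_1 perl import.pl ./public/%s %s mariadb" % (city, city)
    return subprocess.run(cmd, shell=True, stderr=subprocess.PIPE)

# One worker per core: with 4 cores, jobs 1 to 4 start immediately,
# and job 5 begins as soon as any of them finishes.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(run_import, cities))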
Thanks