I am attempting to create a Python script that can drive multiple MPI simulations (F90 executables, though that detail doesn't matter). Each MPI simulation uses 2 processors. Let's say I want three of these MPI simulations running simultaneously. If I run the 3 simulations from the command line in 3 separate terminals, without Python, each gets its own 2 processors and runs as though it is the only thing that exists in the world.
My current implementation does not appear to do this: tracking the MPI simulations makes it clear that they are competing with one another for processors. Here is my current procedure:
import subprocess
import multiprocessing as mp

def execute(inputs, output):
    do_stuff_with_inputs()
    subprocess.call('mpiexec -np 2 my_executable.x', shell=True)
    results = post_process_stuff()
    output.put(results)

output = mp.Queue()
processes = []
for i in xrange(3):
    processes.append(mp.Process(target=execute, args=(inputs, output)))
for p in processes:
    p.start()
for p in processes:
    p.join()
results = [output.get() for p in processes]
What I would like to do is be more explicit in the procedure, somehow 'creating' processor space in Python so that each executable call has its own dedicated set of processors.
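For illustration, one way to carve out that dedicated space is to prefix each mpiexec invocation with a CPU-affinity wrapper such as Linux's taskset, so the three jobs are pinned to disjoint core pairs. This is only a sketch under assumptions: the helper name `pinned_command`, the simple 0..N-1 core numbering, and the use of taskset (Linux-only) are mine, not part of the original code.

```python
def pinned_command(job_index, procs_per_job=2):
    """Build a shell command that binds one 2-process MPI job to its own cores.

    Assumes Linux `taskset` is available and cores are numbered 0..N-1;
    `my_executable.x` is the executable from the question.
    """
    first = job_index * procs_per_job
    cores = ','.join(str(first + k) for k in range(procs_per_job))
    return 'taskset -c {} mpiexec -np {} my_executable.x'.format(cores, procs_per_job)

# Each worker would run its own pinned command, e.g. via subprocess.call(..., shell=True):
for i in range(3):
    print(pinned_command(i))
```

Job 0 then gets cores 0,1, job 1 gets cores 2,3, and so on, so the three mpiexec calls no longer compete. Many MPI launchers also have their own binding flags (for example, Open MPI's `--cpu-set`), which may be preferable to an external wrapper.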