On an individual node I have been able to run the precompiled function update_OSS_internal_compiler in parallel (16 cores) with different inputs specified by $FileCounter. However, I would like to extend this parallel processing from a single node to multiple nodes, and I'm not sure how to approach it.
#!/bin/bash
. /u/local/Modules/default/init/modules.sh
module load matlab
export MCR_CACHE_ROOT=$TMPDIR
Macro_Iter=10
ApertNum=121
FullPath=$(pwd)
TempFileFolder=$FullPath/TempFiles
for MacroLoop in $(seq 1 1 $Macro_Iter); do
    # WANT TO SSH INTO DIFFERENT NODES AND RUN THE SAME PROCESS WITH DIFFERENT
    # INPUTS WHILE UPDATING FILECOUNTER AFTER EACH NODE, OR DO SOMETHING SIMILAR
    seq 1 1 $ApertNum | xargs -I{} --max-procs 16 bash -c '
        echo "doing aperture $1"
        ./update_OSS_internal_compiler "$1"
    ' _ {}
done
echo "$FullPath/TempFiles/ApertFiles"
./update_OSS_global_compiler
Any help is appreciated.


I was thinking of using hostnames=$(cat $PE_HOSTFILE | awk '{print $1}') to ssh into the individual nodes, run the processes in the background, then ssh into the next node to do the same, and so on.
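Here is a rough sketch of that idea as a dry run: it only prints the ssh commands it would launch, and it falls back to a made-up two-node host list when $PE_HOSTFILE isn't set. The node01/node02 names and the 16-apertures-per-node split are just placeholders, not something my scheduler guarantees.

```shell
#!/bin/bash
# Dry-run sketch of per-node dispatch; assumes $PE_HOSTFILE lists one
# slot per line with the hostname in column 1 (typical for SGE).
ApertNum=121
PerNode=16

# Stand-in host list so the sketch can be tried outside the scheduler.
if [ -z "${PE_HOSTFILE:-}" ]; then
    PE_HOSTFILE=$(mktemp)
    printf 'node01 16\nnode02 16\n' > "$PE_HOSTFILE"
fi

# Unique hostnames from the PE host file.
mapfile -t hostnames < <(awk '{print $1}' "$PE_HOSTFILE" | sort -u)

cmds=()        # commands collected for inspection
FileCounter=1
for host in "${hostnames[@]}"; do
    if (( FileCounter > ApertNum )); then break; fi
    # Hand this node the next block of $PerNode apertures.
    last=$(( FileCounter + PerNode - 1 ))
    if (( last > ApertNum )); then last=$ApertNum; fi
    cmd="ssh $host 'cd $PWD && seq $FileCounter $last | xargs -I{} --max-procs $PerNode ./update_OSS_internal_compiler {}'"
    cmds+=("$cmd")
    echo "$cmd"    # dry run: print instead of launching
    FileCounter=$(( last + 1 ))
done
```

The real version would presumably drop the echo, background each ssh with &, and end with a single wait, assuming passwordless ssh works between the allocated nodes.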