I want to parallelize a job over multiple nodes. Each core should run a specific combination of parameters and then save the result as a file. Using srun to launch an R script causes every task on every node to execute the exact same code. Without srun, the script is launched on only one node, where it then runs in parallel, but the cores on the other nodes are never used.
I have tried different values for --nodes, --ntasks-per-node, --cpus-per-task, and --ntasks, and experimented with various srun options.
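To show what I mean, here is a minimal sketch of the kind of submission script I have been working with (the job name, resource numbers, and R script name are placeholders, not my actual setup):

```bash
#!/bin/bash
#SBATCH --job-name=param_sweep        # placeholder name
#SBATCH --nodes=2                     # example values only
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=1

# Every task launched by srun runs exactly the same command,
# so all 32 tasks end up evaluating the same parameter combination.
srun Rscript my_analysis.R
```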
I have also tried reaching the other nodes from within the R script itself, without success.
What I need is a script that distributes the tasks over all cores and tells each task which parameter combination it should evaluate. At this point I am not even sure which parts of the problem should be handled in the bash/sbatch script and which belong in the R script it launches.
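To make the question concrete, this is roughly the direction I am picturing, though I do not know whether it is the right approach: each task reads its Slurm task ID from the environment and uses it as an index into a table of parameter combinations. The R script name here is made up for illustration and the resource numbers are examples only:

```bash
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=1

# Idea: SLURM_PROCID runs from 0 to ntasks-1 and differs for every task,
# so it could select one row of a parameter table. The (hypothetical)
# R script would take that index as a command-line argument, look up its
# parameter combination, evaluate it, and write one result file.
# The single quotes keep $SLURM_PROCID from being expanded by the batch
# shell, so each task sees its own value.
srun bash -c 'Rscript run_one_combination.R "$SLURM_PROCID"'
```

Is something along these lines sensible, or is there a standard way to hand each task its own parameters?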