I have read similar questions about this topic but none of them help me with the following problem:
I have a bash script that looks like this:
#!/bin/bash
for filename in /home/user/Desktop/emak/*.fa; do
    # Create a directory named after the file, minus the .fa extension
    mkdir "${filename%.*}"
    # Go inside it and create the "emak" directory
    cd "${filename%.*}"
    mkdir emak
    cd ..
done
This script basically does the following:
- Iterates over every .fa file in a directory
- Creates a new directory named after each file (without its extension)
- Goes inside the new directory and creates a directory called "emak"
The real task does something much more computationally expensive than creating the "emak" directory...
I have thousands of files to iterate over. Since each iteration is independent of the previous one, I would like to split the work across multiple processors (I have 24 cores) so I can process several files at the same time.
I have read some previous posts about running jobs in parallel (using GNU Parallel), but I do not see a clear way to apply it in this case.
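From what I have pieced together so far, I imagine wrapping the loop body in a function and handing the file list to parallel, something like the sketch below. This is just a guess on my part: process_file is a name I made up, and I am assuming GNU Parallel is installed and that leaving one core free is sensible.

#!/bin/bash
# Sketch only: wrap the per-file work in a function so GNU Parallel
# can run it once per file.
process_file() {
    filename="$1"
    mkdir "${filename%.*}"
    mkdir "${filename%.*}/emak"   # the real, expensive work would go here
}
export -f process_file

# Run one job per core, leaving one core free;
# getconf _NPROCESSORS_ONLN reports the number of online cores.
parallel -j $(( $(getconf _NPROCESSORS_ONLN) - 1 )) process_file ::: /home/user/Desktop/emak/*.fa

Is this the right way to spread the loop across the cores, or is there a better approach?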
Thanks.