I run a bash script that loops over every line in a text file and cURLs the site listed on each line.

Here is my script:

SECRET_KEY='zuhahaha'
FILE_NAME=""

case "$1" in
        "sma")     
            FILE_NAME="sma.txt"
        ;;
        "smk")      
            FILE_NAME="smk.txt"
        ;;
        "smp")      
            FILE_NAME="smp.txt"
        ;;
        "sd")      
            FILE_NAME="sd.txt"
        ;;
     *)
        echo "not in case !"
        exit 1
        ;;
esac

function save_log()
{
    printf '%s\n' \
    "Header Code    : $1" \
    "Executed at    : $(date)" \
    "Response Body  : $2" \
    "====================================================================================================="$'\r\n\n'  >> output.log
}

while IFS= read -r line; 
    do 
        HTTP_RESPONSE=$(curl -L -s -w "HTTPSTATUS:%{http_code}\\n" -H "X-Gitlab-Event: Push Hook" -H "X-Gitlab-Token: $SECRET_KEY" --insecure "$line" 2>&1) &
        HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g') &
        HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://') &

        save_log "$HTTP_STATUS" "$HTTP_BODY" &
done < "$FILE_NAME"

How can I run these requests in parallel (threading), or otherwise make the loop faster, in bash?

  • I doubt your attempt at forking the processes inside the loop works, because you need to wait for the output and put it in the variable. Have you tried extracting the inside of your while-loop into a new bash script, and then running that in another process? (A sketch of this follows the comments.) Commented Jun 29, 2019 at 2:44
  • How can I extract the inside of my while-loop into a new bash script? Commented Jun 29, 2019 at 2:51
  • You generally do multiprocessing with & after the command. However, I'm not sure that that's what you want here. Commented Jun 29, 2019 at 2:56
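
A minimal sketch of the extraction suggested above (the file name fetch_one.sh and the variable names are hypothetical, not from the thread): move the loop body into its own script that fetches a single URL.

#!/bin/bash
# fetch_one.sh -- hypothetical helper: fetch one URL, append the result to output.log
SECRET_KEY='zuhahaha'
url="$1"

HTTP_RESPONSE=$(curl -L -s -w "HTTPSTATUS:%{http_code}" \
    -H "X-Gitlab-Event: Push Hook" -H "X-Gitlab-Token: $SECRET_KEY" \
    --insecure "$url" 2>&1)
HTTP_STATUS=$(printf '%s' "$HTTP_RESPONSE" | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
HTTP_BODY=$(printf '%s' "$HTTP_RESPONSE" | tr -d '\n' | sed -e 's/HTTPSTATUS:.*//')

printf '%s\n' \
    "Header Code    : $HTTP_STATUS" \
    "Executed at    : $(date)" \
    "Response Body  : $HTTP_BODY" >> output.log

The main script then backgrounds one invocation per line:

while IFS= read -r line; do
    ./fetch_one.sh "$line" &
done < "$FILE_NAME"
wait    # block until every background fetch has finished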

2 Answers

You should be able to do this relatively easily. Don't try to background each command; instead, put the body of your while loop into a subshell and background that. That way your commands (which clearly depend on each other) run sequentially, but all the lines in the file can be processed in parallel.

while IFS= read -r line; 
    do
       (
        HTTP_RESPONSE=$(curl -L -s -w "HTTPSTATUS:%{http_code}\\n" -H "X-Gitlab-Event: Push Hook" -H "X-Gitlab-Token: $SECRET_KEY" --insecure "$line" 2>&1)
        HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
        HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')

        save_log "$HTTP_STATUS" "$HTTP_BODY" ) &
done < "$FILE_NAME"
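
One caveat the answer leaves implicit: this backgrounds one subshell per input line, with no upper bound on how many curl processes run at once. A minimal sketch of capping concurrency with plain bash job control (assumes bash 4.3+ for wait -n; the limit of 30 is an arbitrary choice, and the parameter expansions simply replace the answer's sed calls for brevity):

MAX_JOBS=30
while IFS= read -r line; do
    # if MAX_JOBS fetches are already running, wait for one to finish
    while (( $(jobs -rp | wc -l) >= MAX_JOBS )); do
        wait -n
    done
    (
        HTTP_RESPONSE=$(curl -L -s -w "HTTPSTATUS:%{http_code}" \
            -H "X-Gitlab-Event: Push Hook" -H "X-Gitlab-Token: $SECRET_KEY" \
            --insecure "$line" 2>&1)
        save_log "${HTTP_RESPONSE##*HTTPSTATUS:}" "${HTTP_RESPONSE%%HTTPSTATUS:*}"
    ) &
done < "$FILE_NAME"
wait    # let the remaining background jobs finish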

My favourite way to do this is to generate a file that lists all the commands you wish to perform. If you have a script that performs your operations, create a file like:

$ cat commands.txt
echo 1
echo 2
echo $((12+3))
....

For example, this could be hundreds of commands long.

To execute each line in parallel, use the parallel command with, say, at most 3 jobs running in parallel at any time.

$ cat commands.txt | parallel -j 3
1
2
15

For your curl example, you could generate thousands of curl commands and execute them, say, 30 in parallel at any one time.
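
A sketch of that applied to the question's script (not from the original answer; assumes GNU parallel is installed and that no URL contains a double quote): emit one fully quoted curl command per line into commands.txt, then run them 30 at a time.

while IFS= read -r url; do
    printf 'curl -L -s -H "X-Gitlab-Event: Push Hook" -H "X-Gitlab-Token: %s" --insecure "%s"\n' \
        "$SECRET_KEY" "$url"
done < "$FILE_NAME" > commands.txt

# with no command given, parallel treats each input line as a command to run;
# -j 30 caps concurrency, and output is grouped per job by default
cat commands.txt | parallel -j 30 >> output.log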
