4

I want to read a file of URLs, curl each URL, and keep only the first line of the response, which contains the HTTP status code. I am running under Windows 10 inside Cmder.

#!/bin/bash
input="urls.csv"
truncate -s 0 dest.csv
while IFS= read -r var
do
    result= `curl -I ${var%$'\r'} | grep HTTP $result`
    echo "$var $result" >> dest.csv
done < "$input"

However, the output file is empty. Thank you

2
  • 1
    Are you getting a pile of errors as well? Commented Mar 22, 2019 at 21:16
  • @thatotherguy No, I don't see any errors. I am running inside Cmder under Windows 10. Commented Mar 22, 2019 at 21:27

2 Answers 2

4

Assuming urls.csv is just a simple list of URLs and you're working on a Linux system (or any system that has /dev/null), the following command will send a HEAD request to each URL and print each URL next to its HTTP response code.

sed 's/^/url = /; s/\r\?$/\n-o \/dev\/null/' urls.csv |
curl -s -K- -w '%{http_code} %{url_effective}\n' -I >outfile

See the curl man page for further information.
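To see what the sed step feeds into curl, you can run it on a small sample list without touching the network (the filename and URLs below are made up for illustration; the `\?` and `\n` in the replacement require GNU sed):

```shell
# Show the curl config that the sed command generates.
# Each input URL becomes a "url = ..." line naming a request,
# followed by "-o /dev/null" to discard that request's body.
printf 'https://example.com\r\nhttps://example.org\n' > sample_urls.csv
sed 's/^/url = /; s/\r\?$/\n-o \/dev\/null/' sample_urls.csv
```

This prints a `url = …` / `-o /dev/null` pair per input line; `curl -K-` then reads that config from stdin, so one curl invocation handles every URL.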


1 Comment

Hi, thank you. I didn't manage to make it run in Cmder on Windows 10, but it ran OK on Linux. Now can you tell me how to output the HTTP code + the URL in a file? Thanks
1

In reply to your query on outputting the HTTP code: it appears in the status line, which is the first line of the response headers. You can get it with:

curl -sI <domain> | head -n 1
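If you only need the three-digit code rather than the whole status line, it is the second whitespace-separated field of that line. A minimal sketch on a hard-coded sample status line (no network required; the sample text stands in for what `curl -sI` would return):

```shell
# The status code is the second field of a status line like "HTTP/1.1 200 OK".
status_line='HTTP/1.1 200 OK'   # sample; in practice: curl -sI <domain> | head -n 1
code=$(printf '%s\n' "$status_line" | awk '{print $2}')
echo "$code"   # prints 200
```

With a live request the same idea is `curl -sI <domain> | head -n 1 | awk '{print $2}'`, or you can let curl print the code itself with `-w '%{http_code}'` as in the other answer.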

