OK, so I have the following script to scrape contact details from a list of URLs (urls.txt). When I run the following command directly from the terminal, I get the correct result:
perl saxon-lint.pl --html --xpath 'string-join(//div[2]/div[2]/div[1]/div[2]/div[2])' http://url.com
However, when I call the same command from within a script, I get a "No such file or directory" error.
Here is a copy of my script:
#!/bin/bash
while read inputline
do
# Read the url from urls.txt
url="$(echo $inputline)"
# execute saxon-lint to grab the contents of the XPATH from the url within urls.txt
mydata=$("perl saxon-lint.pl --html --xpath 'string-join(//div[2]/div[2]/div[1]/div[2]/div[2])' $url ")
# output the result in myfile.csv
echo "$url,$mydata" >> myfile.csv
# wait 4 seconds
sleep 4
# move to the next url
done <urls.txt
I have tried changing the perl to ./ but get the same result.
Can anyone advise where I am going wrong with this, please?
The error that I am receiving is:
./script2.pl: line 6: ./saxon-lint.pl --html --xpath 'string-join(//div[2]/div[2]/div[1]/div[2]/div[2])' http://find.icaew.com/listings/view/listing_id/20669/avonhurst-chartered-accountants : No such file or directory
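Looking at the error again, I suspect the quoting on the mydata= line: because the whole command sits inside the double quotes of the command substitution, bash appears to treat the entire expanded string (the interpreter, the options and the URL) as a single file name to execute, which would explain the "No such file or directory" message. If that is the cause, I'm guessing the line should look more like the following, with the command left unquoted and only "$url" quoted, but I'd welcome confirmation:

mydata=$(perl saxon-lint.pl --html --xpath 'string-join(//div[2]/div[2]/div[1]/div[2]/div[2])' "$url")

(or ./saxon-lint.pl instead of perl saxon-lint.pl, if the script is executable and in the current directory).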
Thanks in advance