What is the best way to implement multithreading to make a web scraper faster? Would using Pool be a good solution, and if so, where in my code would I implement it?
import requests
from multiprocessing import Pool

with open('testing.txt', 'w') as outfile:
    results = []
    for number in (4, 8, 5, 7, 3, 10):
        # requests.get returns a Response object directly
        response = requests.get('https://www.google.com/' + str(number))
        results.append(response.text)
    print(results)
    outfile.write("\n".join(results))
You might also look at Scrapy, which is a great toolkit for scraping.