I'm trying to make my crawler multithreaded.
When I start several threads, each one runs its own instance of the function, so every item gets processed once per thread.
Example:
If the function prints range(5) and I run 2 threads, I get 1,1,2,2,3,3,4,4,5,5.
How can I get the result 1,2,3,4,5 in multithread, with each item handled exactly once?
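To illustrate the behaviour I'm after, here is a minimal sketch (assuming a shared queue.Queue is a reasonable tool for this; I'm not sure it is). Each number is consumed by exactly one thread, no matter how many workers run:

import threading
import queue

q = queue.Queue()
for i in range(1, 6):
    q.put(i)

def worker():
    # get_nowait() removes the item from the shared queue,
    # so no two threads ever print the same number
    while True:
        try:
            item = q.get_nowait()
        except queue.Empty:
            return
        print(item)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()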
My current code is the crawler below:
import requests
from bs4 import BeautifulSoup

def trade_spider(max_pages):
    page = 1
    while page <= max_pages:
        # fetch one page of the question list
        url = "http://stackoverflow.com/questions?page=" + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        for link in soup.findAll('a', {'class': 'question-hyperlink'}):
            href = link.get('href')
            title = link.string
            print(title)
            # href already starts with a slash
            get_single_item_data("http://stackoverflow.com" + href)
        page += 1

def get_single_item_data(item_url):
    # fetch one question page and print its vote count
    source_code = requests.get(item_url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, "html.parser")
    res = soup.find('span', {'class': 'vote-count-post '})
    print("UpVote : " + res.string)

trade_spider(1)
How can I call trade_spider() from multiple threads without visiting the same link twice?
Is multiprocessing.Value the right tool for that?
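Or would something like the sketch below be a better direction? This assumes concurrent.futures and a hypothetical crawl_page() that refactors the body of my while loop, so the pool hands each page number to exactly one worker and nothing is fetched twice:

from concurrent.futures import ThreadPoolExecutor

import requests
from bs4 import BeautifulSoup

def crawl_page(page):
    # hypothetical refactor of the loop body in trade_spider():
    # each call handles exactly one page
    url = "http://stackoverflow.com/questions?page=" + str(page)
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    for link in soup.findAll('a', {'class': 'question-hyperlink'}):
        print(link.string)

max_pages = 4
with ThreadPoolExecutor(max_workers=2) as pool:
    pool.map(crawl_page, range(1, max_pages + 1))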