I'm trying to write a basic web crawler in Python. The trouble I'm having is parsing the page to extract URLs. I've tried both BeautifulSoup and regex, but I can't come up with an efficient solution.
As an example, I'm trying to extract all the member URLs from Facebook's GitHub page (https://github.com/facebook?tab=members). This is the code I've written to extract the member URLs:
    import urllib2
    from BeautifulSoup import BeautifulSoup

    def getMembers(url):
        # Fetch the page and parse it
        text = urllib2.urlopen(url).read()
        soup = BeautifulSoup(text)
        memberList = []
        # Retrieve every user from the company
        # e.g. url = "https://github.com/facebook?tab=members"
        data = soup.findAll('ul', attrs={'class': 'members-list'})
        for ul in data:
            links = ul.findAll('li')
            for link in links:
                memberList.append("https://github.com" + str(link.a['href']))
        return memberList
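For reference, this is roughly how I call it (the printing loop is just for illustration):

    if __name__ == "__main__":
        # Collect and print the member profile URLs from the first page
        members = getMembers("https://github.com/facebook?tab=members")
        for member in members:
            print member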
However, this takes quite a while to parse, and I was wondering if I could do it more efficiently, since the crawling process takes too long.
Profile it first: python -m cProfile your_script.py. GitHub might just be responding slowly.
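If you want a quick check of whether the time goes into the network fetch or the parsing, a minimal sketch using the standard time module (getMembers's URL and imports are taken from your question):

    import time
    import urllib2
    from BeautifulSoup import BeautifulSoup

    url = "https://github.com/facebook?tab=members"

    # Time the HTTP fetch on its own
    start = time.time()
    text = urllib2.urlopen(url).read()
    print "fetch: %.2fs" % (time.time() - start)

    # Time the HTML parsing on its own
    start = time.time()
    soup = BeautifulSoup(text)
    print "parse: %.2fs" % (time.time() - start)

If the fetch dominates, no amount of parser tuning will help; if the parse dominates, that's where to optimize.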