How do I extract the links for JavaScript, CSS and img tags from an HTML page? Do I need to use a regular expression, or is there already a lightweight library for HTML parsing?
-
For parsing HTML with regexp, please see the first answer to this question: stackoverflow.com/questions/1732348/… :D – pajton, Jun 26, 2011 at 22:11
-
You certainly can use a regex to EXTRACT links from an HTML page as long as your code doesn't need to first PARSE the page, because regexes can't parse HTML. But in my opinion, there is no more need to parse an HTML page to find and extract some strings from it than from any other non-HTML page. – eyquem, Jun 26, 2011 at 22:51
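A minimal sketch of what this commenter means by extracting without parsing: a single regex can pull quoted href/src values out of raw markup. The snippet below uses a made-up HTML fragment; note this approach is fragile (it will misbehave on comments, unquoted or single-quoted attributes, and scripts containing markup), which is exactly why the answers below reach for a real parser.

```python
import re

html = '<link rel="stylesheet" href="/site.css"><script type="text/javascript" src="/app.js"></script><img src="/logo.png">'

# Naive extraction of double-quoted href/src attribute values.
# No parsing happens here -- the regex just scans the raw text.
urls = re.findall(r'(?:href|src)\s*=\s*"([^"]+)"', html)
# urls is now ['/site.css', '/app.js', '/logo.png']
```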
3 Answers
HTML5Lib in combination with lxml is what I like to use to extract data from HTML documents. It recovers from errors in a similar way to modern browsers, which makes broken HTML easier to work with.
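A short sketch of the html5lib-plus-lxml combination on a deliberately broken snippet (unclosed and illegally nested tags). html5lib applies the HTML5 error-recovery rules a browser would, and the "lxml" treebuilder hands back an lxml tree you can query with XPath; note that html5lib places elements in the XHTML namespace, so the XPath query needs a namespace prefix.

```python
import html5lib

# Broken markup: nested <a> tags, nothing closed.
broken = '<p><a href="/a.css">one<a href="/b.js">two'

# Parse with browser-style error recovery into an lxml tree.
tree = html5lib.parse(broken, treebuilder="lxml")

# html5lib puts all elements in the XHTML namespace.
ns = {"h": "http://www.w3.org/1999/xhtml"}
hrefs = [a.get("href") for a in tree.xpath("//h:a", namespaces=ns)]
# hrefs recovers both links despite the broken input
```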
If you actually want to run JavaScript code in web pages (say a link is computed by a function), you should consider looking at the webkit and jswebkit packages, which let you run JavaScript in a headless WebKit window and hand the dynamically generated content to your Python parser.
It's really not hard at all to run JS in Python via WebKit, though expect memory usage on par with running a WebKit browser.
BeautifulSoup will do the trick.
# Python 2 / BeautifulSoup 3 syntax (urllib.urlopen, findAll)
import urllib
from BeautifulSoup import BeautifulSoup

sock = urllib.urlopen("http://stackoverflow.com")
soup = BeautifulSoup(sock.read())
sock.close()

img = soup.findAll("img")                                      # all <img> tags
script = soup.findAll("script", {"type": "text/javascript"})   # JavaScript tags
css = soup.findAll("link", {"rel": "stylesheet"})              # stylesheet links
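For anyone on Python 3, the same idea in the maintained successor library, bs4 (the method is `find_all` there), shown here on an inline HTML snippet rather than a live URL so it runs offline; pulling the actual URLs out of the matched tags is then a matter of dict-style attribute access:

```python
from bs4 import BeautifulSoup  # bs4 is the maintained successor to BeautifulSoup 3

html = '''<html><head>
<link rel="stylesheet" href="/site.css">
<script type="text/javascript" src="/app.js"></script>
</head><body><img src="/logo.png"></body></html>'''

soup = BeautifulSoup(html, "html.parser")

# Tags behave like dicts, so tag["href"] / tag["src"] gives the URL.
css = [tag["href"] for tag in soup.find_all("link", rel="stylesheet")]
js = [tag["src"] for tag in soup.find_all("script", src=True)]
imgs = [tag["src"] for tag in soup.find_all("img")]
```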
HTML is not a language that can be parsed by regular expressions, so don't even try: it will break.
What I typically use is Beautiful Soup, a parser library built specifically for gathering information from potentially invalid markup, exactly like the stuff you will find out there.