When parsing HTML with BeautifulSoup or PyQuery, an underlying parser such as lxml or html5lib is used. Say I have a file containing the following:
<span> é and ’ </span>
In my environment the results seem incorrectly encoded. Using PyQuery:
>>> doc = pq(filename=PATH, parser="xml")
>>> doc.text()
'é and â\u20ac\u2122'
>>> doc = pq(filename=PATH, parser="html")
>>> doc.text()
'Ã\x83© and ââ\x82¬â\x84¢'
>>> doc = pq(filename=PATH, parser="soup")
>>> doc.text()
'é and â\u20ac\u2122'
>>> doc = pq(filename=PATH, parser="html5")
>>> doc.text()
'é and â\u20ac\u2122'
Beyond the fact that the encoding seems incorrect, one of the main problems is that doc.text() returns an instance of str instead of bytes, which isn't normal according to a question I asked yesterday.
Also, passing encoding='utf-8' to PyQuery seems to have no effect; I tried 'latin1' too and nothing changed. I also tried adding a meta tag, because I read that lxml uses it to figure out which encoding to apply, but it doesn't change anything either:
<!DOCTYPE html>
<html lang="fr" dir="ltr">
<head>
<meta http-equiv="content-type" content="text/html;charset=latin1"/>
<span> é and ’ </span>
</head>
</html>
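I also tried forcing the encoding on lxml's HTMLParser directly, in case PyQuery wasn't forwarding it (the file is recreated on the fly here just to keep the snippet self-contained; the real file is the one above):

```python
import os
import tempfile
from lxml import etree

# Recreate the test file (stand-in for my real file) as UTF-8.
with tempfile.NamedTemporaryFile('w', suffix='.html', encoding='utf-8',
                                 delete=False) as f:
    f.write('<span> é and ’ </span>')
    path = f.name

# Force the encoding on the parser itself instead of on PyQuery.
parser = etree.HTMLParser(encoding='utf-8')
tree = etree.parse(path, parser)
print(tree.getroot().findtext('.//span'))

os.unlink(path)
```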
If I use lxml directly, the result is a bit different:
>>> from lxml import etree
>>> tree = etree.parse(PATH)
>>> tree.docinfo.encoding
'UTF-8'
>>> result = etree.tostring(tree.getroot(), pretty_print=False)
>>> result
b'<span> &#233; and &#8217; </span>'
>>> import html
>>> html.unescape(result.decode('utf-8'))
'<span> é and \u2019 </span>\n'
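As a point of comparison, if I parse from an already-decoded str (so lxml never has to guess an encoding from bytes), I get the text I expect:

```python
from lxml import html as lxml_html

# Same content as the file, but handed to lxml as a str, not bytes,
# so no encoding detection is involved.
markup = '<span> é and ’ </span>'
root = lxml_html.fromstring(markup)
print(root.text_content())  # ' é and ’ '
```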
Erf, this is driving me a bit crazy; any help would be appreciated.