And if it's really XHTML/XML, why not just use an XML parser? ;-)
I'm using BeautifulSoup and it appears that it doesn't. I'd also like to
know the answer to this for when I do screen-scraping with regular
expressions.
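For the regular-expression route, here's a deliberately naive sketch of
link extraction (the HTML fragment and the pattern are made up for
illustration, not taken from any real page):

```python
import re

# A toy HTML fragment, invented for this example.
page = '<p><a href="/en/">English</a> <a href="/fr/" title="Fran&ccedil;ais">FR</a></p>'

# Naive pattern: only finds href values in double quotes. It will miss
# single-quoted or unquoted attributes and choke on mangled markup --
# which is exactly why a real parser is usually the better tool.
links = re.findall(r'<a\s+[^>]*href="([^"]*)"', page)
print(links)
```

That works on tidy input, but it's the kind of thing that breaks as soon
as the markup gets creative.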
Anyway, on the subject of XML parsers, here's something to try out:
import libxml2dom
import urllib
f = urllib.urlopen("http://www.sweden.se/") # some Swedish site!
s = f.read()
f.close()
d = libxml2dom.parseString(s, html=1)
Here, we assume that the site isn't well-formed XML and must be treated
as HTML, which libxml2 seems to be fairly good at doing. Then...
for a in d.xpath("//a"):
    print repr(a.getAttribute("href")), \
        repr(a.getAttribute("title")), \
        repr(a.nodeValue)
Here, we print out some of the hyperlinks in the page using repr to
show what the strings look like (and in a way that doesn't require you
to encode them for your terminal). On the above Swedish site, you'll
see some things like this:
u'Fran\xe7ais'
What's interesting is that in some cases such strings may have been
encoded using entities (such as in the title attributes), whereas in
other cases they may have been encoded using UTF-8 byte sequences (such
as in the link texts). The nice thing is that libxml2 just works it out
on your behalf.
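To make that concrete, here's a rough sketch of the two encodings of the
same word (the exact entity and byte forms are illustrative, not copied
from the site; the unescaping uses the stdlib html module from modern
Python rather than anything libxml2-specific):

```python
import html

# The same word can arrive in two forms:
entity_form = 'Fran&#231;ais'    # character reference, as in a title attribute
utf8_form = b'Fran\xc3\xa7ais'   # raw UTF-8 bytes, as in link text

# Both decode to the same Unicode string, u'Fran\xe7ais':
decoded = utf8_form.decode('utf-8')
assert html.unescape(entity_form) == decoded
print(repr(decoded))
```

A parser that handles both for you means you never have to care which
form the page author happened to use.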
So there's no compelling need for regular expressions, but I'm sure
Fredrik will offer some alternative suggestions... and possibly some
good Swedish links, too. ;-)
Paul