html parser?


Christoph Söllner

Hi *,

is there an HTML parser available which could, e.g., extract all links from a
given text like this:
"""
<a href="foo.php?param1=test">BAR<img src="none.gif"></a>
<a href="foo2.php?param1=test&param2=test">BAR2</a>
"""

and return a set of lists like this:
"""
{
['foo.php','BAR','param1','test'],
['foo2.php','BAR2','param1','test','param2','test']
}
"""

thanks,
Chris
 

Laszlo Zsolt Nagy

Christoph said:
is there an HTML parser available which could, e.g., extract all links from a
given text like this:
[example snipped]
I asked the same question a week ago, and the answer I got was a really
beautiful one. :)

http://www.crummy.com/software/BeautifulSoup/

Les
 

Thorsten Kampe

* Christoph Söllner (2005-10-18 12:20 +0100)
right, that's what I was looking for. Thanks very much.

For simple things like that "BeautifulSoup" might be overkill.

import formatter, htmllib, urllib

url = 'http://python.org'

# NullFormatter discards the rendered text; the parser still collects
# every href it sees in its anchorlist attribute.
htmlp = htmllib.HTMLParser(formatter.NullFormatter())
htmlp.feed(urllib.urlopen(url).read())
htmlp.close()

print htmlp.anchorlist

and then use urlparse to parse the links/urls...
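
For illustration, a rough sketch of that urlparse step, assuming Python 2.x as
in the rest of the thread (parse_qsl lives in the cgi module in that era); the
sample list stands in for htmlp.anchorlist from the code above:

import cgi, urlparse

links = ['foo.php?param1=test', 'foo2.php?param1=test&param2=test']
for link in links:
    parts = urlparse.urlparse(link)    # (scheme, netloc, path, params, query, fragment)
    query = cgi.parse_qsl(parts[4])    # query string -> [('param1', 'test'), ...]
    print parts[2], query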
 

Paul Boddie

Thorsten said:
For simple things like that "BeautifulSoup" might be overkill.

[HTMLParser example]

I've used SGMLParser with some success before, although the SAX-style
processing is objectionable to many people. One alternative is to use
libxml2dom [1] and to parse documents as HTML:

import libxml2dom, urllib
url = 'http://www.python.org'
doc = libxml2dom.parse(urllib.urlopen(url), html=1)
anchors = doc.xpath("//a")

Currently, the parseURI function in libxml2dom doesn't do HTML parsing,
mostly because I haven't yet figured out what combination of parsing
options have to be set to make it happen, but a combination of urllib
and libxml2dom should perform adequately. In the above example, you'd
process the nodes in the anchors list to get the desired results.

Paul

[1] http://www.python.org/pypi/libxml2dom
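
As a rough illustration of that processing step, a sketch that assumes
libxml2dom exposes the usual DOM spellings (getAttribute, childNodes,
nodeType, nodeValue); check the package documentation for the exact names:

import libxml2dom, urllib

doc = libxml2dom.parse(urllib.urlopen("http://www.python.org"), html=1)
for a in doc.xpath("//a"):
    href = a.getAttribute("href")    # assumed DOM-style attribute accessor
    # Collect the text nodes directly beneath the anchor (nodeType 3 == text).
    text = "".join([child.nodeValue for child in a.childNodes
                    if child.nodeType == 3])
    print href, text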
 

Laszlo Zsolt Nagy

Thorsten said:
For simple things like that "BeautifulSoup" might be overkill.

[HTMLParser example]
The problem with HTMLParser is that it does not handle unclosed tags and/or
attributes given with invalid syntax. Unfortunately, many sites on the
internet use malformed HTML pages. You are right, BeautifulSoup is overkill
(it is rather slow), but I'm afraid it is the only fault-tolerant solution.

Les
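
As an illustration of that fault-tolerant route, a minimal sketch against the
BeautifulSoup API, fed the markup from the original question. findAll and
dictionary-style attribute access are the BeautifulSoup 3 spellings (earlier
releases used fetch), so adjust for the version you have:

from BeautifulSoup import BeautifulSoup

html = '''<a href="foo.php?param1=test">BAR<img src="none.gif"></a>
<a href="foo2.php?param1=test&param2=test">BAR2</a>'''

soup = BeautifulSoup(html)
for a in soup.findAll('a'):
    text = ''.join(a.findAll(text=True))    # anchor text with nested tags stripped
    print [a['href'], text]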
 

leonardr

To extract links without the overhead of Beautiful Soup, one option is
to copy what Beautiful Soup does and write an SGMLParser subclass that
only looks at 'a' tags. In general I think writing SGMLParser
subclasses is a big pain (which is why I wrote Beautiful Soup), but
since you only care about one type of tag it's not so difficult:

from sgmllib import SGMLParser

class LinkParser(SGMLParser):
    def __init__(self):
        SGMLParser.__init__(self)
        self.links = []
        self.currentLink = None
        self.currentLinkText = []

    def start_a(self, attrs):
        # If we encounter a nested a tag, end the current a tag and
        # start a new one.
        if self.currentLink != None:
            self.end_a()
        for attr, value in attrs:
            if attr == 'href':
                self.currentLink = value
                break
        if self.currentLink == None:
            self.currentLink = ''

    def handle_data(self, data):
        if self.currentLink != None:
            self.currentLinkText.append(data)

    def end_a(self):
        self.links.append([self.currentLink,
                           "".join(self.currentLinkText)])
        self.currentLink = None
        self.currentLinkText = []

Since this ignores any tags other than 'a', it will strip out all tags
from the text within an 'a' tag (this might be what you want, since
your example shows an 'img' tag being stripped out). It will also close
one 'a' tag when it finds another, rather than attempting to nest them.

<a href="foo.php">This text has <b>embedded HTML tags</b></a>
=>
[['foo.php', 'This text has embedded HTML tags']]

<a href="foo.php">This text has <a name="anchor">an embedded
anchor</a>.
=>
[['foo.php', 'This text has '], ['', 'an embedded anchor']]
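
For example, feeding the parser the markup from the original question (a quick
sketch; the expected result is shown as a comment):

parser = LinkParser()
parser.feed('<a href="foo.php?param1=test">BAR<img src="none.gif"></a>'
            '<a href="foo2.php?param1=test&param2=test">BAR2</a>')
parser.close()
print parser.links
# [['foo.php?param1=test', 'BAR'], ['foo2.php?param1=test&param2=test', 'BAR2']]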

Alternatively, you can subclass a Beautiful Soup class to ignore all
tags except for 'a' tags and the tags that they contain. This will give
you the whole Beautiful Soup API, but it'll be faster because Beautiful
Soup will only build a model of the parts of your document within 'a'
tags. The following code seems to work (and it looks like a good
candidate for inclusion in the core package).

from BeautifulSoup import BeautifulStoneSoup

class StrainedStoneSoup(BeautifulStoneSoup):
    def __init__(self, interestingTags=["a"], *args):
        args = list(args)
        args.insert(0, self)
        self.interestingMap = {}
        for tag in interestingTags:
            self.interestingMap[tag] = True
        apply(BeautifulStoneSoup.__init__, args)

    def unknown_starttag(self, name, attrs, selfClosing=0):
        if self.interestingMap.get(name) or len(self.tagStack) > 1:
            BeautifulStoneSoup.unknown_starttag(self, name, attrs,
                                                selfClosing)

    def unknown_endtag(self, name):
        if len(self.tagStack) > 1:
            BeautifulStoneSoup.unknown_endtag(self, name)

    def handle_data(self, data):
        if len(self.tagStack) > 1:
            BeautifulStoneSoup.handle_data(self, data)
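
A usage sketch for the subclass above, again using the markup from the
original question; note that interestingTags is the first positional
parameter, so the markup is passed after it (findAll is the BeautifulSoup 3
spelling, so adjust for older releases):

html = '''<a href="foo.php?param1=test">BAR<img src="none.gif"></a>
<a href="foo2.php?param1=test&param2=test">BAR2</a>'''

# interestingTags comes first, then the markup.
soup = StrainedStoneSoup(["a"], html)
for a in soup.findAll('a'):
    print a['href'], ''.join(a.findAll(text=True))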
 
