html parser?

Discussion in 'Python' started by Christoph Söllner, Oct 18, 2005.

  1. Hi *,

    is there an HTML parser available which could, e.g., extract all links from a
    given text like this:
    """
    <a href="foo.php?param1=test">BAR<img src="none.gif"></a>
    <a href="foo2.php?param1=test&param2=test">BAR2</a>
    """

    and return a set of dicts like this:
    """
    {
    ['foo.php','BAR','param1','test'],
    ['foo2.php','BAR2','param1','test','param2','test']
    }
    """

    thanks,
    Chris
    Christoph Söllner, Oct 18, 2005
    #1

  2. Christoph Söllner wrote:

    >Hi *,
    >
    >is there an HTML parser available which could, e.g., extract all links from a
    >given text like this:
    >"""
    ><a href="foo.php?param1=test">BAR<img src="none.gif"></a>
    ><a href="foo2.php?param1=test&param2=test">BAR2</a>
    >"""
    >
    >and return a set of dicts like this:
    >"""
    >{
    > ['foo.php','BAR','param1','test'],
    > ['foo2.php','BAR2','param1','test','param2','test']
    >}
    >"""
    >
    >thanks,
    >Chris

    I asked the same question a week ago, and the answer I got was a really
    beautiful one. :)

    http://www.crummy.com/software/BeautifulSoup/

    Les
    Laszlo Zsolt Nagy, Oct 18, 2005
    #2

  3. right, that's what I was looking for. Thanks very much.
    Christoph Söllner, Oct 18, 2005
    #3
  4. * Christoph Söllner (2005-10-18 12:20 +0100)
    > right, that's what I was looking for. Thanks very much.


    For simple things like that "BeautifulSoup" might be overkill.

    import formatter, htmllib, urllib

    url = 'http://python.org'

    htmlp = htmllib.HTMLParser(formatter.NullFormatter())
    htmlp.feed(urllib.urlopen(url).read())
    htmlp.close()

    print htmlp.anchorlist

    and then use urlparse to parse the links/urls...
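
    A minimal sketch of that last step, in the same Python 2 style as the
    rest of the thread (untested, just to show the idea):

    import cgi, formatter, htmllib, urllib, urlparse

    url = 'http://python.org'
    htmlp = htmllib.HTMLParser(formatter.NullFormatter())
    htmlp.feed(urllib.urlopen(url).read())
    htmlp.close()

    # resolve each link against the base URL, then split path and query
    for link in htmlp.anchorlist:
        parts = urlparse.urlparse(urlparse.urljoin(url, link))
        params = cgi.parse_qs(parts[4])    # parts[4] is the query string
        print parts[2], params             # path plus a dict of parameter lists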
    Thorsten Kampe, Oct 18, 2005
    #4
  5. Thorsten Kampe wrote:

    > For simple things like that "BeautifulSoup" might be overkill.


    [HTMLParser example]

    I've used SGMLParser with some success before, although the SAX-style
    processing is objectionable to many people. One alternative is to use
    libxml2dom [1] and to parse documents as HTML:

    import libxml2dom, urllib
    url = 'http://www.python.org'
    doc = libxml2dom.parse(urllib.urlopen(url), html=1)
    anchors = doc.xpath("//a")

    Currently, the parseURI function in libxml2dom doesn't do HTML parsing,
    mostly because I haven't yet figured out what combination of parsing
    options has to be set to make it happen, but a combination of urllib
    and libxml2dom should perform adequately. In the above example, you'd
    process the nodes in the anchors list to get the desired results.
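
    A hypothetical sketch of that processing step, assuming the nodes
    returned by xpath() expose the usual DOM accessors (getAttribute,
    childNodes, nodeValue, nodeType); it only collects the text nodes
    sitting directly inside each anchor:

    import libxml2dom, urllib

    url = 'http://www.python.org'
    doc = libxml2dom.parse(urllib.urlopen(url), html=1)

    links = []
    for a in doc.xpath("//a"):
        href = a.getAttribute("href")
        # gather the anchor's immediate text children (nodeType 3 is TEXT_NODE)
        text = "".join([child.nodeValue for child in a.childNodes
                        if child.nodeType == 3])
        links.append([href, text])
    print links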

    Paul

    [1] http://www.python.org/pypi/libxml2dom
    Paul Boddie, Oct 18, 2005
    #5
  6. Thorsten Kampe wrote:

    >* Christoph Söllner (2005-10-18 12:20 +0100)
    >>right, that's what I was looking for. Thanks very much.
    >
    >For simple things like that "BeautifulSoup" might be overkill.
    >
    >import formatter, htmllib, urllib
    >
    >url = 'http://python.org'
    >
    >htmlp = htmllib.HTMLParser(formatter.NullFormatter())

    The problem with HTMLParser is that it does not handle unclosed tags or
    attributes given with invalid syntax. Unfortunately, many sites on the
    internet use malformed HTML. You are right that BeautifulSoup is overkill
    (it is rather slow), but I'm afraid it is the only fault-tolerant
    solution.
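
    For example, a minimal sketch along those lines, assuming a
    BeautifulSoup release of that era where the constructor accepts the
    markup, fetch() returns matching tags and tag.attrs holds the
    attributes:

    from BeautifulSoup import BeautifulSoup

    html = '''<a href="foo.php?param1=test">BAR<img src="none.gif"></a>
    <a href="foo2.php?param1=test&param2=test">BAR2</a>'''

    soup = BeautifulSoup(html)
    for anchor in soup.fetch('a'):    # fetch() was later renamed findAll()
        print dict(anchor.attrs).get('href')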

    Les
    Laszlo Zsolt Nagy, Oct 19, 2005
    #6
  7. To extract links without the overhead of Beautiful Soup, one option is
    to copy what Beautiful Soup does, and write a SGMLParser subclass that
    only looks at 'a' tags. In general I think writing SGMLParser
    subclasses is a big pain (which is why I wrote Beautiful Soup), but
    since you only care about one type of tag it's not so difficult:

    from sgmllib import SGMLParser

    class LinkParser(SGMLParser):
        def __init__(self):
            SGMLParser.__init__(self)
            self.links = []
            self.currentLink = None
            self.currentLinkText = []

        def start_a(self, attrs):
            # If we encounter a nested a tag, end the current a tag and
            # start a new one.
            if self.currentLink != None:
                self.end_a()
            for attr, value in attrs:
                if attr == 'href':
                    self.currentLink = value
                    break
            if self.currentLink == None:
                self.currentLink = ''

        def handle_data(self, data):
            if self.currentLink != None:
                self.currentLinkText.append(data)

        def end_a(self):
            self.links.append([self.currentLink,
                               "".join(self.currentLinkText)])
            self.currentLink = None
            self.currentLinkText = []

    Since this ignores any tags other than 'a', it will strip out all tags
    from the text within an 'a' tag (this might be what you want, since
    your example shows an 'img' tag being stripped out). It will also close
    one 'a' tag when it finds another, rather than attempting to nest them.

    <a href="foo.php">This text has <b>embedded HTML tags</b></a>
    =>
    [['foo.php', 'This text has embedded HTML tags']]

    <a href="foo.php">This text has <a name="anchor">an embedded
    anchor</a>.
    =>
    [['foo.php', 'This text has '], ['', 'an embedded anchor']]
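
    A quick (hypothetical) driver showing the class on the markup from the
    first message:

    parser = LinkParser()
    parser.feed('<a href="foo.php?param1=test">BAR<img src="none.gif"></a>'
                '<a href="foo2.php?param1=test&param2=test">BAR2</a>')
    parser.close()
    print parser.links
    # -> [['foo.php?param1=test', 'BAR'],
    #     ['foo2.php?param1=test&param2=test', 'BAR2']]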

    Alternatively, you can subclass a Beautiful Soup class to ignore all
    tags except for 'a' tags and the tags that they contain. This will give
    you the whole Beautiful Soup API, but it'll be faster because Beautiful
    Soup will only build a model of the parts of your document within 'a'
    tags. The following code seems to work (and it looks like a good
    candidate for inclusion in the core package).

    from BeautifulSoup import BeautifulStoneSoup

    class StrainedStoneSoup(BeautifulStoneSoup):
        def __init__(self, interestingTags=["a"], *args):
            args = list(args)
            args.insert(0, self)
            self.interestingMap = {}
            for tag in interestingTags:
                self.interestingMap[tag] = True
            apply(BeautifulStoneSoup.__init__, args)

        def unknown_starttag(self, name, attrs, selfClosing=0):
            if self.interestingMap.get(name) or len(self.tagStack) > 1:
                BeautifulStoneSoup.unknown_starttag(self, name, attrs,
                                                    selfClosing)

        def unknown_endtag(self, name):
            if len(self.tagStack) > 1:
                BeautifulStoneSoup.unknown_endtag(self, name)

        def handle_data(self, data):
            if len(self.tagStack) > 1:
                BeautifulStoneSoup.handle_data(self, data)
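
    A hypothetical usage sketch: note that interestingTags is the first
    positional argument, so the markup has to be passed after it, and this
    assumes a BeautifulSoup release of that era whose constructor accepts
    the markup directly.

    html = '<p>skipped</p><a href="foo.php?param1=test">BAR</a>'
    soup = StrainedStoneSoup(["a"], html)
    print soup.fetch('a')    # only the 'a' tags were ever built into the tree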
    leonardr, Oct 19, 2005
    #7