BeautifulSoup

Steve Young

I tried using BeautifulSoup to make changes to the URL links on HTML pages, but when the page was displayed, it was garbled and didn't look right (even when I hadn't actually changed anything on the page yet). I ran these steps in Python to see what was up:

I have version 2.1 of BeautifulSoup. It seems that other people have used BeautifulSoup and it works fine for them, so I'm not sure what I'm doing wrong. Any help would be appreciated, thanks.

-Steve
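For context, the kind of href rewriting being described usually comes down to a few lines with BeautifulSoup's findAll interface. A minimal sketch, assuming a BeautifulSoup 3.x-style API (the 2.1 release mentioned above may differ), with placeholder URLs:

from BeautifulSoup import BeautifulSoup

html = '<html><body><a href="http://example.com/old">a link</a></body></html>'
soup = BeautifulSoup(html)

# rewrite every anchor's href in place; the substitution here is just a placeholder
for a in soup.findAll("a", href=True):
    a["href"] = a["href"].replace("http://example.com", "http://example.org")

print soup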




Paul McGuire

Steve -

Is there a chance you could post a before and after example, so we can
see just what you are trying to do instead of talking conceptually all
around it and making us guess? If you are just doing some spot
translations of specific values in an HTML file, you can probably get
away with a simple (but readable) program using pyparsing.

-- Paul
 

Paul McGuire

Here's a pyparsing program that reads my personal web page, and spits out HTML with all of the HREFs reversed.

-- Paul
(Download pyparsing at http://pyparsing.sourceforge.net.)



from pyparsing import Literal, quotedString
import urllib

LT = Literal("<")
GT = Literal(">")
EQUALS = Literal("=")
htmlAnchor = (LT + "A" + "HREF" + EQUALS +
              quotedString.setResultsName("href") + GT)

def convertHREF(s, l, toks):
    # do HREF conversion here - for demonstration, we will just reverse them
    print toks.href
    return "<A HREF=%s>" % toks.href[::-1]

htmlAnchor.setParseAction( convertHREF )

inputURL = "http://www.geocities.com/ptmcg"
inputPage = urllib.urlopen(inputURL)
inputHTML = inputPage.read()
inputPage.close()

print htmlAnchor.transformString( inputHTML )
 

Mike Meyer

Paul McGuire said:
Here's a pyparsing program that reads my personal web page, and spits
out HTML with all of the HREFs reversed.

Parsing HTML isn't easy, which makes me wonder how good this solution
really is. Not meant as a comment on the quality of this code or
PyParsing, but as curiosity from someone who does a lot of [X]HTML
herding.
-- Paul
(Download pyparsing at http://pyparsing.sourceforge.net.)

If it were in the ports tree, I'd have grabbed it and tried it
myself. But it isn't, so I'm going to be lazy and ask. If PyParsing
really makes dealing with HTML this easy, I may package it as a port
myself.
from pyparsing import Literal, quotedString
import urllib

LT = Literal("<")
GT = Literal(">")
EQUALS = Literal("=")
htmlAnchor = (LT + "A" + "HREF" + EQUALS +
              quotedString.setResultsName("href") + GT)

def convertHREF(s, l, toks):
    # do HREF conversion here - for demonstration, we will just reverse them
    print toks.href
    return "<A HREF=%s>" % toks.href[::-1]

htmlAnchor.setParseAction( convertHREF )

inputURL = "http://www.geocities.com/ptmcg"
inputPage = urllib.urlopen(inputURL)
inputHTML = inputPage.read()
inputPage.close()

print htmlAnchor.transformString( inputHTML )

How well does it deal with other attributes in front of the href, like
<A onClick="..." href="...">?

How about if my HTML has things that look like HTML in attributes,
like <TAG ATTRIBUTE="stuff<A HREF=stuff">?

Thanks,
<mike
 

Paul McGuire

Mike -

Thanks for asking. Typically I hang back from these discussions of
parsing HTML or XML (*especially* XML), since there are already a
number of parsers out there that can handle the full language syntax.
But it seems that many people trying to parse HTML aren't interested in
fully parsing an HTML page, so much as they are trying to match some
tag pattern, to extract or modify the embedded data. In these cases,
fully comprehending HTML syntax is rarely required.

In this particular instance, the OP had started another thread in which
he was trying to extract some HTML content using regexps, and this
didn't seem to be converging to a workable solution. When he finally
revealed that what he was trying to do was extract and modify the URLs
in a web page's HTML source, this seemed like a tractable problem for a
quick pyparsing program. In the interests of keeping things simple, I
admittedly provided a limited solution. As you mentioned, no
additional attributes are handled by this code. But many HTML scrapers
are able to make simplifying assumptions about what HTML features can
be expected, and I did not want to spend a lot of time solving problems
that may never come up.

So you asked some good questions; let me try to give some reasonable
answers, or at least responses:

1. "If it were in the ports tree, I'd have grabbed it and tried it
myself."
By "ports tree", I assume you mean some directory of your Linux
distribution. I'm sure my Linux ignorance is showing here, most of my
work occurs on Windows systems. I've had pyparsing available on SF for
over a year and a half, and I do know that it has been incorporated (by
others) into a couple of Linux distros, including Debian, Ubuntu,
Gentoo, and Fedora. If you are interested in doing a port to another
Linux, that would be great! But I was hoping that hosting pyparsing on
SF would be easy enough for most people to be able to get at it.

2. "How well does it deal with other attributes in front of the href,
like <A onClick="..." href="...">?"
*This* version doesn't deal with other attributes at all, in the
interests of simplicity. However, pyparsing includes a helper method,
makeHTMLTags(), that *does* support arbitrary attributes within an
opening HTML tag. It is called like this:

anchorStart,anchorEnd = makeHTMLTags("A")

makeHTMLTags returns a pyparsing subexpression that *does* comprehend
attributes, as well as opening tags that include their own closing '/'
(indicating an empty tag body). Tag attributes are accessible by name
in the returned results tokens, without requiring setResultsName()
calls (as in the example).
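A rough sketch of how the earlier example might look using makeHTMLTags, reusing the inputHTML read above (note that this rewrite still regenerates the tag with only the HREF, so any other attributes are dropped from the output):

from pyparsing import makeHTMLTags

anchorStart, anchorEnd = makeHTMLTags("A")

def convertHREF(s, l, toks):
    # makeHTMLTags exposes attributes by name, so toks.href is filled in
    # even when other attributes (onClick, class, ...) precede the HREF
    return '<A HREF="%s">' % toks.href[::-1]

anchorStart.setParseAction( convertHREF )
print anchorStart.transformString( inputHTML )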

3. "How about if my HTML has things that look like HTML in attributes,
like <TAG ATTRIBUTE="stuff<A HREF=stuff">?"
Well, again, the simple example won't be able to tell the difference,
and it would process the ATTRIBUTE string as a real tag. To address
this, we would expand our statement to process quoted strings
explicitly, and separately from the htmlAnchor, as in:

htmlPatterns = quotedString | htmlAnchor

and then use htmlPatterns for the transformString call:

htmlPatterns.transformString( inputHTML )

You didn't ask, but one feature that is easy to handle is comments.
pyparsing includes some common comment syntaxes, such as cStyleComment
and htmlComment. To ignore them, one simply calls ignore() on the root
pyparsing node. In the simple example, this would look like:

htmlPatterns.ignore( htmlComment )

By adding this single statement, all HTML comments would be ignored.
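Putting those last two refinements together with the htmlAnchor expression from the earlier example, a sketch would look something like:

from pyparsing import quotedString, htmlComment

# quoted strings are matched first, so HTML-like text inside an attribute
# value is passed through untouched instead of being treated as a tag
htmlPatterns = quotedString | htmlAnchor

# skip over anything inside <!-- ... --> comments
htmlPatterns.ignore( htmlComment )

print htmlPatterns.transformString( inputHTML )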


Writing a full HTML parser with pyparsing would be tedious, and not a
great way to spend your time, given the availability of other parsing
tools. But for simple scraping and extracting, it can be a very
efficient way to go.

-- Paul
 
