What's the best way to write this regular expression?

Discussion in 'Python' started by John Salerno, Mar 6, 2012.

  1. John Salerno

    John Salerno Guest

    I sort of have to work with what the website gives me (as you'll see below), but today I encountered an exception to my RE. Let me just give all the specific information first. The point of my script is to go to the specified URL and extract song information from it.

    This is my RE:

    song_pattern = re.compile(r'([0-9]{1,2}:[0-9]{2} [a|p].m.).*?<a.*?>(.*?)</a>.*?<a.*?>(.*?)</a>', re.DOTALL)

    This is how the website is formatted:

    4:25 p.m.
    </div><div class="cmPlaylistContent"><strong><a href="/lsp/t24435/">AP TX SOC CPAS TRF</a></strong><br /><br /></div></li><li ><div class="cmPlaylistTime">

    4:21 p.m.
    </div><div class="cmPlaylistContent"><strong><a href="/lsp/t7672/">No One Else On Earth</a></strong><br /><a href="/lsp/a1924/">Wynonna</a><br /></div></li><li ><div class="cmPlaylistTime">

    4:19 p.m.
    </div><div class="cmPlaylistImage"><img src="http://media.cmgdigital.com/shared/amg/pic200/drp100/p109/p10901ruw7x_r85x85.jpg?998f84231a014ed68123ddb508af9480570dc122" alt="Moe Bandy" class="cmDarkBoxShadow cmPhotoBorderWhite"/></div><div class="cmPlaylistContent"><strong><a href="/lsp/t15101/">It' A Cheating Situation</a></strong><br /><a href="/lsp/a5307/">Moe Bandy</a><br /><span class="sprite iconVoteUp">Votes&nbsp;&nbsp;(1) </span></div></li><li ><div class="cmPlaylistTime">

    4:15 p.m.
    </div><div class="cmPlaylistImage"><img src="http://media.cmgdigital.com/shared/amg/pic200/drp700/p744/p74493d85qy_r85x85.jpg?998f84231a014ed68123ddb508af9480570dc122" alt="Reba McEntire" class="cmDarkBoxShadow cmPhotoBorderWhite"/></div><div class="cmPlaylistContent"><strong><a href="/lsp/t14437/">Somebody Should Leave</a></strong><br /><a href="/lsp/a396/">REBA McENTIRE</a> & <a href="/lsp/a5765/">LINDA DAVIS</a><br /></div></li><li ><div class="cmPlaylistTime">

    There's something of a pattern, although it's not always perfect. The time is listed first, and then the song information in <a> tags. However, in this particular case, you can see that for the 4:25pm entry, "AP TX SOC CPAS TRF" is extracted for the song title, and then the RE skips to the next entry in order to find the next <a> tags, which is actually the name of the next song in the list, instead of being the artist as normal. (Of course, I have no idea what AP TX SOC CPAS TRF is anyway. Usually the website doesn't list commercials or anomalies like that.)

    So my first question is basic: am I even extracting the information properly? It works almost all the time, but because the website is such a mess, I pretty much have to rely on the tags being in the proper places (as they were NOT in this case!).

    The second question is, to fix the above problem, would it be sufficient to rewrite my RE so that it has to find all of the specified information, i.e. a time followed by two <a> entries, BEFORE it moves on to finding the next time? I think that would have caused it to skip the 4:25 entry above, and only extract entries that have a time followed by two <a> entries (song and artist).

    If this is possible, how do I rewrite it so that it has to match all the conditions without skipping over the next time entry in order to do so?
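    One way to sketch that tightening (a hypothetical version, not tested against the live page; it also fixes two quirks in the original pattern: inside a character class | is a literal, so [a|p] should be [ap], and the unescaped dots in .m. match any character):

```python
import re

# Hypothetical sketch, not tested against the live page. The tempered
# dot (?:(?!TIME).)*? refuses to run past the next time stamp, and [^<]
# keeps link text from swallowing markup, so an entry without both a
# song <a> and an artist <a> simply fails to match.
TIME = r'[0-9]{1,2}:[0-9]{2} [ap]\.m\.'
LINK = r'<a[^>]*>([^<]*)</a>'
GAP = r'(?:(?!' + TIME + r').)*?'
song_pattern = re.compile('(' + TIME + ')' + GAP + LINK + GAP + LINK,
                          re.DOTALL)

sample = ('4:25 p.m.\n</div><div><strong><a href="/lsp/t24435/">'
          'AP TX SOC CPAS TRF</a></strong><br /><br /></div>\n'
          '4:21 p.m.\n</div><div><strong><a href="/lsp/t7672/">'
          'No One Else On Earth</a></strong><br />'
          '<a href="/lsp/a1924/">Wynonna</a><br /></div>')
print(song_pattern.findall(sample))
# [('4:21 p.m.', 'No One Else On Earth', 'Wynonna')]
```

    Against the sample above, the 4:25 entry (one <a> before the next time stamp) produces no match at all, instead of stealing the next song's title.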

    Thanks.
    John Salerno, Mar 6, 2012
    #1

  2. John Salerno

    Chris Rebert Guest

    On Tue, Mar 6, 2012 at 2:43 PM, John Salerno <> wrote:
    > I sort of have to work with what the website gives me (as you'll see below), but today I encountered an exception to my RE. Let me just give all the specific information first. The point of my script is to go to the specified URL and extract song information from it.
    >
    > This is my RE:
    >
    > song_pattern = re.compile(r'([0-9]{1,2}:[0-9]{2} [a|p].m.).*?<a.*?>(.*?)</a>.*?<a.*?>(.*?)</a>', re.DOTALL)


    I would advise against using regular expressions to "parse" HTML:
    http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags

    lxml is a popular choice for parsing HTML in Python: http://lxml.de

    Cheers,
    Chris
    Chris Rebert, Mar 6, 2012
    #2

  3. John Salerno

    John Salerno Guest

    On Tuesday, March 6, 2012 4:52:10 PM UTC-6, Chris Rebert wrote:
    > On Tue, Mar 6, 2012 at 2:43 PM, John Salerno <> wrote:
    > > I sort of have to work with what the website gives me (as you'll see below), but today I encountered an exception to my RE. Let me just give all the specific information first. The point of my script is to go to the specified URL and extract song information from it.
    > >
    > > This is my RE:
    > >
    > > song_pattern = re.compile(r'([0-9]{1,2}:[0-9]{2} [a|p].m.).*?<a.*?>(.*?)</a>.*?<a.*?>(.*?)</a>', re.DOTALL)

    >
    > I would advise against using regular expressions to "parse" HTML:
    > http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags
    >
    > lxml is a popular choice for parsing HTML in Python: http://lxml.de
    >
    > Cheers,
    > Chris


    Thanks, that was an interesting read :)

    Anything that allows me NOT to use REs is welcome news, so I look forward to learning about something new! :)
    John Salerno, Mar 6, 2012
    #3
  5. John Salerno

    John Salerno Guest

    > Anything that allows me NOT to use REs is welcome news, so I look forward to learning about something new! :)

    I should ask though...are there alternatives already bundled with Python that I could use? Now that you mention it, I remember something called HTMLParser (or something like that) and I have no idea why I never looked into that before I messed with REs.

    Thanks.
    John Salerno, Mar 6, 2012
    #5
  7. John Salerno

    John Salerno Guest

    On Tuesday, March 6, 2012 5:05:39 PM UTC-6, John Salerno wrote:
    > > Anything that allows me NOT to use REs is welcome news, so I look forward to learning about something new! :)

    >
    > I should ask though...are there alternatives already bundled with Python that I could use? Now that you mention it, I remember something called HTMLParser (or something like that) and I have no idea why I never looked into that before I messed with REs.
    >
    > Thanks.


    Also, I just noticed Beautiful Soup, which seems appropriate. I suppose any will do, but knowing the pros and cons would help with a decision.
    John Salerno, Mar 6, 2012
    #7
  8. John Salerno

    John Salerno Guest

    On Tuesday, March 6, 2012 5:05:39 PM UTC-6, John Salerno wrote:
    > > Anything that allows me NOT to use REs is welcome news, so I look forward to learning about something new! :)

    >
    > I should ask though...are there alternatives already bundled with Python that I could use? Now that you mention it, I remember something called HTMLParser (or something like that) and I have no idea why I never looked into that before I messed with REs.
    >
    > Thanks.


    ::sigh:: I'm having some trouble with the new Google Groups interface. It seems to double post, and in this case didn't post at all. If it did already, I apologize. I'll try to figure out what's happening, or just switch to a real newsgroup program.

    Anyway, my question was about Beautiful Soup. I read on the doc page that BS uses a parser, which html.parser and lxml are. So I'm guessing the difference between them is that the parser is a little more "low level," whereas BS offers a higher level approach to using them? Is BS easier to write code with, while still using the power of lxml?
    John Salerno, Mar 6, 2012
    #8
  10. John Salerno

    Ian Kelly Guest

    On Tue, Mar 6, 2012 at 4:05 PM, John Salerno <> wrote:
    >> Anything that allows me NOT to use REs is welcome news, so I look forward to learning about something new! :)

    >
    > I should ask though...are there alternatives already bundled with Python that I could use? Now that you mention it, I remember something called HTMLParser (or something like that) and I have no idea why I never looked into that before I messed with REs.


    HTMLParser is pretty basic, although it may be sufficient for your
    needs. It just converts an html document into a stream of start tags,
    end tags, and text, with no guarantee that the tags will actually
    correspond in any meaningful way. lxml can be used to output an
    actual hierarchical structure that may be easier to manipulate and
    extract data from.
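    A minimal sketch of that event-stream model (the class and variable names here are illustrative, not from the stdlib): the parser reports each start tag, end tag, and text run in document order, and any pairing-up is left to your handler.

```python
from html.parser import HTMLParser

# Illustrative sketch: HTMLParser only fires events; keeping track of
# context (here, "am I inside an <a>?") is up to the handler code.
class AnchorTextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_a = False
        self.anchors = []            # text found inside <a>...</a>

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.in_a = True

    def handle_endtag(self, tag):
        if tag == 'a':
            self.in_a = False

    def handle_data(self, data):
        if self.in_a:
            self.anchors.append(data)

p = AnchorTextCollector()
p.feed('<div class="cmPlaylistContent"><strong>'
       '<a href="/lsp/t7672/">No One Else On Earth</a></strong><br />'
       '<a href="/lsp/a1924/">Wynonna</a><br /></div>')
print(p.anchors)
# ['No One Else On Earth', 'Wynonna']
```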

    Cheers,
    Ian
    Ian Kelly, Mar 6, 2012
    #10
  11. John Salerno

    John Salerno Guest

    Thanks. I'm thinking the choice might be between lxml and Beautiful
    Soup, but since BS uses lxml as a parser, I'm trying to figure out the
    difference between them. I don't necessarily need the simplest
    (html.parser), but I want to choose one that is simple enough yet
    powerful enough that I won't have to learn another method later.




    On Tue, Mar 6, 2012 at 5:35 PM, Ian Kelly <> wrote:
    > On Tue, Mar 6, 2012 at 4:05 PM, John Salerno <> wrote:
    >>> Anything that allows me NOT to use REs is welcome news, so I look forward to learning about something new! :)

    >>
    >> I should ask though...are there alternatives already bundled with Python that I could use? Now that you mention it, I remember something called HTMLParser (or something like that) and I have no idea why I never looked into that before I messed with REs.

    >
    > HTMLParser is pretty basic, although it may be sufficient for your
    > needs.  It just converts an html document into a stream of start tags,
    > end tags, and text, with no guarantee that the tags will actually
    > correspond in any meaningful way.  lxml can be used to output an
    > actual hierarchical structure that may be easier to manipulate and
    > extract data from.
    >
    > Cheers,
    > Ian
    John Salerno, Mar 6, 2012
    #11
  12. On Tue, 06 Mar 2012 15:05:39 -0800, John Salerno wrote:

    >> Anything that allows me NOT to use REs is welcome news, so I look
    >> forward to learning about something new! :)

    >
    > I should ask though...are there alternatives already bundled with Python
    > that I could use? Now that you mention it, I remember something called
    > HTMLParser (or something like that) and I have no idea why I never
    > looked into that before I messed with REs.


    import htmllib
    help(htmllib)

    The help is pretty minimal and technical, you might like to google on a
    tutorial or two:

    https://duckduckgo.com/html/?q=python htmllib tutorial

    Also, you're still double-posting.


    --
    Steven
    Steven D'Aprano, Mar 6, 2012
    #12
  13. John Salerno

    John Salerno Guest

    > Also, you're still double-posting.

    Grr. I just reported it to Google, but I think if I start to frequent the newsgroup again I'll have to switch to Thunderbird, or perhaps I'll just try switching back to the old Google Groups interface. I think the issue is the new interface.

    Sorry.
    John Salerno, Mar 6, 2012
    #13
  14.

    > > Also, you're still double-posting.

    >
    > Grr. I just reported it to Google, but I think if I start to frequent the
    > newsgroup again I'll have to switch to Thunderbird, or perhaps I'll just
    > try switching back to the old Google Groups interface. I think the issue is
    > the new interface.
    >
    > Sorry.


    Oddly, I see no double posting for this thread on my end (email list).

    Ramit


    Ramit Prasad | JPMorgan Chase Investment Bank | Currencies Technology
    712 Main Street | Houston, TX 77002
    work phone: 713 - 216 - 5423

    Prasad, Ramit, Mar 7, 2012
    #14
  15. John Salerno

    Terry Reedy Guest

    On 3/6/2012 6:05 PM, John Salerno wrote:
    >> Anything that allows me NOT to use REs is welcome news, so I look
    >> forward to learning about something new! :)

    >
    > I should ask though...are there alternatives already bundled with
    > Python that I could use?


    lxml is +- upward compatible with xml.etree in the stdlib.
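    For instance, this stdlib-only sketch (fragment adapted from the page snippets quoted earlier) uses the same find/iter API that lxml also exposes, so code like it should carry over largely unchanged. Note that fromstring() requires well-formed XML, which real pages often aren't.

```python
import xml.etree.ElementTree as ET

# Stdlib-only sketch: parse a well-formed fragment and pull out the
# text of every <a> element via the iter() API shared with lxml.
snippet = ('<div class="cmPlaylistContent">'
           '<a href="/lsp/t7672/">No One Else On Earth</a></div>')
root = ET.fromstring(snippet)
titles = [a.text for a in root.iter('a')]
print(titles)
# ['No One Else On Earth']
```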

    --
    Terry Jan Reedy
    Terry Reedy, Mar 7, 2012
    #15
  16. John Salerno

    Terry Reedy Guest

    On 3/6/2012 6:57 PM, John Salerno wrote:
    >> Also, you're still double-posting.

    >
    > Grr. I just reported it to Google, but I think if I start to frequent
    > the newsgroup again I'll have to switch to Thunderbird, or perhaps
    > I'll just try switching back to the old Google Groups interface. I
    > think the issue is the new interface.


    I am not seeing the double posting, but I use Thunderbird + the
    news.gmane.org mirrors of python-list and others.

    --
    Terry Jan Reedy
    Terry Reedy, Mar 7, 2012
    #16
  17. John Salerno

    Roy Smith Guest

    In article
    <12783654.1174.1331073814011.JavaMail.geo-discussion-forums@yner4>,
    John Salerno <> wrote:

    > I sort of have to work with what the website gives me (as you'll see below),
    > but today I encountered an exception to my RE. Let me just give all the
    > specific information first. The point of my script is to go to the specified
    > URL and extract song information from it.


    Rule #1: Don't try to parse XML, HTML, or any other kind of ML with
    regular expressions.

    Rule #2: Use a dedicated ML parser. I like lxml (http://lxml.de/).
    There's other possibilities.

    Rule #3: If in doubt, see rule #1.
    Roy Smith, Mar 7, 2012
    #17
  18. John Salerno

    John Salerno Guest

    After a bit of reading, I've decided to use Beautiful Soup 4, with
    lxml as the parser. I considered simply using lxml to do all the work,
    but I just got lost in the documentation and tutorials. I couldn't
    find a clear explanation of how to parse an HTML file and then
    navigate its structure.

    The Beautiful Soup 4 documentation was very clear, and BS4 itself is
    so simple and Pythonic. And best of all, since version 4 no longer
    does the parsing itself, you can choose your own parser, and it works
    with lxml, so I'll still be using lxml, but with a nice, clean overlay
    for navigating the tree structure.
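    For what it's worth, a minimal sketch of that setup (class names taken from the page snippets quoted earlier in the thread; 'html.parser' is shown here, with 'lxml' as a drop-in replacement once it's installed):

```python
from bs4 import BeautifulSoup   # third-party; installable as beautifulsoup4

# Sketch only: the div class names come from the page fragments quoted
# above. Pass 'lxml' instead of 'html.parser' to use lxml as the backend.
html = ('<li><div class="cmPlaylistTime">4:21 p.m.</div>'
        '<div class="cmPlaylistContent"><strong>'
        '<a href="/lsp/t7672/">No One Else On Earth</a></strong><br />'
        '<a href="/lsp/a1924/">Wynonna</a><br /></div></li>')

soup = BeautifulSoup(html, 'html.parser')
pairs = []
for content in soup.find_all('div', class_='cmPlaylistContent'):
    links = content.find_all('a')
    if len(links) == 2:          # only keep entries with song AND artist
        pairs.append(tuple(a.get_text() for a in links))
print(pairs)
# [('No One Else On Earth', 'Wynonna')]
```

    Requiring exactly two links per content div also sidesteps the "AP TX SOC CPAS TRF" anomaly from the original question.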

    Thanks for the advice!
    John Salerno, Mar 7, 2012
    #18
  19. John Salerno

    Paul Rubin Guest

    John Salerno <> writes:
    > The Beautiful Soup 4 documentation was very clear, and BS4 itself is
    > so simple and Pythonic. And best of all, since version 4 no longer
    > does the parsing itself, you can choose your own parser, and it works
    > with lxml, so I'll still be using lxml, but with a nice, clean overlay
    > for navigating the tree structure.


    I haven't used BS4 but have made good use of earlier versions.

    Main thing to understand is that an awful lot of HTML in the real world
    is malformed and will break an XML parser or anything that expects
    syntactically valid HTML. People tend to write HTML that works well
    enough to render decently in browsers, whose parsers therefore have to
    be tolerant of bad errors. Beautiful Soup also tries to make sense of
    crappy, malformed HTML. Partly as a result, it's dog slow compared to
    any serious XML parser. But it works very well if you don't mind the
    low speed.
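    A small stdlib-only illustration of that point: a strict XML parser rejects typical real-world markup outright, while a lenient HTML parser just reports the tags it sees.

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

bad = '<div><br><b>unclosed bold</div>'   # typical real-world markup

# A strict XML parser rejects the mismatched tags outright...
try:
    ET.fromstring(bad)
    strict_ok = True
except ET.ParseError:
    strict_ok = False

# ...while a lenient HTML parser simply streams out what it sees.
class TagLogger(HTMLParser):
    def __init__(self):
        super().__init__()
        self.seen = []

    def handle_starttag(self, tag, attrs):
        self.seen.append(tag)

t = TagLogger()
t.feed(bad)
print(strict_ok, t.seen)
# False ['div', 'br', 'b']
```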
    Paul Rubin, Mar 7, 2012
    #19
  20. John Salerno

    John Salerno Guest

    Ok, first major roadblock. I have no idea how to install Beautiful
    Soup or lxml on Windows! All I can find are .tar files. Based on what
    I've read, I can use the easy_setup module to install these types of
    files, but when I went to download the setuptools package, it only
    seemed to support Python 2.7. I'm using 3.2. Is 2.7 just the minimum
    version it requires? It didn't say something like "2.7+", so I wasn't
    sure, and I don't want to start installing a bunch of stuff that will
    clog up my directories and not even work.

    What's the best way for me to install these two packages? I've also
    seen a reference to using setup.py...is that a separate package too,
    or is that something that comes with Python by default?

    Thanks.
    John Salerno, Mar 7, 2012
    #20
