Convert AWK regex to Python

Discussion in 'Python' started by J, May 16, 2011.

  1. J

    J Guest

    Good morning all,
    Wondering if you could please help me with the following query:-
    I just started learning Python last weekend, after a colleague of mine showed me how to dramatically cut the time a Bash script takes to execute by rewriting it in Python. I was amazed at how fast it ran, and I would now like to do the same thing with another script I have.

    This other script reads a log file and, using AWK, filters certain fields from the log and writes them to a new file. Below is the pipeline the script is executing. I would like to rewrite it in Python, as the script currently takes about 1 hour to execute on a log file with about 100,000 lines. I would like to cut this time down as much as possible.

    cat logs/pdu_log_fe.log | awk -F\- '{print $1,$NF}' | awk -F\. '{print $1,$NF}' | awk '{print $1,$4,$5}' | sort | uniq | while read service command status; do echo "Service: $service, Command: $command, Status: $status, Occurrences: `grep $service logs/pdu_log_fe.log | grep $command | grep $status |wc -l | awk '{ print $1 }'`" >> logs/pdu_log_fe_clean.log; done

    This AWK command gets lines which look like this:-

    2011-05-16 09:46:22,361 [Thread-4847133] PDU D <G_CC_SMS_SERVICE_51408_656.O_ CC_SMS_SERVICE_51408_656-ServerThread-VASPSessionThread-7ee35fb0-7e87-11e0-a2da-00238bce423b-TRX - 2011-05-16 09:46:22 - OUT - (submit_resp: (pdu: L: 53 ID: 80000004 Status: 0 SN: 25866) 98053090-7f90-11e0-a2da-00238bce423b (opt: ) ) >

    And outputs lines like this:-

    CC_SMS_SERVICE_51408 submit_resp: 0

    I have tried writing the Python script myself but I am getting stuck writing the regex. So far I have the following:-

    #!/usr/bin/python

    # Import RegEx module
    import re as regex
    # Log file to work on
    filetoread = open('/tmp/pdu_log.log', "r")
    # File to write output to
    filetowrite = file('/tmp/pdu_log_clean.log', "w")
    # Perform filtering in the log file
    linetoread = filetoread.readlines()
    for line in linetoread:
        filter0 = regex.sub(r"<G_","",line)
        filter1 = regex.sub(r"\."," ",filter0)
        # Write new log file
        filetowrite.write(filter1)
    filetowrite.close()
    # Read new log and get required fields from it
    filtered_log = open('/tmp/pdu_log_clean.log', "r")
    filtered_line = filtered_log.readlines()
    for line in filtered_line:
        token = line.split(" ")
        print token[0], token[1], token[5], token[13], token[20]
    print "Done"

    Ugly, I know, but please bear in mind that I started learning Python just two days ago.

    I have been looking on this group and on the Internet for snippets of code that I could use but so far what I have found do not fit my needs or are too complicated (at least for me).

    Any suggestion, advice you can give me on how to accomplish this task will be greatly appreciated.

    On another note, can you also recommend a good no-nonsense book to learn Python? I have read the book “A Byte of Python” by Swaroop C H (great introductory book!) and I am now reading “Dive into Python” by Mark Pilgrim. I am looking for a book that explains things in simple terms and goes straight to the point (similar to how “A Byte of Python” was written).

    Thanks in advance

    Kind regards,

    Junior
     
    J, May 16, 2011
    #1

  2. On Mon, May 16, 2011 at 6:19 PM, J <> wrote:
    > cat logs/pdu_log_fe.log | awk -F\- '{print $1,$NF}' | awk -F\. '{print $1,$NF}' | awk '{print $1,$4,$5}' | sort | uniq | while read service command status; do echo "Service: $service, Command: $command, Status: $status, Occurrences: `grep $service logs/pdu_log_fe.log | grep $command | grep $status| wc -l | awk '{ print $1 }'`" >> logs/pdu_log_fe_clean.log; done


    Small side point: Instead of "| sort | uniq |", you could use a Python
    dictionary. That'll likely speed things up somewhat!

    Chris Angelico
     
    Chris Angelico, May 16, 2011
    #2

  3. On Mon, May 16, 2011 at 6:43 PM, J <> wrote:
    > Good morning Angelico,
    > Do I understand correctly? Do you mean incorporating a Python dict inside the AWK command? How can I do this?


    No, inside Python. What I mean is that you can achieve the same
    uniqueness requirement by simply storing the intermediate data in a
    dictionary and then retrieving it at the end.
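    A minimal sketch of that idea in modern Python (the sample lines and field layout here are invented; the split would have to be adapted to the real log format):

```python
from collections import defaultdict

# Count (service, command, status) triples in a single pass.
# The sample lines below are illustrative only.
counts = defaultdict(int)
sample = [
    "CC_SMS_SERVICE_51408 submit_resp: 0",
    "CC_SMS_SERVICE_51408 submit_resp: 0",
    "CC_SMS_SERVICE_51409 deliver_sm: 1",
]
for line in sample:
    service, command, status = line.split()
    counts[(service, command, status)] += 1

# One pass over the dict replaces the per-line grep|grep|grep|wc rescan.
for (service, command, status), n in sorted(counts.items()):
    print("Service: %s, Command: %s, Status: %s, Occurrences: %d"
          % (service, command, status, n))
```

    Because dict keys are unique, the sort | uniq step disappears and the log is read only once.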

    Chris Angelico
     
    Chris Angelico, May 16, 2011
    #3
  4. Peter Otten

    Peter Otten Guest

    J wrote:

    > Good morning all,
    > Wondering if you could please help me with the following query:-
    > I have just started learning Python last weekend after a colleague of mine
    > showed me how to dramatically cut the time a Bash script takes to execute
    > by re-writing it in Python. I was amazed at how fast it ran. I would now
    > like to do the same thing with another script I have.
    >
    > This other script reads a log file and using AWK it filters certain fields
    > from the log and writes them to a new file. See below the regex the
    > script is executing. I would like to re-write this regex in Python as my
    > script is currently taking about 1 hour to execute on a log file with
    > about 100,000 lines. I would like to cut this time down as much as
    > possible.
    >
    > cat logs/pdu_log_fe.log | awk -F\- '{print $1,$NF}' | awk -F\. '{print
    > $1,$NF}' | awk '{print $1,$4,$5}' | sort | uniq | while read service
    > command status; do echo "Service: $service, Command: $command, Status:
    > $status, Occurrences: `grep $service logs/pdu_log_fe.log | grep $command |
    > grep $status | wc -l | awk '{ print $1 }'`" >> logs/pdu_log_fe_clean.log;
    > done
    >
    > This AWK command gets lines which look like this:-
    >
    > 2011-05-16 09:46:22,361 [Thread-4847133] PDU D
    > <G_CC_SMS_SERVICE_51408_656.O_
    > CC_SMS_SERVICE_51408_656-ServerThread-VASPSessionThread-7ee35fb0-7e87-11e0-a2da-00238bce423b-TRX
    > - 2011-05-16 09:46:22 - OUT - (submit_resp: (pdu: L: 53 ID: 80000004
    > Status: 0 SN: 25866) 98053090-7f90-11e0-a2da-00238bce423b (opt: ) ) >
    >
    > And outputs lines like this:-
    >
    > CC_SMS_SERVICE_51408 submit_resp: 0
    >
    > I have tried writing the Python script myself but I am getting stuck
    > writing the regex. So far I have the following:-


    For the moment forget about the implementation. The first thing you should
    do is to describe the problem as clearly as possible, in plain English.
     
    Peter Otten, May 16, 2011
    #4
  5. J <> writes:

    > cat logs/pdu_log_fe.log | awk -F\- '{print $1,$NF}' | awk -F\. '{print $1,$NF}' | awk '{print $1,$4,$5}' | sort | uniq | while read service command status; do echo "Service: $service, Command: $command, Status: $status, Occurrences: `grep $service logs/pdu_log_fe.log | grep $command | grep $status | wc -l | awk '{ print $1 }'`" >> logs/pdu_log_fe_clean.log; done
    >
    > This AWK command gets lines which look like this:-
    >
    > 2011-05-16 09:46:22,361 [Thread-4847133] PDU D <G_CC_SMS_SERVICE_51408_656.O_ CC_SMS_SERVICE_51408_656-ServerThread-VASPSessionThread-7ee35fb0-7e87-11e0-a2da-00238bce423b-TRX - 2011-05-16 09:46:22 - OUT - (submit_resp: (pdu: L: 53 ID: 80000004 Status: 0 SN: 25866) 98053090-7f90-11e0-a2da-00238bce423b (opt: ) ) >
    >
    > And outputs lines like this:-
    >
    > CC_SMS_SERVICE_51408 submit_resp: 0
    >


    i see some discrepancies in the description of your problem

    1. if i echo a properly quoted line "like this" above in the pipeline
    formed by the first three awk commands i get

    $ echo $likethis | awk -F\- '{print $1,$NF}' \
    | awk -F\. '{print$1,$NF}' \
    | awk '{print $1,$4,$5}'
    2011 ) )
    $
    not a triple 'service command status'

    2. with regard to the final product, your script outputs lines like in

    echo "Service: $service, [...]"

    and you say that it produces lines like

    CC_SMS_SERVICE_51408 submit_resp:


    WHATEVER, the enormous run time is due to the fact that for every
    output line you rescan the whole log file again and again

    IF i had understood what you want, imho you should run your data
    through sort and uniq -c

    $ awk -F\- '{print $1,$NF}' < $file \
    | awk -F\. '{print$1,$NF}' \
    | awk '{print $1,$4,$5}' | sort | uniq -c | format_program

    uniq -c drops repeated lines from a sorted input AND prepends to each
    line the count of equal lines in the original stream
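    The same sort | uniq -c step can be sketched in Python with collections.Counter (sample data invented here):

```python
from collections import Counter

# Counter gives unique lines plus their counts, like sort | uniq -c,
# but without needing the input to be sorted first.
lines = [
    "CC_SMS_SERVICE_51408 submit_resp: 0",
    "CC_SMS_SERVICE_51408 submit_resp: 0",
    "CC_SMS_SERVICE_51409 deliver_sm: 1",
]
for line, count in sorted(Counter(lines).items()):
    print(count, line)
```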

    hth
    g
     
    Giacomo Boffi, May 16, 2011
    #5
  6. Matt Berends

    Matt Berends Guest

    Matt Berends, May 16, 2011
    #6
  7. MRAB

    MRAB Guest

    On 16/05/2011 09:19, J wrote:
    [snip]
    > #!/usr/bin/python
    >
    > # Import RegEx module
    > import re as regex
    > # Log file to work on
    > filetoread = open('/tmp/pdu_log.log', "r")
    > # File to write output to
    > filetowrite = file('/tmp/pdu_log_clean.log', "w")
    > # Perform filtering in the log file
    > linetoread = filetoread.readlines()
    > for line in linetoread:
    >     filter0 = regex.sub(r"<G_","",line)
    >     filter1 = regex.sub(r"\."," ",filter0)
    >     # Write new log file
    >     filetowrite.write(filter1)
    > filetowrite.close()
    > # Read new log and get required fields from it
    > filtered_log = open('/tmp/pdu_log_clean.log', "r")
    > filtered_line = filtered_log.readlines()
    > for line in filtered_line:
    >     token = line.split(" ")
    >     print token[0], token[1], token[5], token[13], token[20]
    > print "Done"
    >
    >

    [snip]

    If you don't need the power of regex, it's faster to use string methods:

    filter0 = line.replace("<G_", "")
    filter1 = filter0.replace(".", " ")

    Actually, seeing as how you're reading all the lines in one go anyway,
    it's probably faster to do this instead:

    text = filetoread.read()
    text = text.replace("<G_", "")
    text = text.replace(".", " ")
    # Write new log file
    filetowrite.write(text)
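    Putting that together, the intermediate file can be skipped entirely. A sketch in modern Python on an in-memory sample (in the real script the text would come from the log file, and the token indices depend on the actual layout):

```python
# Whole-text replace, then split -- no regex, no intermediate file.
# The sample line is a trimmed version of the one in the original post.
text = ("2011-05-16 09:46:22,361 [Thread-4847133] PDU D "
        "<G_CC_SMS_SERVICE_51408_656.O_CC_SMS_SERVICE_51408_656-TRX\n")

text = text.replace("<G_", "").replace(".", " ")

for line in text.splitlines():
    token = line.split(" ")
    print(token[0], token[1], token[5])  # indices depend on the real log
```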
     
    MRAB, May 16, 2011
    #7
