TOO SLOW: `echo ${LINE} | sed -n 's/^.* proto //p' | cut -f1 -d" "`


secheese

I have a script that monitors a firewall drop log file and I need to
pull the protocol fields. I used to know exactly where this field
was, so I could easily get the field with this statement:

PROTOCOL=`awk '{print $5}'`

But now the logs are dynamic and the field can be anywhere. One thing
I do know is that the protocol field always follows a field labelled
"proto". Thus the follow command gets it for me:

PROTOCOL=`echo ${LINE} | sed -n 's/^.* proto //p' | cut -f1 -d" "`
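
For illustration, here is roughly what that does on a made-up sample line (the exact log format below is an assumption; only the " proto " field matters):

LINE='Oct 12 09:14:55 fw1 DROP: IN=eth0 SRC=10.0.0.5 DST=10.0.0.9 proto tcp DPT=443'
echo ${LINE} | sed -n 's/^.* proto //p' | cut -f1 -d" "    # prints: tcp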

Trouble is, this command takes about 10 times as long to run as the
awk did. The result is that the execution time for my script overall
has gone from about 1 minute to 10 minutes.

Can anyone think of a faster way to get the job done? BTW, perl is
available, but I'm unfamiliar with the language.

Thanks.
 

Ben

secheese said:
PROTOCOL=`echo ${LINE} | sed -n 's/^.* proto //p' | cut -f1 -d" "`

Trouble is, this command takes about 10 times as long to run as the awk did.

From a POSIX-type shell:
PROTOCOL=${LINE#* proto }    # strip everything up to and including ' proto '
PROTOCOL=${PROTOCOL%% *}     # keep only the first remaining field (drop the rest)
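
In context that could look something like this (a sketch only; the read loop and the file name firewall_logfile are assumptions about how your script is structured):

while read LINE
do
    PROTOCOL=${LINE#* proto }    # strip everything up to and including ' proto '
    PROTOCOL=${PROTOCOL%% *}     # keep only the first remaining field
    echo "protocol: ${PROTOCOL}"
done < firewall_logfile

Because both expansions happen inside the shell, nothing is forked per line, which is where the echo|sed|cut pipeline was losing its time.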

regards,
Ben
 

rakesh sharma

secheese said:
I have a script that monitors a firewall drop log file and I need to
pull the protocol fields. I used to know exactly where this field
was, so I could easily get the field with this statement:

PROTOCOL=`awk '{print $5}'`

But now the logs are dynamic and the field can be anywhere. One thing
I do know is that the protocol field always follows a field labelled
"proto". Thus the follow command gets it for me:

PROTOCOL=`echo ${LINE} | sed -n 's/^.* proto //p' | cut -f1 -d" "`

Trouble is, this command takes about 10 times as long to run as the
awk did. The result is that the execution time for my script overall
has gone from about 1 minute to 10 minutes.

Can anyone think of a faster way to get the job done? BTW, perl is
available, but I'm unfamiliar with the language.

I don't think the slowdown is because of sed; it's probably due to the ${LINE}
variable (I guess you are using the shell to loop through the file).
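
If the script is doing something like the sketch below (an assumption, since the full script wasn't posted), every iteration forks echo, sed and cut, and those per-line forks are where the time goes:

while read LINE
do
    PROTOCOL=`echo ${LINE} | sed -n 's/^.* proto //p' | cut -f1 -d" "`
    ### process ${PROTOCOL}
done < firewall_logfile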

What you can do instead is process the whole file in one pass:

sed -ne '/ proto /s/^.* proto *\([^ ]*\).*$/\1/p' firewall_logfile

or with perl:

perl -wlane '/\sproto\s+\S/ && do {
    shift @F until $F[0] eq "proto";
    print $F[1];
}' firewall_logfile
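
Either way the file is handled by a single process. If it helps, the output could be fed back into the rest of the script along these lines (just a sketch; the loop body is a placeholder):

sed -ne '/ proto /s/^.* proto *\([^ ]*\).*$/\1/p' firewall_logfile |
while read PROTOCOL
do
    ### process ${PROTOCOL}
done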
 

Carlton Brown

secheese said:
I have a script that monitors a firewall drop log file and I need to
pull the protocol fields. I used to know exactly where this field
was, so I could easily get the field with this statement:

PROTOCOL=`awk '{print $5}'`

But now the logs are dynamic and the field can be anywhere. One thing
I do know is that the protocol field always follows a field labelled
"proto". Thus the follow command gets it for me:

PROTOCOL=`echo ${LINE} | sed -n 's/^.* proto //p' | cut -f1 -d" "`

while (<INPUTFILE>) {
    # grab the word that follows 'proto', if the line has one
    @pname = $_ =~ /proto\s+(\w+)/;
    print "I found a protocol named: $pname[0]\n";
}

This works if you've correctly specified your file handles. That,
along with any customizations you may consider asking about next,
is intentionally left blank as an opportunity for self-study.
 

Juergen Heck

secheese said:
I have a script that monitors a firewall drop log file and I need to
pull the protocol fields. I used to know exactly where this field
was, so I could easily get the field with this statement:

PROTOCOL=`awk '{print $5}'`

But now the logs are dynamic and the field can be anywhere. One thing
I do know is that the protocol field always follows a field labelled
"proto". Thus the follow command gets it for me:

PROTOCOL=`echo ${LINE} | sed -n 's/^.* proto //p' | cut -f1 -d" "`

Trouble is, this command takes about 10 times as long to run as the
awk did. The result is that the execution time for my script overall
has gone from about 1 minute to 10 minutes.

Can anyone think of a faster way to get the job done? BTW, perl is
available, but I'm unfamiliar with the language.

Thanks.

PROTOCOL=`awk '{sub(/.* proto /,"") ; print $1}' yourlogfile`
for entry in $PROTOCOL
do
### process $entry
done

or

awk '{sub(/.* proto /,"") ; print $1}' yourlogfile | while read entry
do
###process $entry
done


Regards
Juergen
 

secheese

Your suggestion worked beautifully! The speed is incredible now; I
guess using built-in shell code is always a better choice than
calling external commands. I never even considered these constructs;
to be honest, I had forgotten they existed.

Thanks.
 
