Counting occurrences using a variable

jesse

I have a master file that gets added to every day, with backup
failures. The file has the server name and what failed. I want to
open the file, read the first line, then check the rest of the file
for any more occurrences of that exact failure. Then I want to move
to the second line and repeat the process until I have read through
the entire file. As I move down the file I want to ignore lines that
have already been counted. I would also like to be able to ignore
lines if the failure didn't occur on the day I am checking for, i.e.
if it failed today but didn't fail tomorrow then I don't want it to
show up on my report (this is a nice to have, but it's not
mandatory). The file looks like this:

server2 c:\\
server1 d:\\
server5 e:\\
server1 d:\\

This is the meat of the script that I have, which of course is not
working right:

while (<FAILED>) {
    chomp;
    s/#.*//;
    next if /^(\s)*$/;
    push @failures, $_;
    foreach $elem ( @failures ) {
        next if $seen{$elem}++;
        print OUTPUT "$elem has failed $seen{$elem} times\n";
    }
}
 
usenet

So what's the problem with your code? You don't say, but I assume the
problem is that the failure count is wrong (always shows 1). That's
because you increment $seen{$elem} but next() out of the loop if the
value was already defined, so it only ever prints one message per
failure.

On top of that, the value of $seen{$elem} is being artificially
inflated, because you loop over all of @failures (incrementing the
counts) every time a new failure is read.

If you want to correctly report the total number of failures, you
must analyze all the data and THEN print the results. You are
printing on the fly, which will only ever report the first occurrence
of each failure.
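The accumulate-first-then-report approach described above can be
sketched like this. This is a rewrite of jesse's loop, not his exact
script: the FAILED and OUTPUT handles are replaced with an in-memory
filehandle and STDOUT so the sketch is self-contained.

```perl
use strict;
use warnings;

# Stand-in for the FAILED file, using the sample data from the post.
my $failures = <<'END';
server2 c:\\
server1 d:\\
server5 e:\\
server1 d:\\
END

open my $fh, '<', \$failures or die "can't open in-memory file: $!";

# First pass: accumulate counts only.  The comment stripping and
# blank-line skip are kept from the original loop.
my %seen;
while ( my $line = <$fh> ) {
    chomp $line;
    $line =~ s/#.*//;
    next if $line =~ /^\s*$/;
    $seen{$line}++;
}

# Second pass: all the data has been read, so each distinct failure
# is reported exactly once, with its final count.
for my $failure ( sort keys %seen ) {
    print "$failure has failed $seen{$failure} times\n";
}
```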
 
anno4000

jesse said:
[...]
This is the meat of the script that I have that of course is not
working right;

while (<FAILED>) {
    chomp;
    s/#.*//;
    next if /^(\s)*$/;
    push @failures, $_;
    foreach $elem ( @failures ) {
        next if $seen{$elem}++;
        print OUTPUT "$elem has failed $seen{$elem} times\n";
    }
}

That's a job for a hash. Read the file, accumulating the counts. Then
print a report:

my %count;
while ( <DATA> ) {
    chomp;
    ++ $count{ $_ };
}

printf(
    "%s has failed %d time%s\n",
    $_,
    $count{ $_ },
    $count{ $_ } == 1 ? '' : 's',    # grammar matters!
) for sort keys %count;

__DATA__
server2 c:\\
server1 d:\\
server5 e:\\
server1 d:\\

Anno
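The "nice to have" from the original question -- only counting
failures that occurred on the day being reported on -- is not
addressed by either reply. One way to do it, assuming (and this is an
assumption; the posted file has no dates) that each line gained a
leading date column, would be to filter before counting:

```perl
use strict;
use warnings;

# Assumed input format: "YYYY-MM-DD server drive".  The date column
# does not exist in jesse's real file, so this is illustrative only.
my $failures = <<'END';
2007-05-01 server2 c:\\
2007-04-30 server1 d:\\
2007-05-01 server5 e:\\
2007-05-01 server1 d:\\
2007-05-01 server1 d:\\
END

my $want_day = '2007-05-01';    # the day being reported on

open my $fh, '<', \$failures or die "can't open in-memory file: $!";

my %count;
while ( my $line = <$fh> ) {
    chomp $line;
    my ( $day, $failure ) = split ' ', $line, 2;
    next unless defined $failure and $day eq $want_day;
    ++$count{$failure};
}

printf "%s has failed %d time%s\n",
    $_, $count{$_}, $count{$_} == 1 ? '' : 's'
    for sort keys %count;
```

With this filter, the 2007-04-30 failure is skipped entirely, so a
failure that stopped recurring drops off the report for later days.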
 
