Hello all,
I need to parse a log file and generate formatted output.
I have a solution in Perl, but now need to port it to Java.
Could anyone please advise how I should go about it?
I have a log file in the following format, which contains info on a
series of files after a process.
========================================================
File1: Info. on File1
File2: Info. on File2
File1: Info. on File1
File3: Info. on File3
File1: Info. on File1
and so on...
========================================================
I want to display the output as:
============================
n1 lines of info on File1
n2 lines of info on File2
n3 lines of info on File3
============================
Is this the way to do it in Java?
1) Process the log file one line at a time.
2) First find all files mentioned in the log and store the list in
an array(?).
3) Reduce that array to its unique entries.
4) Loop through the unique file list and, for each file, find the
matching lines in the original log, keeping track of the count (or
printing it).
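For comparison, the four steps above can be collapsed into a single pass in Java using a Map from file name to count. This is only a minimal sketch: the file name Foo.txt and the "FileN: info" line format are taken from the example above, and everything else is an assumption.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

public class LogCount {
    public static void main(String[] args) throws IOException {
        // Map each file-name prefix to its line count; LinkedHashMap
        // preserves first-seen order for the final report.
        Map<String, Integer> counts = new LinkedHashMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader("Foo.txt"))) {
            String line;
            while ((line = in.readLine()) != null) {
                int colon = line.indexOf(':');
                if (colon < 0) continue;               // skip lines without a "File:" prefix
                String file = line.substring(0, colon);
                counts.merge(file, 1, Integer::sum);   // increment, starting at 1 if absent
            }
        }
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            System.out.println(e.getValue() + " lines of info on " + e.getKey());
        }
    }
}
```

Note that this version never needs a separate "uniq" step: a Map key exists exactly once by definition, so counting and deduplication happen together.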
Questions:
1) How does one find the "unique" elements of an array in Java?
2) What's the difference between an array, a Vector, a HashMap, etc.?
Is there a rule of thumb about when one should use which collection?
Why can't one just have an array or a hash (as in Perl) and leave the
rest to the user to make of it what they will; why these gazillion choices?
3) I understand my solution is a shell-scripting or Perl-ish way of
doing things; what would be the Java way to do this?
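On question 1, a small sketch: a Set holds each element at most once, so wrapping an array in one drops duplicates without any hand-rolled uniq. The classes below are standard java.util; the sample data is just the file names from the example log.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class UniqueDemo {
    public static void main(String[] args) {
        String[] files = {"File1", "File2", "File1", "File3", "File1"};
        // A Set keeps each element at most once; LinkedHashSet also
        // preserves the order in which elements were first seen.
        Set<String> unique = new LinkedHashSet<>(Arrays.asList(files));
        System.out.println(unique); // [File1, File2, File3]
    }
}
```

As a rough rule of thumb: arrays are fixed-size; ArrayList (or the older, synchronized Vector) is a growable sequence; HashSet/LinkedHashSet deduplicate; HashMap is the closest analogue of a Perl hash.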
Thanks,
Prab
===============================================================================
For what it's worth, this is my solution in Perl.
#!/usr/local/bin/perl
use strict;
use warnings;
#=====================
# Log file is Foo.txt
#---------------------
open(my $fdl, '<', 'Foo.txt') or die "Cannot open Foo.txt: $!";
chomp(my @arr = <$fdl>);
close($fdl);
#=================================
# First, get the files in the log
#---------------------------------
my @files;
foreach my $line ( @arr ) {
    push(@files, (split(/:/, $line))[0]);
}
#==========================================
# Sort the files, find the unique files.
# For each such file, grep the original log
# for all occurrences and count them.
#------------------------------------------
foreach my $file ( uniq(sort @files) ) {
    # grep in scalar context returns the number of matching lines
    my $info = grep { /^\Q$file\E:/ } @arr;
    print "$info lines of info on $file\n";
}
#==============================
# subroutine to do a Unixy uniq
#------------------------------
sub uniq {
    my @uniq = @_;
    #=======================================================
    # Compare each array element with its predecessor.
    # If equal, it is a duplicate: splice it from the array.
    #=======================================================
    for ( my $i = 1; $i < @uniq; $i++ ) {
        if ( $uniq[$i] eq $uniq[$i-1] ) {
            splice(@uniq, $i-1, 1);
            $i--;
        }
    }
    return @uniq;
}