Stu
I am reading through a file that is 2,432 lines long with a record length of
450 bytes. I want to get the first 8 bytes from each line and stick them
into an array. When I execute the following piece of code it takes 7 to
8 seconds.
In comparison, when I run cut -c1-8 < data from MKS, it takes
one second on the same data.
Does anybody know how these lines of code can be optimized?
BTW, I am using ActivePerl on a Windows 2003 platform, but that should
not make a difference.
print scalar ( localtime() ) . "\n";
while ($NextLine = <INPUT>)
{
    @docidarray = (@docidarray, substr($NextLine, 0, 8));
}
print scalar ( localtime() ) . "\n";
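For what it's worth, the likely culprit is the assignment inside the loop: @docidarray = (@docidarray, ...) rebuilds and copies the whole array on every iteration, which is quadratic over 2,432 lines, whereas push appends in place. A minimal sketch of that change, using an in-memory string in place of the real INPUT file so it runs standalone (the sample records here are made up):

```perl
use strict;
use warnings;

# Hypothetical two-record stand-in for the 2,432-line, 450-byte-record file.
my $data = "ABCD1234 rest of a 450-byte record...\n"
         . "EFGH5678 another record...\n";
open my $input, '<', \$data or die "open: $!";

my @docidarray;
while ( my $NextLine = <$input> ) {
    # push appends in place; assigning @docidarray = (@docidarray, ...)
    # copies the entire array on every pass through the loop.
    push @docidarray, substr( $NextLine, 0, 8 );
}
close $input;

print "@docidarray\n";    # ABCD1234 EFGH5678
```

With a one-element change like this the loop itself stays the same shape, so the rest of the script should not need to change.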