Mike Deskevich
i have a quick (hopefully) question for the perl gurus out there. i
have a bunch of data files that i need to read in and process. the
data files are simple: two columns of floating-point numbers, but a
file can range from 1,000 to 10,000 lines. i need to save the data in
arrays for post-processing, so i can't just read a line and throw the
data away. my main question: is there a faster way to read the data
than how i'm currently doing it? (i'm a c programmer, so i'm sure i'm
not using perl as efficiently as i could.)
here's how i read my data files:
$ct = 0;
while (<DATAFILE>) {
    # split on whitespace into the two column arrays
    ($xvalue[$ct], $yvalue[$ct]) = split;
    $ct++;
}
# do stuff with @xvalue and @yvalue
is there a more efficient way to read in two columns of numbers? it
turns out that i have a series of these data files to process, and i
think most of the time is going to either perl start-up or data
reading; the post-processing happens pretty fast (i think).
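for reference, here is a minimal self-contained sketch of one commonly suggested alternative: using push instead of a hand-maintained counter, and a lexical filehandle. the file name `data.txt` and the two sample rows are made up for illustration; any whitespace-separated two-column file works the same way.

```perl
use strict;
use warnings;

# hypothetical sample file so the sketch runs on its own
my $file = 'data.txt';
open my $out, '>', $file or die "can't write $file: $!";
print $out "1.0 2.5\n3.5 4.25\n";
close $out;

my (@xvalue, @yvalue);
open my $in, '<', $file or die "can't open $file: $!";
while (my $line = <$in>) {
    # split on whitespace; push grows the arrays, no counter needed
    my ($x, $y) = split ' ', $line;
    push @xvalue, $x;
    push @yvalue, $y;
}
close $in;

print "read ", scalar(@xvalue), " rows\n";
unlink $file;
```

whether this is actually faster than indexing with $ct is worth benchmarking on real files; the main win is that it's harder to get the bookkeeping wrong.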
thanks,
mike