I have a very simple script to read an input file, skip everything
between the strings "BEGIN" and "END", and store the results in an
output
file.
----------------------------
 3  open(IN,"input.txt");
 4  open(OUT,">output.txt");
 5
 6  while (<IN>) {
 7      chomp;
 8      if (/BEGIN/) {
 9          until (/END/) {
10              $_ = <IN>;
11          }
12      } else {
13          print OUT "$_\n";
14      }
15  }
16
17  close IN;
18  close OUT;
----------------------------------
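(Editor's note: one thing worth checking right away is that neither open() is tested for success. If the open of the 3 GB file fails, IN is never opened, and while(<IN>) then produces exactly a "Read on closed filehandle <IN>" warning. Below is a defensive sketch, not the poster's original code, using the same file names; it adds error checking and guards the inner loop against hitting end-of-file before an END line is found.)

```shell
cat > trim_checked.pl <<'PL'
use strict;
use warnings;

# Check that each open() actually succeeded; a silently failed open
# leaves IN closed, and while(<IN>) then warns "Read on closed filehandle".
open(IN,  "<", "input.txt")  or die "Can't open input.txt: $!";
open(OUT, ">", "output.txt") or die "Can't open output.txt: $!";

while (<IN>) {
    if (/BEGIN/) {
        # Guard against EOF: if no END line follows, <IN> returns undef
        # and a bare until (/END/) would loop forever.
        while (defined($_ = <IN>)) {
            last if /END/;
        }
    } else {
        print OUT $_;
    }
}

close IN;
close OUT;
PL

# Tiny sample input to exercise the script.
printf 'keep1\nBEGIN\nskip1\nEND\nkeep2\n' > input.txt
perl trim_checked.pl
cat output.txt
```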
When I run this on a small file, about 28 KB, it works fine. When I
try it on a large file, about 3 GB, I get the error:
"Read on closed filehandle <IN> at trim.pl line 6."
The file structure is essentially the same; the only difference is the
size.
I bump into limits in many text editors when I try to open the large
file, and I assume this is a similar limitation in Perl. Is there any
way around this?
I also wouldn't mind some constructive criticism on the script itself.
It seems like there should be a more elegant way to accomplish this
task, which might avoid the problem.
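(Editor's note: on elegance, Perl's range (flip-flop) operator is the usual idiom for skipping a block of lines between two markers. A sketch, assuming every BEGIN is eventually followed by an END; like the script above, it drops the BEGIN and END lines themselves:)

```shell
# Sample input with one BEGIN/END block to skip.
printf 'keep1\nBEGIN\nskip1\nskip2\nEND\nkeep2\n' > input.txt

# /BEGIN/ .. /END/ is true from the BEGIN line through the END line,
# so "print unless" drops that whole block and keeps everything else.
perl -ne 'print unless /BEGIN/ .. /END/' input.txt > output.txt

cat output.txt
```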
Any suggestions?
Thanks,
Russ