Actually, that gives you a handle onto the compressed data stream, not the
uncompressed one. I'm sure there is a way to do this with the tools
listed under "Low-level member data reading", but at this point I'd just
punt: extract it to a temp file and re-read it from that.
Thank you.
However, a temp file would defeat the purpose.
This is what I'm doing with the $ZippedFile:
use Archive::Zip;
my $Zip = Archive::Zip->new("$Folder/$ZippedFile")
    or die "can't read $Folder/$ZippedFile";
my ($Dictionary) = $Zip->members();
--and the result is exactly the same
as if I'd done this with the $UnzippedFile:
open my $fh, '<', "$Folder/$UnzippedFile" or die "can't open: $!";
my $Dictionary = do { local $/; <$fh> };  # slurp the whole file
close $fh;
EXCEPT that the loading is about 4 times faster
from the $ZippedFile than from the unzipped file,
even though it has to be expanded in memory.
But what I don't know is:
1) Does every call to $Dictionary->contents()
go through the inflation process again?
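I can't say for certain from the Archive::Zip docs whether contents() caches its result, so the safe pattern is to inflate once and keep the string around, then loop over the string. A minimal self-contained sketch, assuming Archive::Zip is installed (the dict.txt member name and its data are made up just so the example runs; in your case you'd open "$Folder/$ZippedFile" directly):

```perl
use strict;
use warnings;
use Archive::Zip qw( :ERROR_CODES );
use File::Temp   qw( tempfile );

# Build a tiny zip on disk just so the example is self-contained.
my ( $tmp_fh, $path ) = tempfile( SUFFIX => '.zip' );
my $writer = Archive::Zip->new();
$writer->addString( "one\ntwo\nthree\n", 'dict.txt' );
$writer->writeToFileNamed($path) == AZ_OK or die "zip write failed";

# Read it back the same way the post does.
my $Zip = Archive::Zip->new($path) or die "can't read $path";
my ($Dictionary) = $Zip->members();

# Inflate exactly once and keep the result; every later loop
# iterates over $String instead of calling contents() again.
my $String = $Dictionary->contents();
```

With this in place it no longer matters whether contents() re-inflates, because it's only ever called once.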
And 2), a very general Perl question:
since I will be running the loop many, many times,
can I just leave it like this?
foreach my $line (split "\n", $String)
or should I do this?
my @String = split "\n", $String;
foreach my $line (@String)
The reason I'd want to leave it the first way
is that $String is very large, and @String
would be even larger.
I would undef $String afterwards, but still, for a moment
the second way might be using twice
as much memory as is really necessary.
But I don't know, because IF the first way
does essentially the same thing that the second
way does, only with a temporary @String,
and IF it does that EVERY single time it's called,
then that would obviously be the wrong way
to go!
I hope that's clearer
~greg