Chris Hamel
This may not be a Perl issue per se, but our Unix support people don't
have any insight and I was hoping to get some direction.
We have Perl (5.8.0) installed on an AIX server. The server itself is
capable of handling 32-bit processes as large as about 2 GB, and we've
never found a ceiling for the 64-bit processes (only 32 GB on the
server, and we've run programs as large as 22 GB). When I run programs
in Perl, however, they core dump once the process hits about 250 MB.
In many cases, we can get around this by using the BerkeleyDB module,
but this results in a significant performance hit and requires some
data structure reengineering.
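For context, the BerkeleyDB workaround amounts to tying the big hashes to an on-disk database, roughly like this sketch (the file name and options here are placeholders, not our actual code):

use BerkeleyDB;

# Tie the hash to a Berkeley DB file on disk instead of holding it all in memory.
tie my %part_info, 'BerkeleyDB::Hash',
    -Filename => 'part_info.db',   # placeholder file name
    -Flags    => DB_CREATE
    or die "Cannot tie part_info.db: $BerkeleyDB::Error\n";

After that, %part_info behaves like a normal hash, but every store and fetch goes through the database, which is where the performance hit comes from.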
Unfortunately, I am fairly Unix ignorant. I did not install Perl on
Unix, nor would I know how to. I just use what the Unix admins
installed for us. What I'm trying to find out is if there is a runtime
option or an option on installation (compilation?) that enables Perl to
have a higher threshold than 250 MB... or is this a limitation built
into Perl?
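In case it helps, I can at least run something like this on our install and report the results; I'm only guessing that these are the relevant Config entries:

use Config;
# Print the build-time settings that (I assume) relate to 64-bit support and malloc.
print "use64bitint: $Config{use64bitint}\n";
print "use64bitall: $Config{use64bitall}\n";
print "usemymalloc: $Config{usemymalloc}\n";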
Any information or feedback I can pass on to our Unix admins would be
most appreciated.
Also, for what it's worth, our programs really are that large. We've
done a number of things to try to reduce the footprint of the programs
(other than what's in perldoc -q memory). One example I learned is
that doing this:
$part_info{$part} = [ $nomenclature, $cost, $min_qty, $max_qty ];
takes up more memory than this:
$part_info{$part} = join '|', $nomenclature, $cost, $min_qty, $max_qty;
(not to mention that the array-reference version doesn't work with Berkeley). But
the bottom line is that the data we bring together is huge.
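If anyone wants to see the kind of difference I mean, it shows up with the CPAN module Devel::Size; this is only a sketch with made-up values, not our real data:

use Devel::Size qw(total_size);

my %as_ref    = ( p1 => [ 'widget', 9.99, 1, 100 ] );
my %as_string = ( p1 => join( '|', 'widget', 9.99, 1, 100 ) );

# total_size() follows references, so it includes the anonymous array.
printf "array ref: %d bytes\n", total_size( \%as_ref );
printf "joined:    %d bytes\n", total_size( \%as_string );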
Thanks in advance for any insight or direction, and I apologize in
advance if this has more to do with the OS than with Perl...
Chris H.