perl is keeping my memory


peter pilsl

I have a Perl daemon that should run for weeks and only rarely serves requests.
When it does serve a request, it needs a huge amount of memory for a short
time.

My problem is that Perl is not giving this memory back to the OS/other
processes/its own relatives when it has finished its work and is just
waiting for the next request. Not even when the whole system is completely
running out of memory and the machine goes into dead-swapping mode.


A short example (a terminal-based, single-threaded daemon) illustrates my
problem:

-------------------memd_test.pl--------
#!/usr/bin/perl -w

use strict;
use warnings;
$| = 1;

sub Run {
    # grab a large chunk of memory, then let it all go out of scope
    my @x;
    foreach (0 .. 200000) {
        push(@x, [33,44,55,66,77,88,99,22,11,33,44,55,66,77,88,99,22,11,33,44,55,66,77,88,99,22,11]);
    }
}

my $c;
while (<STDIN>) {
    print $c++;
    Run();
}
------------------------


Each CR on stdin runs the short subroutine, which does nothing but
claim a lot of memory and ... unfortunately ... not release it to the
OS afterwards.

If you start this example program a few times and feed each instance 10 CRs,
you end up with:

#top
2461 root  9  0  122m 122m 1228 T  0.0  6.0  0:02.78  0  121m memd_test2.pl
2492 root  9  0  122m 122m 1228 T  0.0  6.0  0:02.77  0  121m memd_test2.pl
2564 root  9  0  122m 122m 1228 T  0.0  6.0  0:00.98  0  121m memd_test2.pl
2570 root  9  0  122m 122m 1228 T  0.0  6.0  0:01.87  0  121m memd_test2.pl


Note that my program is not leaking memory here, because it reuses the
same memory internally on the next request. It simply does not give it back
to the OS.

I have some experience programming daemons and working with mod_perl,
so I have learned a bit about memory handling, and I know it is a tricky topic
and things are not as easy as they seem. I have always found a solution, but
this time it is different: I can't get it under control.

One single request that needs 200m for about 2 seconds will take this
200m away from the system for the rest of the process's life.

I need to implement the process as a daemon, because the startup time of the
process is very expensive, and multiple requests may come in at the same
time.


I know this may not be Perl's fault and has a lot to do with the
OS, but ... what can I do?

I encounter this problem with Perl 5.8.5 on Linux 2.4 and with 5.8.7 on
Linux 2.6.


thnx,
peter

# perl -V
Summary of my perl5 (revision 5 version 8 subversion 5) configuration:
Platform:
osname=linux, osvers=2.4.24, archname=i686-linux
uname='linux goldfisch.at 2.4.24 #9 wed mar 10 22:29:04 cet 2004
i686 unknown '
config_args='-de'
hint=recommended, useposix=true, d_sigaction=define
usethreads=undef use5005threads=undef useithreads=undef
usemultiplicity=undef
useperlio=define d_sfio=undef uselargefiles=define usesocks=undef
use64bitint=undef use64bitall=undef uselongdouble=undef
usemymalloc=n, bincompat5005=undef
Compiler:
cc='cc', ccflags ='-fno-strict-aliasing -pipe -I/usr/local/include
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
optimize='-O2',
cppflags='-fno-strict-aliasing -pipe -I/usr/local/include'
ccversion='', gccversion='2.96 20000731 (Mandrake Linux 8.1
2.96-0.62mdk)', gccosandvers=''
intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t',
lseeksize=8
alignbytes=4, prototype=define
Linker and Libraries:
ld='cc', ldflags =' -L/usr/local/lib'
libpth=/usr/local/lib /lib /usr/lib
libs=-lnsl -lndbm -lgdbm -ldl -lm -lcrypt -lutil -lc
perllibs=-lnsl -ldl -lm -lcrypt -lutil -lc
libc=/lib/libc-2.2.4.so, so=so, useshrplib=false, libperl=libperl.a
gnulibc_version='2.2.4'
Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E'
cccdlflags='-fpic', lddlflags='-shared -L/usr/local/lib'


Characteristics of this binary (from libperl):
Compile-time options: USE_LARGE_FILES
Built under linux
Compiled at Sep 21 2004 11:47:55
@INC:
/usr/local/lib/perl5/5.8.5/i686-linux
/usr/local/lib/perl5/5.8.5
/usr/local/lib/perl5/site_perl/5.8.5/i686-linux
/usr/local/lib/perl5/site_perl/5.8.5
/usr/local/lib/perl5/site_perl/5.8.0/i686-linux
/usr/local/lib/perl5/site_perl/5.8.0
/usr/local/lib/perl5/site_perl
 

Bart Lateur

peter said:
I have a Perl daemon that should run for weeks and only rarely serves requests.
When it does serve a request, it needs a huge amount of memory for a
short time.

My problem is that Perl is not giving this memory back to the OS/other
processes/its own relatives when it has finished its work and is just
waiting for the next request. Not even when the whole system is completely
running out of memory and the machine goes into dead-swapping mode.

Yes, it's typical that Perl doesn't return memory to the OS until it
exits.

The solution? Run the active, memory-consuming part of the daemon in a
separate process: fork(). When that child finishes, it will return all the
memory it consumed to the OS.
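
Something along these lines, reusing the Run() routine from the example
above (only a sketch; error handling is kept to a minimum):

------------------------
#!/usr/bin/perl
use strict;
use warnings;
$| = 1;

sub Run {
    # same memory-hungry work as in the original example
    my @x;
    foreach (0 .. 200000) {
        push(@x, [33,44,55,66,77,88,99,22,11,33,44,55,66,77,88,99,22,11,33,44,55,66,77,88,99,22,11]);
    }
}

my $c;
while (<STDIN>) {
    print $c++;
    defined(my $pid = fork()) or die "fork failed: $!";
    if ($pid == 0) {
        Run();            # the child claims the memory ...
        exit 0;           # ... and gives it back to the OS when it exits
    }
    waitpid($pid, 0);     # the parent stays small and just waits
}
------------------------

The parent never touches the big data structure itself, so its footprint
stays at whatever it needed at startup.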
 

Bart Van der Donck

Bart said:
peter said:
I have a Perl daemon that should run for weeks and only rarely serves requests.
When it does serve a request, it needs a huge amount of memory for a
short time.

[...]

Yes, it's typical that Perl doesn't return memory to the OS until it
exits.

The solution? Run the active, memory-consuming part of the daemon in a
separate process: fork(). When that child finishes, it will return all the
memory it consumed to the OS.

I also have good experience with inserting

sleep 5;

commands here and there in the code, e.g. after those lines where you
suspect a lot of memory will be needed. 5 is the number of seconds and
can be set as long or short as you like.

Additionally, one could start the program with

nice perl scriptname.pl

to run it in CPU-friendly mode. See the docs of 'nice' for more info.

Hope this helps,
 

xhoster

peter pilsl said:
I have a Perl daemon that should run for weeks and only rarely serves requests.
When it does serve a request, it needs a huge amount of memory for a
short time.

My problem is that Perl is not giving this memory back to the OS/other
processes/its own relatives when it has finished its work and is just
waiting for the next request. Not even when the whole system is completely
running out of memory and the machine goes into dead-swapping mode.

If the process is just holding the memory and not doing anything with
it, then it should get swapped/paged out cleanly and from then on cause
no problems.

A short example (a terminal-based, single-threaded daemon) illustrates my
problem:

-------------------memd_test.pl--------
#!/usr/bin/perl -w

use strict;
use warnings;
$| = 1;

sub Run {
    my @x;
    foreach (0 .. 200000) {
        push(@x, [33,44,55,66,77,88,99,22,11,33,44,55,66,77,88,99,22,11,33,44,55,66,77,88,99,22,11]);
    }
}

my $c;
while (<STDIN>) {
    print $c++;
    Run();
}
------------------------

Each CR on stdin runs the short subroutine, which does nothing but
claim a lot of memory and ... unfortunately ... not release it to the
OS afterwards.

If you start this example program a few times and feed each instance 10 CRs,
you end up with:

#top
2461 root  9  0  122m 122m 1228 T  0.0  6.0  0:02.78  0  121m memd_test2.pl
2492 root  9  0  122m 122m 1228 T  0.0  6.0  0:02.77  0  121m memd_test2.pl
2564 root  9  0  122m 122m 1228 T  0.0  6.0  0:00.98  0  121m memd_test2.pl
2570 root  9  0  122m 122m 1228 T  0.0  6.0  0:01.87  0  121m memd_test2.pl

Why would you do that in the first place if memory is an issue? Either
make each process serve one request and then exit, or don't start 10
processes: just start one and have it handle all ten requests.

[...]
One single request that needs 200m for about 2 seconds will take this
200m away from the system for the rest of the process's life.

Make the rest of its life be very short.
I need to implement the process as a daemon, because the startup time of the
process is very expensive, and multiple requests may come in at the same
time.

Make it a forking server which does the time-consuming startup once, then
forks a child for each request. The children can do the memory-consuming
parts, then exit, freeing up their memory.
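
A rough sketch of that structure, assuming requests arrive one per line on
STDIN (expensive_startup() and serve_request() are just stand-ins for the
real work):

------------------------
#!/usr/bin/perl
use strict;
use warnings;

$SIG{CHLD} = 'IGNORE';      # let the kernel reap exited children

sub expensive_startup {
    # stand-in for the slow initialisation that should run only once
}

sub serve_request {
    my ($request) = @_;
    # stand-in for the memory-hungry per-request work
}

expensive_startup();        # paid once, inherited by every child

while (my $request = <STDIN>) {
    defined(my $pid = fork()) or die "fork failed: $!";
    if ($pid == 0) {                 # child
        serve_request($request);     # uses lots of memory here
        exit 0;                      # memory goes back to the OS on exit
    }
    # parent loops straight on to the next request
}
------------------------

The parent keeps only what it built during startup, so it stays small;
each child's memory disappears as soon as the child exits.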

Xho
 

peter pilsl

...

Make the rest of its life be very short.

Make it a forking server which does the time-consuming startup once, then
forks a child for each request. The children can do the memory-consuming
parts, then exit, freeing up their memory.


thnx. That's what I did. I was a bit possessed by the idea of a preforking
server and somehow lost focus.

I now have a forking server that's running very nicely and consuming very
little memory ;)

thnx
peter
 
