Rename Perl Process

  • Thread starter Wolfgang Hennerbichler

Wolfgang Hennerbichler

Hi perl-people.

I have a process that forks itself, closing the STD[IN|OUT|ERR]
filehandles and letting the parent exit.
It works roughly like this:

close STDIN or die "Can't close STDIN: $!";
close STDERR or die "Can't close STDERR: $!";
close STDOUT or die "Can't close STDOUT: $!";
my $pid = fork();
if ($pid) {
    exit;
}
&error("can't fork: $!") if not defined $pid;
setsid() or die "Can't start a new Session: $!";

Well, that's cool. Unfortunately this process has a lot of forking to
do, and therefore it would be great (especially for monitoring
reasons) if I could rename the daemons (so that they appear with
different names when I do a 'ps auxc' for example) that fork off. The
OS is Linux 2.6, and I can't find much information about that on the
web. Could you please advise?

Thanks in advance,
wogri

Jürgen Exner

Wolfgang said:
do, and therefore it would be great (especially for monitoring
reasons) if I could rename the daemons (so that they appear with
different names when I do a 'ps auxc' for example) that fork off.

'perldoc perlvar' and look out for $0

jue

Randal L. Schwartz

Wolfgang> Well, that's cool. Unfortunately this process has a lot of forking to
Wolfgang> do, and therefore it would be great (especially for monitoring
Wolfgang> reasons) if I could rename the daemons (so that they appear with
Wolfgang> different names when I do a 'ps auxc' for example) that fork off. The
Wolfgang> OS is Linux 2.6, and I can't find much information about that on the
Wolfgang> web. Could you please advise?

"perldoc perlvar"

$0 Contains the name of the program being executed.

On some (read: not all) operating systems assigning to $0
modifies the argument area that the "ps" program sees. On some
platforms you may have to use special "ps" options or a
different "ps" to see the changes. Modifying the $0 is more
useful as a way of indicating the current program state than it
is for hiding the program you're running. (Mnemonic: same as sh
and ksh.)

print "Just another Perl hacker,"; # the original
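A minimal sketch of what that perldoc entry suggests: assign to $0 in the child after fork(). The name 'sflow-worker' is made up for the example, and the rename shows up in ps only on some platforms (Linux among them):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: rename a forked child by assigning to $0. On Linux the child
# then appears as 'sflow-worker' in `ps auxc`; the name and the sleep
# are placeholders for real worker code.
my $pid = fork();
die "Can't fork: $!" unless defined $pid;

if ($pid == 0) {
    $0 = 'sflow-worker';    # new process name, visible in ps on Linux
    sleep 1;                # stand-in for the real work
    exit 0;
}
waitpid $pid, 0;
```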

Jens Thoms Toerring

Wolfgang Hennerbichler said:
I have a process that forks itself by closing the STD[IN|OUT|ERR]
Filehandles, and killing the parent.
This somehow works like this:
close STDIN or die "Can't close STDIN: $!";
close STDERR or die "Can't close STDERR: $!";
close STDOUT or die "Can't close STDOUT: $!";

Do you realize that the error message from die can't make it
out anymore since you already closed STDERR?
my $pid = fork();
if ($pid) {
    exit;
}
&error("can't fork: $!") if not defined $pid;
setsid() or die "Can't start a new Session: $!";

And those error messages also will never appear anywhere since
neither STDOUT nor STDERR are open anymore.

BTW, it looks a bit as if you want to create a daemon, and
the usual way to create one is forking twice (always letting
the parent process exit) and setting a new session in between
the two forks. Moreover, since the setsid() call will make the
process lose its controlling terminal you don't have to explicitly
close STD(IN|OUT|ERR).
Well, that's cool. Unfortunately this process has a lot of forking to
do, and therefore it would be great (especially for monitoring
reasons) if I could rename the daemons (so that they appear with
different names when I do a 'ps auxc' for example) that fork off. The
OS is Linux 2.6, and I can't find much information about that on the
web. Could you please advise?

It can't be done via fork(), which just (nearly) duplicates your
process. But what you can try is to assign a new value to $0 before
you fork(). I don't think it's guaranteed to work on all UNIX systems,
but it seems to work on Linux.
Regards, Jens

xhoster

Wolfgang Hennerbichler said:
I have a process that forks itself by closing the STD[IN|OUT|ERR]
Filehandles, and killing the parent.
This somehow works like this:
close STDIN or die "Can't close STDIN: $!";
close STDERR or die "Can't close STDERR: $!";
close STDOUT or die "Can't close STDOUT: $!";

Do you realize that the error message from die can't make it
out anymore since you already closed STDERR?

Unless the close of STDERR itself failed.
And those error messages also will never appear anywhere since
neither STDOUT nor STDERR are open anymore.

It is easy enough to comment out the close for testing purposes. Why
make it harder by having to comment out the close, then hunt for all the
places that should have had warnings but didn't and add them in?
BTW, it looks a bit as if you want to create a daemon, and
the usual way to create one is forking twice (always letting
the parent process exit) and setting a new session in between
the two forks. Moreover, since the setsid() call will make the
process lose its controlling terminal you don't have to explicitly
close STD(IN|OUT|ERR).

That does not appear to be the case.

perl -wle 'use POSIX; fork and exit; POSIX::setsid(); fork and exit; print "foo"; warn "bar"'

Both print and warning show up fine.


Xho

Jens Thoms Toerring

Wolfgang Hennerbichler said:
I have a process that forks itself by closing the STD[IN|OUT|ERR]
Filehandles, and killing the parent.
This somehow works like this:
close STDIN or die "Can't close STDIN: $!";
close STDERR or die "Can't close STDERR: $!";
close STDOUT or die "Can't close STDOUT: $!";

Do you realize that the error message from die can't make it
out anymore since you already closed STDERR?
Unless the close of STDERR itself failed.

Well, in that case the program should have died;-)
That does not appear to be the case.
perl -wle 'use POSIX; fork and exit; POSIX::setsid(); fork and exit; print
"foo"; warn "bar"'
Both print and warning show up fine.

Sorry, you're right. I mis-remembered that they would go away
together with the controlling terminal and didn't double-check.

Regards, Jens

Wolfgang Hennerbichler

And those error messages also will never appear anywhere since
neither STDOUT nor STDERR are open anymore.

Well, I didn't really show you what my real program looks like. There's
also this section:

if (defined($options{l}) and not defined $options{d}) {
    open(STDERR, ">>$options{l}");
    STDERR->autoflush(1);
}

that recreates the STDERR filehandle and sends it to the destination
of the logfile. So don't worry, my error messages are in a safe
place :)
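For what it's worth, a slightly more defensive variant of that reopen uses the three-arg form of open plus an error check, so an unwritable log path fails loudly instead of silently (the path below is a placeholder for the example):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Handle;    # for autoflush() on the STDERR handle

# Placeholder options hash; in the real program $options{l} comes from
# command-line parsing.
my %options = (l => '/tmp/sflow-test.log');

if (defined $options{l} and not defined $options{d}) {
    # Three-arg open keeps the mode separate from the filename, and
    # "or die" catches an unwritable path before the daemon goes quiet.
    open STDERR, '>>', $options{l}
        or die "Can't reopen STDERR to $options{l}: $!";
    STDERR->autoflush(1);
}
warn "logging test\n";    # now goes to the logfile
```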

BTW, it looks a bit as if you want to create a daemon, and
the usual way to create one is forking twice (always letting
the parent process exit).

Why? You might be right, but I don't know the reason for the
double-fork; it works perfectly with the single fork here.
It's can't be done via fork() which just (nearly) duplicates your
process. But what you can try is to assign a new value to $0 before
you fork(). I don't think it's guaranteed to work on all UNIX systems
but it seems to work on Linux.

It's a Linux-only program. It's actually a daemon that collects sflow
data at an internet exchange, and it's gotten so important (and big) by
now that I need to identify the different daemons that are dealing
with the sflow data.

thanks folks.
wogri
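Putting the pieces together for the multi-worker case described above: a sketch that spawns a few children and renames each one via $0 so they can be told apart in `ps auxc`. The worker names, the count, and the sleep stand-in are all illustrative:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Spawn several workers, each renamed via $0 so that `ps auxc` on Linux
# shows sflow-worker-1, sflow-worker-2, ... instead of one generic name.
my @names = map { "sflow-worker-$_" } 1 .. 3;
my @pids;

for my $name (@names) {
    my $pid = fork();
    die "Can't fork: $!" unless defined $pid;
    if ($pid == 0) {
        $0 = $name;    # distinct name per worker, visible in ps
        sleep 1;       # stand-in for the real sflow handling
        exit 0;
    }
    push @pids, $pid;
}
waitpid $_, 0 for @pids;
```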

Jens Thoms Toerring

Wolfgang Hennerbichler said:
On Aug 17, 4:34 pm, Jens Thoms Toerring wrote:
Well, I didn't really show you how my real program looks like. There's
also this section:
if (defined($options{l}) and not defined $options{d}) {
    open(STDERR, ">>$options{l}");
    STDERR->autoflush(1);
}
that recreates the STDERR filehandle and sends it to the destination
of the logfile. So don't worry, my error messages are in a safe
place :)

I won't;-)
Why? You might be right, but I don't know the reason for the
double-fork; it works perfectly with the single fork here.

It seems to be the case under Linux, but there seem to be some
other (probably mostly legacy) systems where a single fork()
isn't enough to get rid of the controlling terminal (and where
the daemon thus could be killed when the terminal is closed).
To avoid such problems a double fork() has traditionally been done.
But when you're only running this on Linux, where the process
group ID differs from the process ID anyway, you can forget
about it and just fork() once.
Regards, Jens
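The traditional double-fork sequence described above can be sketched as follows. This is an outline under the assumptions stated in the comments, not a drop-in daemonizer (error handling is kept minimal):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(setsid);

# Classic double-fork daemonization sketch: the first fork plus setsid()
# detaches from the controlling terminal; the second fork ensures the
# surviving process is not a session leader, so it can never reacquire
# a terminal on legacy systems.
sub daemonize {
    my $pid = fork();
    die "first fork failed: $!" unless defined $pid;
    exit 0 if $pid;                  # original parent exits

    setsid() or die "setsid failed: $!";

    $pid = fork();
    die "second fork failed: $!" unless defined $pid;
    exit 0 if $pid;                  # session leader exits

    chdir '/' or die "chdir failed: $!";
    umask 0;
    return $$;                       # grandchild carries on as the daemon
}
```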

Wolfgang Hennerbichler

It seems to be the case under Linux, but there seem to be some
other (probably mostly legacy) systems where a single fork()
isn't enough to get rid of the controlling terminal (and where
the daemon thus could be killed when the terminal is closed).
To avoid such problems a double fork() has traditionally been done.
But when you're only running this on Linux, where the process
group ID differs from the process ID anyway, you can forget
about it and just fork() once.

Didn't know about that. Thanks a lot for that information, I'll stick
to it.

wogri