daemonizing a process AND capturing stdout, stderr


Gyruss

Dear all,

I've written a front end that kicks off various unix processes. I want to
completely daemonize each process so that even if a user were to kill his
front end, the process would continue to run without interruption. This
isn't too difficult to accomplish, there's even a module that will do it all
for you: Proc::Daemon.

The tricky bit is that I want to be able to capture stdout and stderr from
the script in my front end.

How can I daemonize a process but still capture its output?

Cheers!
 

Michele Dondi

Gyruss said:
I've written a front end that kicks off various unix processes. I want to
completely daemonize each process so that even if a user were to kill his
front end, the process would continue to run without interruption. This
isn't too difficult to accomplish, there's even a module that will do it all
for you: Proc::Daemon.

The tricky bit is that I want to be able to capture stdout and stderr from
the script in my front end.

How can I daemonize a process but still capture its output?

You know, I'm not sure what you're asking: is it about daemonizing
the main script that launches the other processes, or about those
processes themselves? However, whatever you do, a child will always
have a parent: in some sense "true orphans" do not exist, because
they will be "inherited" by process 1 (init).

Given that it's not entirely clear to me what you really want to do,
the only thing that I can add is that nothing prohibits you from
closing any filehandle and then re-opening it. In a script of mine
that I happen to have handy I have this snippet:

close STDIN;
close STDOUT;
close STDERR and
    open STDERR, '>', $_ or
        die "$0: Can't redirect STDERR to `$_': $!\n"
    for "$home/.$name-log";

(In this case I only need STDERR for error and informative messages
logging.)


HTH,
Michele
 

Gyruss

Michele Dondi said:
You know, I'm not sure what you're asking: is it about daemonizing
the main script that launches the other processes, or about those
processes themselves? However, whatever you do, a child will always
have a parent: in some sense "true orphans" do not exist, because
they will be "inherited" by process 1 (init).

Given that it's not entirely clear to me what you really want to do,

The result I want is to have my child process live on if the parent process
is killed. The parent process should just read stderr and stdout from the
child process. I want to 'daemonize' the child process (it's not really the
right term), not the parent process.
 

Brian McCauley

Gyruss said:
I've written a front end that kicks off various unix processes. I want to
completely daemonize each process so that even if a user were to kill his
front end, the process would continue to run without interruption. This
isn't too difficult to accomplish, there's even a module that will do it all
for you: Proc::Daemon.

The tricky bit is that I want to be able to capture stdout and stderr from
the script in my front end.

How can I daemonize a process but still capture its output?

Am I missing something here? You direct the output to a file and use
File::Tail.
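
Brian's suggestion can be sketched in core Perl: the daemonized child
appends to a log file, and the front end "tails" it. File::Tail from
CPAN packages this same polling loop; here the log path is a scratch
file created just for the demo, and the child's writes are simulated
inline.

```perl
#!/usr/bin/perl
# Minimal sketch of the log-and-tail approach (scratch file stands in
# for the daemon's real log; File::Tail wraps this same loop).
use strict;
use warnings;
use File::Temp qw(tempfile);

my (undef, $log) = tempfile(UNLINK => 1);   # stand-in for the daemon's log

# Simulate the child appending its first line of output.
open my $out, '>>', $log or die "append: $!";
print {$out} "line 1\n";
close $out;

# Front end: read everything written so far.
open my $in, '<', $log or die "read: $!";
my @seen = <$in>;

# More output arrives while the front end is still watching...
open $out, '>>', $log or die "append: $!";
print {$out} "line 2\n";
close $out;

# seek() with offset 0, whence 1 clears the handle's EOF flag, so the
# next read picks up the freshly appended data -- the core of any tail.
seek $in, 0, 1;
push @seen, <$in>;
close $in;

print "saw ", scalar @seen, " lines\n";
```

The child survives the front end's death because the log file does;
a restarted front end can simply re-open the file and catch up.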
 

Anno Siegel

Gyruss said:
[...]
Given that it's not entirely clear to me what you really want to do,

The result I want is to have my child process live on if the parent process
is killed. The parent process should just read stderr and stdout from the
child process. I want to 'daemonize' the child process (it's not really the
right term), not the parent process.

The term "daemonize" can mean all sorts of things to different people,
including separation from a controlling terminal, becoming a process
group leader and probably more.

Normally, a process *is* immune to its parent's death. Since the kid
writes to handles the parent will close on termination, it will receive
a SIGPIPE when that happens. Is that your problem? If so, just ignore
(or handle) the PIPE signal in the kid. Look for "%SIG" in perlvar for
details.
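
Anno's fix boils down to one assignment in the kid. A minimal sketch,
where the parent's death is simulated by closing the read end of a
pipe: with SIGPIPE ignored, the write fails with EPIPE instead of
killing the process.

```perl
#!/usr/bin/perl
# Sketch: ignore SIGPIPE so a write to a dead pipe returns an error
# (EPIPE) rather than terminating the writer.
use strict;
use warnings;
use IO::Handle;
use POSIX qw(EPIPE);

$SIG{PIPE} = 'IGNORE';                 # the kid shrugs off SIGPIPE

pipe my $reader, my $writer or die "pipe: $!";
close $reader;                         # "parent" closes its end
$writer->autoflush(1);                 # make print hit write(2) now

my $ok  = print {$writer} "anyone listening?\n";
my $err = 0 + $!;                      # numeric errno from the failure

printf "print returned %s, errno %d\n", $ok ? "true" : "false", $err;
```

Without the `$SIG{PIPE} = 'IGNORE'` line, the process would die
silently at the `print` instead of getting a chance to handle the error.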

Anno
 

Gyruss

Brian McCauley said:
Am I missing something here? You direct the output to a file and use
File::Tail.
To be honest I think that's a little crude.

There's some code in the 1997 perl journal that I think should do the trick,
the guts of it is below.

http://www.foo.be/docs/tpj/issues/vol2_1/tpj0201-0008.html

$READ_BITS = '';              # "bitlist" of parent filehandles
my ($fh) = ('fh0000');        # indirect filehandle names
my ($cr, $cw);                # child read and write filehandles
foreach (@{$OPT{hosts}}) {
    $cr = $fh++;
    $CHILD{$ARG}->{pw} = $fh++;
    pipe($cr, $CHILD{$ARG}->{pw}) or abort 'cr/pw pipe';
    $CHILD{$ARG}->{pr} = $fh++;
    $cw = $fh++;
    pipe($CHILD{$ARG}->{pr}, $cw) or abort 'pr/cw pipe';
    if ($CHILD{$ARG}->{pid} = fork) {               # parent
        close $cr;
        close $cw;
        $CHILD{$ARG}->{pw}->autoflush(1);
        vec($READ_BITS, fileno($CHILD{$ARG}->{pr}), 1) = 1;
    } elsif (defined($CHILD{$ARG}->{pid})) {        # child
        close $CHILD{$ARG}->{pr};
        close $CHILD{$ARG}->{pw};
        open(STDIN,  "<&$cr") or abort 'STDIN open';
        open(STDOUT, ">&$cw") or abort 'STDOUT open';
        open(STDERR, ">&$cw") or abort 'STDERR open';
        STDOUT->autoflush(1);
        STDERR->autoflush(1);
        exec("$LIBDIR/monds_client", $ARG, $PORT)
            or abort 'exec';
    } else {
        abort 'fork';
    }   # if fork
}       # for each monitored machine
 

Michele Dondi

The result I want is to have my child process live on if the parent process
is killed. The parent process should just read stderr and stdout from the
child process. I want to 'daemonize' the child process (it's not really the
right term), not the parent process.

Still I do not understand precisely what this has to do with
daemonizing. Children should survive their parent in any case.

Well, if you capture the children's STD{ERR,OUT} and want the children
to survive the main process, thus implying that it may die, thus losing
those STD{ERR,OUT}s, then I'd say that your logic is weak. Why don't
you simply redirect them to regular files or named pipes and have the
parent read from those, or adopt any other form of IPC suitable for
your needs, as explained in depth in

perldoc perlipc

instead?
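
The named-pipe variant can be sketched as follows; the FIFO path is a
scratch file created just for the demo, and the "daemon" is a forked
child that sends its STDOUT into the FIFO. Unlike an inherited pipe,
the FIFO persists on disk, so a new front end could re-open it even
after the original one died.

```perl
#!/usr/bin/perl
# Sketch: child writes STDOUT into a named pipe (FIFO); the front end
# reads from it with no inherited filehandles between the two.
use strict;
use warnings;
use POSIX qw(mkfifo);
use File::Temp qw(tempdir);

my $dir  = tempdir(CLEANUP => 1);
my $fifo = "$dir/daemon.out";          # demo FIFO path
mkfifo($fifo, 0700) or die "mkfifo: $!";

my $pid = fork // die "fork: $!";
if ($pid == 0) {                       # the "daemon" child
    open STDOUT, '>', $fifo or die "child open: $!";
    print "hello from the daemon\n";
    exit 0;                            # flushes STDOUT on the way out
}

# Front end: open() on a FIFO blocks until the writer shows up.
open my $rd, '<', $fifo or die "parent open: $!";
my $line = <$rd>;
close $rd;
waitpid $pid, 0;
print "front end got: $line";
```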


HTH,
Michele
 

Brian McCauley

Gyruss said:
To be honest I think that's a little crude.

[ really complex pipe() based solution ]

Does the acronym KISS mean anything to you?
my($cr, $cw); # child read and write filehandles

You should declare all variables as lexically scoped in the smallest
applicable scope unless there's a positive reason for doing otherwise.
What is the point of declaring these variables so early?
$cr = $fh++;
$CHILD{$ARG}->{pw} = $fh++;
pipe($cr, $CHILD{$ARG}->{pw}) or abort 'cr/pw pipe';
$CHILD{$ARG}->{pr} = $fh++;
$cw = $fh++;
pipe($CHILD{$ARG}->{pr}, $cw) or abort 'pr/cw pipe';

In recent perls all this complexity and the symbolic references are not
needed: pipe() will autovivify pseudo-anonymous filehandles for you.

pipe(my $cr, $CHILD{$ARG}{pw}) or abort 'cr/pw pipe';
pipe($CHILD{$ARG}{pr}, my $cw) or abort 'pr/cw pipe';
open(STDIN, "<&$cr") or abort 'STDIN open';

open(STDIN, '<&', $cr ) or abort 'STDIN open';
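
Brian's fragments can be assembled into a complete minimal example:
pipe() autovivifies lexical handles, the child dups its write end onto
STDOUT and STDERR with the safer three-argument open, and the parent
reads both streams from the single read end (echo stands in for the
real worker command).

```perl
#!/usr/bin/perl
# Sketch: modern pipe/fork/exec with autovivified handles and
# three-argument dup opens.
use strict;
use warnings;

pipe(my $pr, my $cw) or die "pipe: $!";

my $pid = fork // die "fork: $!";
if ($pid == 0) {                       # child
    close $pr;
    open STDOUT, '>&', $cw or die "STDOUT dup: $!";
    open STDERR, '>&', $cw or die "STDERR dup: $!";
    exec 'echo', 'captured' or die "exec: $!";
}

close $cw;                             # parent keeps only the read end
my @output = <$pr>;                    # reads child's stdout AND stderr
close $pr;
waitpid $pid, 0;
print @output;
```

Note that lexical handles above fd 2 are close-on-exec by default, so
after the dups only the child's STDOUT/STDERR survive the exec().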
 
