Questions about Perl daemons with child processes and open files / signals

none

Hi, I have some questions about Perl code running as a daemon that launches a child process. I need to run a Perl script as a daemon. The script will monitor tcpdump's output. At first my script used a temp file to store the tcpdump output, but I decided to use a pipe instead (and I did not want to run my script piped on the command line, as in tcpdump | ./foo; I wanted to open the pipe to tcpdump inside the script).

The best way I found to do this is by creating a child process for tcpdump, because running:

open LOGTMPFILE, "tcpdump -i ne1 2> /dev/null |";

does not return control to the script until tcpdump terminates. I'm doing it in starttcpdump() and it works very well, but I'm not sure that it's the best way to do this.
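
For reference, here is the piped-open construct in isolation; perlipc documents that it forks and returns the new process's pid (a sketch only; the -l flag for line-buffered output is an addition, not part of my script):

#!/usr/bin/perl
use strict;
use warnings;

# A piped open forks, runs the command in the new process, and returns
# that process's pid; the parent reads the command's stdout via $fh.
my $pid = open(my $fh, "tcpdump -l -i ne1 2> /dev/null |")
    or die "Cannot start tcpdump: $!";

while (my $line = <$fh>) {
    print "got: $line";    # handle one line of tcpdump output
}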

Another thing I wanted is to be able to stop the running daemon by calling my script like this:

~$ foo stop

Here I have a problem. What I planned to do was catch signals in the script to do some cleanup before terminating the script:
- kill the child process
- close the open tcpdump pipe (LOGTMPFILE)

If I don't do this, tcpdump continues to run when the script ends. The problem I have is that only the child process can close the LOGTMPFILE handle. So I thought I should kill the child process first (which closes the tcpdump pipe) and then ask the child process to kill the parent process. I also tried killing the parent process, thinking it would automatically close the child and the tcpdump pipe, but it does not work. Everything I tried seems to hang at close LOGTMPFILE and never gives control back to the script.

The best I was able to get is both instances of perl (parent and child) killed, but tcpdump still running. By the way, it seems to work well if I press CTRL-C will running the script normally (not as a daemon): the tcpdump process is killed.

Here are my questions:

- Am I launching the tcpdump pipe the right way? It's the best way I found to do this, but it takes more memory (another instance of the process). Is there a better way? I do not want to use temp files.

- Is there anything wrong with my daemon-mode subroutine?

- How can I modify my code to be able to kill everything: parent, child, and tcpdump processes?

- Which signal should I use to stop the script? HUP, INT, or QUIT?

Any suggestions and tips would be greatly appreciated.

Thanks in advance.

Here is a very simplified version of my code. I didn't test it and it may not run correctly, but it demonstrates what I want to achieve. For simplicity I have removed a lot of things, like all the "or die ..." checks and other irrelevant code:

#!/usr/bin/perl

use FileHandle;
use POSIX;

if ($ARGV[0] eq "stop")
{
    print "Terminating process ...\n";
    sendterminatesig();
    exit 0;
}

createdaemon();
starttcpdump();

$SIG{INT} = \&signalhandler;

while (1)
{
    # do something
}

exit 0;

sub createdaemon ()
{
    # chdir to / so the daemon doesn't keep any mounted partition busy
    chdir '/';

    umask 0;

    # fork() a child process and have the parent process exit()
    my $pid = fork;
    exit if $pid;
    die "\nError: Couldn't fork: $!\n" unless defined $pid;

    open (PIDFILE, ">/var/run/foo.pid");
    printf PIDFILE "%d\n", POSIX::getpid();
    close (PIDFILE);

    # start a new session so the daemon has no controlling terminal
    POSIX::setsid();

    open STDIN, '/dev/null';
    open STDOUT, '>/dev/null';
    open STDERR, '>/dev/null';
}

sub starttcpdump ()
{
    if (!defined(my $kidpid = fork()))
    {
        # fork returned undef, so failed
        die "\nError: Cannot fork: $!\n";
    }
    elsif ($kidpid == 0)
    {
        # fork returned 0, so this branch is the child
        open LOGTMPFILE, "tcpdump -i ne1 2> /dev/null |";

        open (PIDFILE, ">/var/run/foo.child.pid");
        printf PIDFILE "%d\n", POSIX::getpid();
        close (PIDFILE);
    }
    else
    {
        # so this branch is the parent
        waitpid($kidpid, 0);
    }
}

sub sendterminatesig ()
{
    if (-e "/var/run/foo.pid")
    {
        # Open and read the pid files
        open (PIDFILE, "/var/run/foo.pid");
        my $pid = <PIDFILE>;
        close (PIDFILE);

        open (PIDFILE, "/var/run/foo.child.pid");
        my $childpid = <PIDFILE>;
        close (PIDFILE);

        # check on the process before signalling it
        if (kill 0 => $pid)
        {
            # Here I tried: signalling the parent process
            # first, so it closes the tcpdump pipe, but
            # it doesn't work. I also tried signalling the
            # child process and then the parent process, but
            # the tcpdump process is not killed. Then I
            # tried signalling the child process to close the
            # pipe (in signalhandler) and asking the child
            # process to kill the parent (still in
            # signalhandler). None of them works.

            kill INT => $childpid;

            exit 0;
        }
        elsif ($! == EPERM)
        {
            # changed uid
            print "\nError: foo (pid:$pid) has escaped my control!\n";
            exit 1;
        }
        elsif ($! == ESRCH)
        {
            # process not found or zombied
            print "\nError: foo (pid:$pid) is deceased.\n";
            exit 1;
        }
        else
        {
            print "\nError: couldn't check on the status: $!\n";
            exit 1;
        }
    }
    else # pid file not found, quit
    {
        print "\nError: pid file not found!\n";
        exit 1;
    }
}

sub signalhandler
{
    my $signame = shift;

    # Child process
    if (-e "/var/run/foo.child.pid")
    {
        open (PIDFILE, "/var/run/foo.pid");
        my $pid = <PIDFILE>;
        close (PIDFILE);

        close (LOGTMPFILE); # Close the tcpdump pipe

        kill INT => $pid; # Kill the parent process

        unlink("/var/run/foo.child.pid");
    }
    else # Parent process
    {
        unlink("/var/run/foo.pid");
    }

    exit 0;
}
 
none

Just a few corrections to my last message:

"...it seems to work well if I press CTRL-C will
running the script normally ..."

Of course I meant "while", not "will".

And I just noticed that I don't need:
use FileHandle;

since I don't use it to manipulate the files.
 
none

I just found out that Perl has thread support. I didn't know that and thought that the only way to run two simultaneous tasks was by using forks (unlike C/C++). I read that using forks is preferable since threads in Perl are a bit flaky (is that true?). But for something as simple as running the tcpdump pipe, I think threads could be a better solution than fork, to save memory. Am I wrong?
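
A rough sketch of what I have in mind with threads (assuming a threads-enabled perl; the details are illustrative, not tested):

use threads;

# One thread reads the tcpdump pipe while the main thread does other work.
my $reader = threads->create(sub {
    open(my $fh, "tcpdump -i ne1 2> /dev/null |") or die "open: $!";
    while (my $line = <$fh>) {
        # process one line of tcpdump output
    }
});

# ... main work here ...

$reader->join;    # wait for the reader thread before exiting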

I think that terminating the script and closing the tcpdump pipe would also be simpler using threads. Anyway, any tips and comments on threads would be useful. I would still like to know why my fork method didn't work, in case I need it some other time or the thread method does not solve my problem. My other questions are also still important (is my daemon-switching code correct, and which signal should I use to terminate the script? I'm thinking of HUP right now).

Thanks
 
xhoster

none said:
Hi, I have some questions about Perl code running as a daemon that launches a child process. I need to run a Perl script as a daemon. The script will monitor tcpdump's output. At first my script used a temp file to store the tcpdump output, but I decided to use a pipe instead (and I did not want to run my script piped on the command line, as in tcpdump | ./foo; I wanted to open the pipe to tcpdump inside the script).

The best way I found to do this is by creating a child process for tcpdump, because running:

open LOGTMPFILE, "tcpdump -i ne1 2> /dev/null |";

does not return control to the script until tcpdump terminates. I'm doing it in starttcpdump() and it works very well, but I'm not sure that it's the best way to do this.

You say that that open doesn't return control to the script (which I cannot replicate on my machine), yet that open is exactly what you use in starttcpdump(). Are you saying it does return control when used in the child, but not when used in the parent?

Another thing I wanted is to be able to stop the running daemon by calling my script like this:

~$ foo stop

Here I have a problem. What I planned to do was catch signals in the script to do some cleanup before terminating the script:
- kill the child process
- close the open tcpdump pipe (LOGTMPFILE)

Since the pipe is open in the child process, killing the child
will automatically close the pipe.

If I don't do this, tcpdump continues to run when the script ends.

tcpdump should exit when it realizes that the child is no longer listening to it. On my machine, it takes a few seconds for it to realize this (I think what happens is that it doesn't realize no one is listening until it fills up one pipe buffer). One way to speed up this death is to record the pid of the tcpdump process (returned by the "open" command) and explicitly kill it.

$ perl -le 'my $pid = open my $fh, "/usr/sbin/tcpdump |" or die $!;
foreach (1..10) { print scalar <$fh> }
kill 13, $pid;
close $fh or die "Died with $! ", $!+0, " $?"'

In your case, of course, you would have to record the pid so that it can be killed from the signal handler.
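
Something along these lines (a sketch only; the variable name is made up, the rest follows your posted code):

my $tcpdump_pid = open(LOGTMPFILE, "tcpdump -i ne1 2> /dev/null |")
    or die "Cannot start tcpdump: $!";

sub signalhandler {
    kill 'TERM', $tcpdump_pid if $tcpdump_pid;  # stop tcpdump first...
    close LOGTMPFILE;             # ...so this close() no longer blocks
    unlink "/var/run/foo.pid";
    exit 0;
}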

The problem I have is that only the child process can close the LOGTMPFILE handle. So I thought I should kill the child process first (which closes the tcpdump pipe) and then ask the child process to kill the parent process.

Why is there a parent process at all at this point? The parent doesn't
seem to have anything useful to do, so why not exit rather than hanging
around in a waitpid?

I also tried killing the parent process, thinking it would automatically close the child and the tcpdump pipe, but it does not work. Everything I tried seems to hang at close LOGTMPFILE and never gives control back to the script.

So it hangs upon the open if you don't fork and upon the close if you
do fork? What OS are you using?

The best I was able to get is both instances of perl (parent and child) killed, but tcpdump still running. By the way, it seems to work well if I press CTRL-C will running the script normally (not as a daemon): the tcpdump process is killed.

Here are my questions:

- Am I launching the tcpdump pipe the right way? It's the best way I found to do this, but it takes more memory (another instance of the process). Is there a better way? I do not want to use temp files.

I don't see what the fork in starttcpdump gets you at all. If you are
going to do it, at least have the parent exit rather than hanging around.
Then you don't need to worry about killing it.

- Is there anything wrong with my daemon-mode subroutine?

I don't know, but I do know that, for debugging purposes, you should
probably not reopen STDERR to /dev/null.
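
For example, while debugging you could append STDERR to a log file instead (the path here is purely illustrative):

# keep diagnostics somewhere inspectable instead of discarding them
open STDERR, '>>', '/var/log/foo.err' or die "Cannot open log file: $!";
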
- How can I modify my code to be able to kill everything: parent, child, and tcpdump processes?

Record the pid of tcpdump, and explicitly kill it.

- Which signal should I use to stop the script? HUP, INT, or QUIT?

Any suggestions and tips would be greatly appreciated. ....
sub signalhandler
{
    my $signame = shift;

    # Child process
    if (-e "/var/run/foo.child.pid")

That file can exist whether you are in the child or not.

    {
        open (PIDFILE, "/var/run/foo.pid");
        my $pid = <PIDFILE>;
        close (PIDFILE);

        close (LOGTMPFILE); # Close the tcpdump pipe

        kill INT => $pid; # Kill the parent process

        unlink("/var/run/foo.child.pid");

You have a race condition here. The parent gets killed, which activates this very same subroutine, but in the parent process; then you unlink the file whose existence is supposed to signal that we are in the child. So if the killed parent gets to the file test before the child gets to the unlink, the parent will think it is the child. I don't really know what that will cause to happen.
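
That is, the unlink should come before the kill; a sketch of the reordering:

unlink("/var/run/foo.child.pid");   # remove the child marker first...
kill INT => $pid;                   # ...then kill the parent process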

Xho
 
none

Thanks Xho,
You say that that open doesn't return control to the script (which I cannot replicate on my machine), yet that open is exactly what you use in starttcpdump(). Are you saying it does return control when used in the child, but not when used in the parent?

Sorry, it wasn't clear. I didn't mean in this code, but when I don't use fork to process the pipe in a child process. For example, if I do this in the parent process:

open PIPE, "pipeapp |";
# never returns here until pipeapp terminates
....
Since the pipe is open in the child process, killing the child
will automatically close the pipe.

I tried it and it doesn't work; tcpdump is never killed. That is, if I have the correct child pid. Is my code correct in starttcpdump()?

open (PIDFILE, ">/var/run/foo.child.pid");
printf PIDFILE "%d\n", POSIX::getpid();
close (PIDFILE);

I think that PIDFILE should contain the pid of the child. Am I right?

Why is there a parent process at all at this point? The parent doesn't
seem to have anything useful to do, so why not exit rather than hanging
around in a waitpid?

At this point the parent is the main script, no? So it should continue to execute and go to the endless loop to process the pipe data. I think I tried it without the waitpid and it didn't work. Or am I completely lost and just creating another useless parent?

while (1)
{
    # do something
}

So it hangs upon the open if you don't fork and upon the close if you
do fork? What OS are you using?

OpenBSD 3.8
That file can exist whether you are in the child or not.

Yes, but I kill the child first and it deletes the file, so when the child asks to kill the parent the file is no longer there and the process knows it's the parent.

You have a race condition here. The parent gets killed, which activates this very same subroutine, but in the parent process; then you unlink the file whose existence is supposed to signal that we are in the child. So if the killed parent gets to the file test before the child gets to the unlink, the parent will think it is the child. I don't really know what that will cause to happen.

You're right, I should delete the file before killing the parent :) I don't remember if I did it correctly in the original code; I'll take a look at it.

By the way, this morning I did some experimenting with threads and found out that my perl is not compiled with threads enabled. So I think I'll stick with forks, since I do not want to recompile it and the code will be more portable. I still think forks in my case are a useless memory waste for something as simple as running a background pipe.

Thanks again
 
none

... One way to speed up this death is to record
the pid of the tcpdump process (returned by the "open" command) and
explicitly kill it.

OK, I was able to make it work correctly using the tcpdump pid like you said, thanks a lot :) But the pid returned is not exactly tcpdump's pid; it's the pid of the sh process running tcpdump. Anyway, killing sh also kills tcpdump, so it works fine.
Why is there a parent process at all at this point? The parent doesn't
seem to have anything useful to do, so why not exit rather than hanging
around in a waitpid?

I have tested removing the waitpid, and it doesn't work; I get a lot of filehandle error messages.

You have a race condition here. The parent gets killed, which activates this very same subroutine, but in the parent process; then you unlink the file whose existence is supposed to signal that we are in the child. So if the killed parent gets to the file test before the child gets to the unlink, the parent will think it is the child. I don't really know what that will cause to happen.

My original code was correctly done; the pid file was deleted before killing the process.

Right now I'm using the INT signal to kill the processes; would it be a better idea to use HUP instead?

Thanks for your help!
 
xhoster

none said:
OK, I was able to make it work correctly using the tcpdump pid like you said, thanks a lot :) But the pid returned is not exactly tcpdump's pid; it's the pid of the sh process running tcpdump.

I wondered if that would be the case. I had checked it on my machine before I posted, and it seemed to be the pid of tcpdump itself for me. But I didn't use the "2>/dev/null", and that is probably what is triggering the spawning of the shell for you.
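
If you ever need tcpdump's own pid despite the redirection, perlipc's "safe pipe open" lets you do the redirection yourself so no shell is involved (an untested sketch):

# open(FH, "-|") forks: the parent gets a read handle on the child's
# STDOUT, and the child sees a return value of 0.
my $pid = open(my $fh, '-|');
die "Cannot fork: $!" unless defined $pid;
if ($pid == 0) {
    open STDERR, '>', '/dev/null';   # the 2>/dev/null, done by hand
    exec 'tcpdump', '-i', 'ne1' or die "exec failed: $!";
}
# parent: $pid is tcpdump's pid; read its output from $fh
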
Anyway, killing sh also kills tcpdump, so it works fine.

Thanks, that is good to know.

I have tested removing the waitpid, and it doesn't work; I get a lot of filehandle error messages.

Yes, you should replace the waitpid with exit, not just get rid of it.

....
Right now I'm using the INT signal to kill the processes; would it be a better idea to use HUP instead?

Sorry, I have no opinion/knowledge about that. As long as the sighandler
installed matches the signal sent, I don't see that it makes much
difference.
Thanks for your help!

You're welcome.

Xho
 
xhoster

none said:
Thanks Xho,


Sorry, it wasn't clear. I didn't mean in this code, but when I don't use fork to process the pipe in a child process. For example, if I do this in the parent process:

open PIPE, "pipeapp |";
# never returns here until pipeapp terminates

Are you sure? Are you perhaps confusing the hanging upon close (which
we just solved) with this supposed hanging on open?

warn time(), " I'm about to open pipe, do I return control?";
open my $fh, "pipeapp |" or die $!;
warn time(), " yes, I really did just open the pipe and return control!";
# Or print on an unbuffered handle, but I like warn

At this point the parent is the main script, no?

No. The parent sits in the waitpid, waiting for the child to exit. But
before the child exits, the last thing it does is kill the parent. So the
parent should never get past the waitpid statement. In which case, turn
the waitpid into an exit. (Or don't do this fork at all, I'm still not
convinced that it is necessary.)
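
With that change, starttcpdump would reduce to something like this (an untested sketch of the suggestion):

sub starttcpdump
{
    my $kidpid = fork;
    die "\nError: Cannot fork: $!\n" unless defined $kidpid;
    exit 0 if $kidpid;    # parent leaves immediately; nothing to kill later

    # child: open the pipe, then fall through into the main loop
    open LOGTMPFILE, "tcpdump -i ne1 2> /dev/null |"
        or die "Cannot start tcpdump: $!";
}
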
So it should continue to execute and go to the endless loop to process the pipe data.

The parent doesn't have access to the pipe data. Only the child does. This is why, when you got rid of the waitpid, you got a bunch of file-handle warnings. The parent and child both returned from starttcpdump and fell into the infinite loop. When the parent did so, it tried to read from a handle that it never opened. The child, of course, had no such problem.

I think I tried it without the waitpid and it didn't work. Or am I completely lost and just creating another useless parent?

while (1)
{
    # do something
}

I think you are confused about who executes this loop. As your code is
written, your child falls into this loop. The parent doesn't, because the
parent gets stuck at waitpid until it is killed. When you remove the
waitpid, then both the parent and the child fall into this loop.

OpenBSD 3.8

Sorry, I don't have access to that, so I can't test the OS-specific
aspects.
By the way, this morning I did some experimenting with threads and found out that my perl is not compiled with threads enabled. So I think I'll stick with forks, since I do not want to recompile it and the code will be more portable. I still think forks in my case are a useless memory waste for something as simple as running a background pipe.

I don't think you really need the forks (other than the one in
createdaemon).

If I had to guess, I'd say that on your OS, tcpdump takes a long time to realize that nothing is on the other end of the pipe after the reading perl script exits (or maybe it never realizes it, but that is somewhat hard to believe; an OS with that bug would probably become unusable in short order, with all these moribund processes piling up behind SIGPIPEs). You introduced the fork to solve that problem, but it didn't really solve it. Now you have solved that problem, but you misremembered/misinterpreted why you introduced the fork in the first place.

I would not worry about the computer memory wasted due to unnecessary forking; it is probably negligible. I would worry about the human confusion caused by unnecessary forking!

Xho
 
Dougie!

Hey None,

Check out http://poe.perl.org

POE, or the Perl Object Environment, is a framework for doing multitasking in a limited number of processes. Because threads on Linux mimic the clone command under the covers, the context switching and cache coherency all play a part in how threads really work (or don't!).

My POE-based code has been known to run 10X faster than Java + Threads.
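
To give a flavor of it, here is a rough sketch of running tcpdump under POE::Wheel::Run (untested; it assumes the POE distribution from CPAN and uses the parameter names from its documentation):

use POE qw(Wheel::Run);

POE::Session->create(
    inline_states => {
        _start => sub {
            # run tcpdump as a managed child process; each line it
            # prints on stdout fires the got_line event
            $_[HEAP]{tcpdump} = POE::Wheel::Run->new(
                Program     => [ 'tcpdump', '-l', '-i', 'ne1' ],
                StdoutEvent => 'got_line',
            );
            $_[KERNEL]->sig_child($_[HEAP]{tcpdump}->PID, 'reaped');
        },
        got_line => sub { print "tcpdump: $_[ARG0]\n" },
        reaped   => sub { },    # the child exited; nothing more to do
    },
);

POE::Kernel->run;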

HTH,

Dougie!!!
 
none

xhoster wrote:
Are you sure? Are you perhaps confusing the hanging upon close (which
we just solved) with this supposed hanging on open?

It's really weird. I just tested removing the fork, and you're right: the program doesn't hang at opening the tcpdump pipe. I don't know why it didn't work correctly the first time I tried it; maybe I made a mistake. Anyway, thanks a lot for your help, it was really helpful.
 
