Close a Running Sub-Process

mumebuhi

I have a problem closing a filehandle, which is a pipe to a forked
process. The forked process basically tails a log file on another
server. I need to stop the child process once a particular line is
found.

The code is as follows:
# start code
my $fh = undef;
my $child_process = "ssh username@host tail --follow=name file_to_be_tailed"
open $fh, $child_process || Carp::confess("can't open $child_process: $!");
while (<$fh>) {
    chomp;
    if (/successful/) {
        last;
    }
}
close $fh;
# end code

The script will block when it tries to close the filehandle. How do I
force it to close while tail is still running?

Thank you very much.


Buhi
 
axel

mumebuhi said:
I have a problem closing a filehandle, which is a pipe to a forked
process. The forked process basically tails a log file on another
server. I need to stop the child process once a particular line is
found.
The code is as follows:

It is not valid Perl code, for a start.

use warnings;
use strict;

Would have shown you that. Actually, it will not compile anyway.
# start code
my $fh = undef;
my $child_process = "ssh username@host tail --follow=name file_to_be_tailed"

No ; at end of statement.
open $fh, $child_process || Carp::confess("can't open $child_process: $!");

The format of $child_process shows that you are trying to open a file
for reading, nothing more... and the || has a higher precedence than the
comma, so as long as $child_process is true the right-hand side will be
ignored.
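To spell that out, the line parses as:

open $fh, ($child_process || Carp::confess("can't open $child_process: $!"));

# $child_process is a non-empty string, so it is always true and the
# confess() can never run. The usual fix is the low-precedence "or":
open $fh, $child_process or Carp::confess("can't open $child_process: $!");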
while (<$fh>) {
    chomp;
    if (/successful/) {
        last;
    }
}
close $fh;
# end code
The script will block when it tries to close the filehandle. How do I
force it to close while tail is still running?

Not sure what is going on... but I suggest you clean up your code
properly first.

Axel
 
mumebuhi

# start
use strict;
use warnings;

my $fh = undef;
# there is a '|' at the end
my $child_process = "ssh username@host tail --follow=name file_to_be_tailed |";
open $fh, $child_process || Carp::confess("can't open $child_process: $!");
while (<$fh>) {
    chomp;
    if (/successful/) {
        last;
    }
}
close $fh;
# end
 
xhoster

mumebuhi said:
I have a problem closing a filehandle, which is a pipe to a forked
process. The forked process basically tails a log file on another
server. I need to stop the child process once a particular line is
found.

The code is as follows:

This is not the code. Please post real code.
# start code
my $fh = undef;

No need to predeclare it.
my $child_process = "ssh username@host tail --follow=name file_to_be_tailed"

@host would be interpolated. You need a pipe character at the end
of your string for the open to do what you want. The lack of a trailing
semicolon creates a syntax error.
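Applying all three fixes, that line might look like:

# \@ stops Perl interpolating an array named @host, and the trailing |
# makes open() run the string as a command and pipe its output to us:
my $child_process = "ssh username\@host tail --follow=name file_to_be_tailed |";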

open $fh, $child_process || Carp::confess("can't open $child_process: $!");

You have a precedence problem with the ||; it should be "or".
while (<$fh>) {
    chomp;
    if (/successful/) {
        last;
    }
}
close $fh;
# end code

The script will block when it tries to close the filehandle. How do I
force it to close while tail is still running?

You capture the pid of the running process (it is the return value of a
pipe open), and then you kill it just prior to the close.

my $pid = open my $fh, $cmd or die $!;
#....
kill 1, $pid;    # 1 is SIGHUP
close $fh;

You can use 2 or 15 instead of 1 to kill it with, but 1 seems to do the job
without generating spurious messages to STDERR on my system. You can't use 13
(SIGPIPE) because if the child honored that signal, you wouldn't have the
problem in the first place.
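Putting the pieces together with the command from the original post, a
minimal sketch (untested; the file name and pattern are taken from above):

use strict;
use warnings;
use Carp;

# Trailing | makes open() run the command and pipe its output to us;
# \@ keeps Perl from interpolating an array into the string.
my $cmd = "ssh username\@host tail --follow=name file_to_be_tailed |";

# A pipe open returns the child's pid.
my $pid = open my $fh, $cmd or Carp::confess("can't open $cmd: $!");

while (<$fh>) {
    last if /successful/;      # stop once the line we want shows up
}

kill 'HUP', $pid;              # same as kill 1; signal by name for clarity
close $fh;                     # no longer blocks, since the child is gone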

Xho
 
xhoster

You capture the pid of the running process (it is the return value of a
pipe open), and then you kill it just prior to the close.

my $pid = open my $fh, $cmd or die $!;
#....
kill 1, $pid;    # 1 is SIGHUP
close $fh;

Unfortunately, this seems to leave idle processes hanging around
on the remote server. They will go away if the file they are tailing
ever grows enough that tail -f fills up the pipe buffer, but if that
never happens then they might never get cleaned up. Maybe the safest thing
to do is write a Perl emulation of tail which runs on the remote server;
then the termination criteria are evaluated at the remote server
rather than the local one. A sketch of that idea follows.
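For instance, a bare-bones sketch of such a remote tailer (untested; run it
on the remote host via ssh, and note it does not handle log rotation the
way --follow=name does):

#!/usr/bin/perl
# remote_tail.pl -- tail -f that exits once a pattern is seen
use strict;
use warnings;

my $file = shift or die "usage: $0 file\n";
open my $fh, '<', $file or die "can't open $file: $!";
seek $fh, 0, 2;                # start at end of file, like tail -f

while (1) {
    while (my $line = <$fh>) {
        print $line;
        exit 0 if $line =~ /successful/;   # terminate on the remote side
    }
    sleep 1;                   # wait for more data
    seek $fh, 0, 1;            # reset the handle's EOF flag
}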

Xho
 
mumebuhi

xhoster said:
You capture the pid of the running process (it is the return value of a
pipe open), and then you kill it just prior to the close.

This is it. This is the perfect solution for the time being. The
particular remote process, fortunately, does not need to be killed
because it is intended that way. I am with you that this is probably
not safe if the remote process needs to be cleaned up properly.

Thank you very much, Xho!


Buhi
 
Charles DeRykus

mumebuhi said:
This is it. This is the perfect solution for the time being. The
particular remote process, fortunately, does not need to be killed
because it is intended that way. I am with you that this is probably
not safe if the remote process needs to be cleaned up properly.

'HUP' works but there's a potentially safer Unix idiom using 'TERM' and
'KILL':

kill 'TERM', $pid or kill 'KILL', $pid
    or warn "couldn't signal $pid";


Alternatively, returning the remote pid followed by an 'exec' enables
signaling the remote process directly (note that $$ must be escaped as
\$\$ so the remote shell, not the local Perl, expands it):


my $child_process = "ssh id\@host 'echo \$\$; exec tail --follow=name file_to_be_tailed' |";
..
chomp( my $remote_pid = <$fh> );
while (<$fh>) {
    ...
    if ( /some_condition/ ) {
        system "ssh... 'kill -s TERM $remote_pid'";
        ..
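A fleshed-out version of that sketch (untested; quoting assumes a
Bourne-style shell on the remote end):

use strict;
use warnings;

# echo $$ prints the remote shell's pid, and exec replaces that shell
# with tail, so the pid stays valid for the remote kill later on.
my $cmd = "ssh id\@host 'echo \$\$; exec tail --follow=name file_to_be_tailed' |";
open my $fh, $cmd or die "can't open $cmd: $!";

chomp( my $remote_pid = <$fh> );   # first line back is the remote pid

while (<$fh>) {
    if (/successful/) {
        # kill the remote tail itself, on the remote host
        system "ssh id\@host 'kill -s TERM $remote_pid'";
        last;
    }
}
close $fh;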


hth,
 
