Close a Running Sub-Process

Discussion in 'Perl Misc' started by mumebuhi, Aug 30, 2006.

  1. mumebuhi

    mumebuhi Guest

    I have a problem closing a filehandle, which is a pipe to a forked
    process. The forked process basically tails a log file on another
    server. I need to stop the child process once a particular line is
    found.

    The code is as follows:
    # start code
    my $fh = undef;
    my $child_process = "ssh username@host tail --follow=name file_to_be_tailed"
    open $fh, $child_process || Carp::confess("can't open $child_process: $!");
    while (<$fh>) {
        chomp;
        if (/successful/) {
            last;
        }
    }
    close $fh;
    # end code

    The script will block when it tries to close the filehandle. How do I
    force it to close while tail is still running?

    Thank you very much.


    Buhi
     
    mumebuhi, Aug 30, 2006
    #1
    1. Advertising

  2. Axel

    Guest

    mumebuhi <> wrote:
    > I have a problem closing a filehandle, which is a pipe to a forked
    > process. The forked process basically tails a log file on another
    > server. I need to stop the child process once a particular line is
    > found.


    > The code is as follows:


    It is not valid Perl code for a start.

    use warnings;
    use strict;

    Would have shown you that. Actually, it will not compile anyway.

    > # start code
    > my $fh = undef;
    > my $child_process = "ssh username@host tail --follow=name
    > file_to_be_tailed"


    No ; at end of statement.

    > open $fh, $child_process || Carp::confess("can't open $child_process:
    > $!");


    The format of $child_process shows that you are trying to open a file
    for reading, nothing more... and the || binds more tightly than the
    comma, so as long as $child_process is true the right-hand side will
    never be called and a failed open will go unreported.

    > while (<$fh>) {
    > chomp;
    > if (/successful/) {
    > last;
    > }
    > }
    > close $fh;
    > # end code


    > The script will block when it tries to close the filehandle. How do I
    > force it to close while tail is still running?


    Not sure what is going on... but I suggest you clean up your code
    properly first.
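    For what it's worth, here is a minimal, self-contained illustration of
    the || trap; the echo command is just a stand-in for your ssh line:

    use strict;
    use warnings;

    my $cmd = "echo hello |";

    # With ||, the check binds to $cmd rather than to open(), so a failed
    # open would be silently ignored:
    open my $bad, $cmd || die "never reached: $!";

    # With the low-precedence 'or', the die really does guard the open:
    open my $good, $cmd or die "can't open $cmd: $!";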

    Axel
     
    Axel, Aug 30, 2006
    #2

  3. mumebuhi

    mumebuhi Guest

    # start
    use strict;
    use warnings;

    my $fh = undef;
    # there is a '|' at the end
    my $child_process = "ssh username@host tail --follow=name file_to_be_tailed |";
    open $fh, $child_process || Carp::confess("can't open $child_process: $!");
    while (<$fh>) {
        chomp;
        if (/successful/) {
            last;
        }
    }
    close $fh;
    # end
     
    mumebuhi, Aug 30, 2006
    #3
  4. Xho

    Guest

    "mumebuhi" <> wrote:
    > I have a problem closing a filehandle, which is a pipe to a forked
    > process. The forked process basically tails a log file on another
    > server. I need to stop the child process once a particular line is
    > found.
    >
    > The code is as follows:


    This is not the code. Please post the real code.

    > # start code
    > my $fh = undef;


    No need to predeclare it.

    > my $child_process = "ssh username@host tail --follow=name
    > file_to_be_tailed"


    @host would be interpolated. You need a pipe character at the end
    of your string for the open to do what you want. The lack of a trailing
    semicolon creates a syntax error.


    > open $fh, $child_process || Carp::confess("can't open $child_process:
    > $!");


    You have a precedence problem with the ||; it should be "or".

    > while (<$fh>) {
    > chomp;
    > if (/successful/) {
    > last;
    > }
    > }
    > close $fh;
    > # end code
    >
    > The script will block when it tries to close the filehandle. How do I
    > force it to close while tail is still running?


    You capture the pid of the running process (it is the return value of a
    pipe open), and then you kill it just prior to the close.

    my $pid=open my $fh, $cmd or die $!;
    #....
    kill 1,$pid;
    close $fh;

    You can use 2 or 15 instead of 1 to kill it with, but 1 seems to do the job
    without generating spurious messages to STDERR on my system. You can't use 13
    (SIGPIPE) because if the child honored that signal, you wouldn't have the
    problem in the first place.
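    Putting it all together, a complete local-side version might look like
    this (the ssh target, file name, and /successful/ pattern are just the
    placeholders from your own post):

    use strict;
    use warnings;

    my $cmd = "ssh username\@host tail --follow=name file_to_be_tailed |";
    my $pid = open my $fh, $cmd or die "can't open $cmd: $!";

    while (<$fh>) {
        chomp;
        last if /successful/;
    }

    kill 1, $pid;    # stop the local ssh so the close below doesn't block
    close $fh;       # close() then reaps the child and puts its status in $?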

    Xho

     
    Xho, Aug 30, 2006
    #4
  5. Xho

    Guest

    wrote:
    > "mumebuhi" <> wrote:
    > >
    > > The script will block when it tries to close the filehandle. How do I
    > > force it to close while tail is still running?

    >
    > You capture the pid of the running process (it is the return value of a
    > pipe open), and then you kill it just prior to the close.
    >
    > my $pid=open my $fh, $cmd or die $!;
    > #....
    > kill 1,$pid;
    > close $fh;


    Unfortunately, this seems to leave idle processes hanging around
    on the remote server. They will go away if the file they are tailing
    ever grows enough that tail -f fills up the pipe buffer, but if that
    never happens then they might never get cleaned up. Maybe the safest thing
    to do is to write a Perl emulation of tail which runs on the remote server.
    Then the termination criteria are evaluated on the remote server
    rather than the local one.
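    A bare-bones sketch of that remote-side emulation might look like the
    following; the /successful/ pattern comes from the original post, while
    the script name, polling interval and file handling are assumptions:

    use strict;
    use warnings;

    # run on the remote host, e.g.:
    #   ssh username@host perl remote_tail.pl file_to_be_tailed
    my $file = shift or die "usage: $0 file\n";
    open my $fh, '<', $file or die "can't open $file: $!";
    seek $fh, 0, 2;            # start at end of file, as tail -f does

    while (1) {
        while (my $line = <$fh>) {
            print $line;
            exit 0 if $line =~ /successful/;   # terminate on the remote side
        }
        sleep 1;               # wait for the file to grow
        seek $fh, 0, 1;        # clear the EOF condition so reads resume
    }

    The local script then simply reads the pipe until it closes; no kill is
    needed because the remote process exits on its own.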

    Xho

     
    Xho, Aug 31, 2006
    #5
  6. mumebuhi

    mumebuhi Guest

    > > You capture the pid of the running process (it is the return value of a
    > > pipe open), and then you kill it just prior to the close.
    > >
    > > my $pid=open my $fh, $cmd or die $!;
    > > #....
    > > kill 1,$pid;
    > > close $fh;


    This is it. This is the perfect solution for the time being. The
    particular remote process, fortunately, does not need to be killed
    because it is intended that way. I am with you that this is probably
    not safe if the remote process needs to be cleaned up properly.

    Thank you very much, Xho!


    Buhi
     
    mumebuhi, Aug 31, 2006
    #6
  7. Charles DeRykus

    mumebuhi wrote:
    >>> You capture the pid of the running process (it is the return value of a
    >>> pipe open), and then you kill it just prior to the close.
    >>>
    >>> my $pid=open my $fh, $cmd or die $!;
    >>> #....
    >>> kill 1,$pid;
    >>> close $fh;

    >
    > This is it. This is the perfect solution for the time being. The
    > particular remote process, fortunately, does not need to be killed
    > because it is intended that way. I am with you that this is probably
    > not safe if the remote process needs to be cleaned up properly.
    >


    'HUP' works but there's a potentially safer Unix idiom using 'TERM' and
    'KILL':

    kill 'TERM', $pid
        or kill 'KILL', $pid
        or warn "couldn't signal $pid";


    Alternatively, returning the remote pid followed by an 'exec' enables
    signaling the remote process directly:


    my $child_process = "ssh id@host 'echo $$; exec tail --follow=name'"
    ..
    $remote_pid = <$fh>;
    while (<$fh>) {
        ...
        if ( /some_condition/ ) {
            system "ssh... 'kill -s TERM $remote_pid'"
            ..
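    Filled out a bit, and assuming the $$ is backslash-escaped so the remote
    shell rather than the local Perl expands it (the file name and pattern
    are borrowed from the original post), that might look like:

    use strict;
    use warnings;

    my $cmd = "ssh id\@host 'echo \$\$; exec tail --follow=name file_to_be_tailed' |";
    my $pid = open my $fh, $cmd or die "can't open $cmd: $!";

    chomp(my $remote_pid = <$fh>);   # first line is the pid of the remote tail
    while (<$fh>) {
        if (/successful/) {
            # signal the remote tail directly; the local ssh then sees EOF and exits
            system "ssh id\@host 'kill -s TERM $remote_pid'";
            last;
        }
    }
    close $fh;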


    hth,
    --
    Charles DeRykus
     
    Charles DeRykus, Sep 5, 2006
    #7
