exec and named pipe questions

Discussion in 'Perl Misc' started by Dave Saville, Sep 7, 2013.

  1. Dave Saville

    Dave Saville Guest

    I am writing a perl daemon on a Raspberry Pi.

    The perl script talks and listens to another process that it has
    started via fork/exec.

    Normally when one forks it is usual to close unneeded file handles -
    the first question then is should one close *all* the open handles if
    you are going to call exec anyway?

    Secondly, I was under the impression that it did not matter in which
    order named pipes are opened. The forked process is reading one named
    pipe and writing to a second. But more often than not my perl script
    hangs trying to open one.

    open my $_STDOUT, '<', 'fifo_stdout' or die "Can't open fifo_stdout
    $!";

    never returns.

    With two xterms I can "echo hi > pipe" and "cat < pipe" and it does
    not matter which order I do them in - the first waits until the
    second runs. But surely open should not be trying to read, should it?

    TIA
    --
    Regards
    Dave Saville
    Dave Saville, Sep 7, 2013
    #1

  2. "Dave Saville" <> writes:
    > I am writing a perl daemon on a Raspberry Pi.
    >
    > The perl script talks and listens to another process that it has
    > started via fork/exec.
    >
    > Normally when one forks it is usual to close unneeded file handles -
    > the first question then is should one close *all* the open handles if
    > you are going to call exec anyway?


    Unintended filehandle inheritance across exec can cause serious
    problems which are difficult to debug, especially as the new program
    could, in turn, pass an accidentally inherited handle further on to
    other programs started by it, and so forth. The 'usual' problem case
    would be a file handle 'sitting' on some global resource the
    original program needs as well, e.g. a listening TCP socket bound to
    some address: when such a filehandle has been leaked to some random
    other process and the process it originated from terminates, the
    program which created the listening socket can't be restarted until
    the stray file descriptor has been found and eliminated.

    The 'simple' solution to this problem is to set the FD_CLOEXEC flag
    for all long-lived open filehandles of a process except if they're
    supposed to be inherited across an exec. Usually, perl does this
    automatically (see the perlvar documentation for $^F for more details
    about that).
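
    For illustration, a minimal sketch (the log-file handle and its path
    are purely hypothetical) of setting FD_CLOEXEC by hand with fcntl,
    for a handle that perl did not flag automatically:

    use strict;
    use warnings;
    use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC);

    # hypothetical long-lived handle; imagine a listening socket instead
    open(my $log, '>>', '/tmp/daemon.log') or die "open: $!";

    # read the current descriptor flags and add FD_CLOEXEC so the handle
    # is closed automatically in any child that calls exec()
    my $flags = fcntl($log, F_GETFD, 0) or die "fcntl F_GETFD: $!";
    fcntl($log, F_SETFD, $flags | FD_CLOEXEC) or die "fcntl F_SETFD: $!";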

    There's generally no reason to close file handles explicitly in Perl,
    as this will be done automatically either when the corresponding file
    handle variable goes out of scope or during 'global destruction',
    traditional nonsense in certain Linux man pages notwithstanding
    (close(2) does not imply flushing kernel buffers, hence close
    returning 'everything's fine' does not mean silent loss of data won't
    happen).

    > Secondly, I was under the impression that it did not matter in which
    > order named pipes are opened. The forked process is reading one named
    > pipe and writing to a second. But more often than not my perl script
    > hangs trying to open one.
    >
    > open my $_STDOUT, '<', 'fifo_stdout' or die "Can't open fifo_stdout
    > $!";


    Opening a FIFO for reading will block until 'something else' opens it
    for writing.
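
    A minimal self-contained sketch of that behaviour (the FIFO path and
    the two-second delay are made up for the demonstration): the parent's
    read-open only returns once the child opens the FIFO for writing.

    use strict;
    use warnings;
    use POSIX qw(mkfifo);

    my $fifo = '/tmp/demo_fifo';                # hypothetical path
    unless (-p $fifo) {
        mkfifo($fifo, 0600) or die "mkfifo: $!";
    }

    my $pid = fork() // die "fork: $!";
    if ($pid == 0) {                            # child: the eventual writer
        sleep 2;                                # let the parent block visibly
        open(my $wr, '>', $fifo) or die "writer open: $!";
        print {$wr} "hi\n";
        exit 0;
    }

    # parent: this open blocks for about two seconds, i.e. until the
    # child opens the FIFO for writing, and then returns normally
    open(my $rd, '<', $fifo) or die "reader open: $!";
    print scalar <$rd>;                         # prints "hi"
    waitpid($pid, 0);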
    Rainer Weikusat, Sep 7, 2013
    #2

  3. Dave Saville

    Dave Saville Guest

    On Sat, 7 Sep 2013 15:29:13 UTC, Rainer Weikusat
    <> wrote:

    <snip>
    > The 'simple' solution to this problem is to set the FD_CLOEXEC flag
    > for all long-lived open filehandles of a process except if they're
    > supposed to be inherited across an exec. Usually, perl does this
    > automatically (see the perlvar documentation for $^F for more details
    > about that).


    <snip>

    Thank you for the explanation - will look into that reference.

    > > Secondly, I was under the impression that it did not matter in which
    > > order named pipes are opened. The forked process is reading one named
    > > pipe and writing to a second. But more often than not my perl script
    > > hangs trying to open one.
    > >
    > > open my $_STDOUT, '<', 'fifo_stdout' or die "Can't open fifo_stdout
    > > $!";

    >
    > Opening a FIFO for reading will block until 'something else' opens it
    > for writing.


    I had worked that out by now :) - Google suggests cheating by opening
    +< which seems to work fine.

    --
    Regards
    Dave Saville
    Dave Saville, Sep 7, 2013
    #3
  4. On 9/7/2013 7:14 AM, Dave Saville wrote:
    > ...

    But more often than not my perl script
    > hangs trying to open one.
    >
    > open my $_STDOUT, '<', 'fifo_stdout' or die "Can't open fifo_stdout
    > $!";
    >
    > never returns.
    >


    Since the open for read blocks until it's created on the other end, you
    may want to set the handle non-blocking in case the writer's open
    failed. (You could add a wait/retry with a timeout to recover from
    a delayed open)

    use Fcntl;
    sysopen($_STDOUT, "fifo_stdout", O_RDONLY|O_NONBLOCK) or die ...'

    This also avoids the ambiguity of opening nonblocking via '+<' but then
    not knowing if the writer closed the handle.
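
    A sketch of that wait/retry idea (the ten-second deadline and the
    0.2-second poll interval are arbitrary examples, not anything
    prescribed above):

    use strict;
    use warnings;
    use Fcntl;
    use Errno qw(EAGAIN);

    # the non-blocking open returns at once even if no writer exists yet,
    # so poll for the first data with a deadline rather than letting a
    # missing writer hang the daemon on open()
    sysopen(my $fh, 'fifo_stdout', O_RDONLY | O_NONBLOCK)
        or die "Can't open fifo_stdout: $!";

    my $deadline = time + 10;
    my $buf;

    while (time < $deadline) {
        my $n = sysread($fh, $buf, 4096);
        last if defined $n && $n > 0;           # got something
        # $n == 0 means no writer is attached yet; EAGAIN means a writer
        # is attached but has not written anything
        die "read error: $!" if !defined $n && $! != EAGAIN;
        select(undef, undef, undef, 0.2);       # short pause, then retry
    }

    die "no writer appeared in time\n" unless defined $buf && length $buf;
    print $buf;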

    --
    Charles DeRykus
    Charles DeRykus, Sep 7, 2013
    #4
  5. Dave Saville

    Dave Saville Guest

    On Sat, 7 Sep 2013 18:36:06 UTC, Ben Morrow <> wrote:

    >
    > Quoth "Dave Saville" <>:
    > > On Sat, 7 Sep 2013 15:29:13 UTC, Rainer Weikusat
    > > <> wrote:
    > >
    > > > Opening a FIFO for reading will block until 'something else' opens it
    > > > for writing.

    > >
    > > I had worked that out by now :) - Google suggests cheating by opening
    > > +< which seems to work fine.

    >
    > The only problem with this is that you will not get an EOF when the
    > other end closes the fifo, since you still have it open for writing.


    Hi Ben

    In this particular case that is not a problem.

    --
    Regards
    Dave Saville
    Dave Saville, Sep 8, 2013
    #5
  6. Dave Saville

    Dave Saville Guest

    On Sat, 7 Sep 2013 18:35:05 UTC, Ben Morrow <> wrote:

    >
    > Quoth "Dave Saville" <>:
    > > I am writing a perl daemon on a Raspberry Pi.
    > >
    > > The perl script talks and listens to another process that it has
    > > started via fork/exec.
    > >
    > > Normally when one forks it is usual to close unneeded file handles -
    > > the first question then is should one close *all* the open handles if
    > > you are going to call exec anyway?

    >
    > You should close any filehandles which you don't want open in the child
    > and which are not marked close-on-exec. Perl filehandles usually are,
    > except for STDIN, STDOUT and STDERR; this can be changed using $^F (a
    > bad idea) or with fcntl. See Fcntl and fcntl(2).
    >


    Ah, that makes sense. I have put specific closes in anyway. I was just
    wondering about use counts if everything gets wiped by exec().

    > Obviously you also need to ensure any handles you *do* want the execed
    > process to inherit are *not* marked close-on-exec.
    >
    > > Secondly, I was under the impression that it did not matter in which
    > > order named pipes are opened. The forked process is reading one named
    > > pipe and writing to a second. But more often than not my perl script
    > > hangs trying to open one.
    > >
    > > open my $_STDOUT, '<', 'fifo_stdout' or die "Can't open fifo_stdout
    > > $!";
    > >
    > > never returns.

    >
    > It should return once something has opened the pipe for writing. Does
    > this not happen?


    Well it would if there were not another pipe in the other direction.
    :) Deadlock. One way around it was to put the open into another
    thread, so it does not matter if that blocks; the other was the +<
    trick. I will try Charles' O_NONBLOCK as well.
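
    For completeness, a sketch of a third, thread-free workaround (FIFO
    names hypothetical): if both processes can be made to open the two
    FIFOs in the same order, each open pairs up with its counterpart on
    the other side and neither end deadlocks.

    use strict;
    use warnings;

    # parent side: open the FIFO we read from FIRST, then the one we
    # write to; the child must mirror this, i.e. open 'child_to_parent'
    # for writing first and 'parent_to_child' for reading second
    open(my $from_child, '<', 'child_to_parent') or die "child_to_parent: $!";
    open(my $to_child,   '>', 'parent_to_child') or die "parent_to_child: $!";

    print {$to_child} "hello child\n";
    my $reply = <$from_child>;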

    --
    Regards
    Dave Saville
    Dave Saville, Sep 8, 2013
    #6
  7. Dave Saville

    Dave Saville Guest

    On Sat, 7 Sep 2013 20:42:33 UTC, Charles DeRykus <>
    wrote:

    > On 9/7/2013 7:14 AM, Dave Saville wrote:
    > > ...

    > But more often than not my perl script
    > > hangs trying to open one.
    > >
    > > open my $_STDOUT, '<', 'fifo_stdout' or die "Can't open fifo_stdout
    > > $!";
    > >
    > > never returns.
    > >

    >
    > Since the open for read blocks until it's created on the other end, you
    > may want to set the handle non-blocking in case the writer's open
    > failed. (You could add a wait/retry with a timeout to recover from
    > a delayed open)
    >
    > use Fcntl;
    > sysopen($_STDOUT, "fifo_stdout", O_RDONLY|O_NONBLOCK) or die ...'
    >
    > This also avoids the ambiguity of opening nonblocking via '+<' but then
    > not knowing if the writer closed the handle.
    >


    Thanks Charles, works a treat - once I put a check for defined in the
    read loop. :) Never used non-blocking I/O before.
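
    For anyone following along, a rough sketch (buffer size and pause
    length arbitrary) of such a read loop, where the defined check
    distinguishes 'no data yet' from end-of-file:

    use strict;
    use warnings;
    use Fcntl;
    use Errno qw(EAGAIN);

    sysopen(my $fh, 'fifo_stdout', O_RDONLY | O_NONBLOCK)
        or die "Can't open fifo_stdout: $!";

    while (1) {
        my $chunk;
        my $n = sysread($fh, $chunk, 4096);
        if (defined $n) {
            last if $n == 0;                    # EOF: nothing has the write end open
            print $chunk;
        }
        elsif ($! == EAGAIN) {                  # a writer is attached but idle
            select(undef, undef, undef, 0.1);   # brief pause instead of spinning
        }
        else {
            die "read error on fifo_stdout: $!";
        }
    }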
    --
    Regards
    Dave Saville
    Dave Saville, Sep 8, 2013
    #7
  8. Charles DeRykus <> writes:

    [FIFO]

    > Since the open for read blocks until it's created on the other end,
    > you may want to set the handle non-blocking in case the writer's open
    > failed. (You could add a wait/retry with a timeout to recover from
    > a delayed open)
    >
    > use Fcntl;
    > sysopen($_STDOUT, "fifo_stdout", O_RDONLY|O_NONBLOCK) or die ...'
    >
    > This also avoids the ambiguity of opening nonblocking via '+<' but then
    > not knowing if the writer closed the handle.


    Using a mode of +< is not 'a non-blocking open' but an open in
    'read-write mode'. According to the Linux fifo(7) man page,
    POSIX/UNIX leaves the behaviour of that undefined. At least on
    Linux, it will succeed on the grounds that the process opening the
    FIFO in this way is both a reader and a writer. Provided a FIFO was
    successfully opened for reading in blocking mode, an EOF condition
    (a read returning 0 bytes) will occur once the last writer has closed
    the FIFO. Obviously, this can never be observed on an O_RDWR-mode
    file descriptor referring to the FIFO, since that descriptor itself
    counts as a writer.
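
    To make the contrast concrete, a small sketch (FIFO name taken from
    the thread; it needs a writer on the other side to do anything):

    use strict;
    use warnings;

    # read-only: the loop below ends (readline returns undef, i.e. EOF)
    # as soon as the last writer closes the FIFO
    open(my $ro, '<', 'fifo_stdout') or die "read-only open: $!";
    while (defined(my $line = <$ro>)) {
        print $line;
    }
    close $ro;

    # read-write: this process itself now counts as a writer, so the EOF
    # above can never occur; an equivalent loop would simply block once
    # the real writer goes away
    open(my $rw, '+<', 'fifo_stdout') or die "read-write open: $!";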
    Rainer Weikusat, Sep 8, 2013
    #8
  9. Dave Saville

    Dave Saville Guest

    On Sun, 8 Sep 2013 12:25:45 UTC, Ben Morrow <> wrote:

    >
    > Quoth "Dave Saville" <>:
    > > On Sat, 7 Sep 2013 18:35:05 UTC, Ben Morrow <> wrote:
    > > > Quoth "Dave Saville" <>:
    > > > >
    > > > > open my $_STDOUT, '<', 'fifo_stdout' or die "Can't open fifo_stdout
    > > > > $!";
    > > > >
    > > > > never returns.
    > > >
    > > > It should return once something has opened the pipe for writing. Does
    > > > this not happen?

    > >
    > > Well it would if there were not another pipe in the other direction.
    > > :) Deadlock.

    >
    > If you want two-way communication you might be better off with a
    > Unix-domain socket instead.
    >


    Hi Ben

    And how would one persuade another process to use the socket for its
    STDOUT?

    exec 'foo -input=input_fifo > output_fifo';


    --
    Regards
    Dave Saville
    Dave Saville, Sep 8, 2013
    #9
  10. "Dave Saville" <> writes:
    > On Sun, 8 Sep 2013 12:25:45 UTC, Ben Morrow <> wrote:
    >
    >>
    >> Quoth "Dave Saville" <>:
    >> > On Sat, 7 Sep 2013 18:35:05 UTC, Ben Morrow <> wrote:
    >> > > Quoth "Dave Saville" <>:
    >> > > >
    >> > > > open my $_STDOUT, '<', 'fifo_stdout' or die "Can't open fifo_stdout
    >> > > > $!";
    >> > > >
    >> > > > never returns.
    >> > >
    >> > > It should return once something has opened the pipe for writing. Does
    >> > > this not happen?
    >> >
    >> > Well it would if there were not another pipe in the other direction.
    >> > :) Deadlock.

    >>
    >> If you want two-way communication you might be better off with a
    >> Unix-domain socket instead.
    >>

    >
    > Hi Ben
    >
    > And how would one persuade another process to use the socket for its
    > STDOUT?
    >
    > exec 'foo -input=input_fifo > output_fifo';


    By redirecting its standard output file descriptor (1, associated with
    the STDOUT stream in perl) to the socket file descriptor:

    open(STDOUT, '>&', $sockfh);
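
    Fleshed out a little, a sketch (the program name 'foo' is borrowed
    from the earlier post, everything else is illustrative) of wiring a
    child's STDIN and STDOUT to one end of a socketpair before exec:

    use strict;
    use warnings;
    use Socket;
    use IO::Handle;

    socketpair(my $parent_end, my $child_end, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
        or die "socketpair: $!";

    my $pid = fork() // die "fork: $!";
    if ($pid == 0) {                              # child
        close $parent_end;
        open(STDIN,  '<&', $child_end) or die "dup STDIN: $!";
        open(STDOUT, '>&', $child_end) or die "dup STDOUT: $!";
        close $child_end;
        exec 'foo' or die "exec foo: $!";         # plus whatever options make foo use STDIN
    }

    close $child_end;
    $parent_end->autoflush(1);
    print {$parent_end} "a line for foo\n";       # write to the child's STDIN
    my $reply = <$parent_end>;                    # read from the child's STDOUT
    waitpid($pid, 0);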
    Rainer Weikusat, Sep 8, 2013
    #10
  11. On 9/8/2013 4:07 AM, Rainer Weikusat wrote:
    > Charles DeRykus <> writes:
    >
    > [FIFO]
    >
    >> Since the open for read blocks until it's created on the other end,
    >> you may want to set the handle non-blocking in case the writer's open
    >> failed. (You could add a wait/retry with a timeout to recover from
    >> a delayed open)
    >>
    >> use Fcntl;
    >> sysopen($_STDOUT, "fifo_stdout", O_RDONLY|O_NONBLOCK) or die ...'
    >>
    >> This also avoids the ambiguity of opening nonblocking via '+<' but then
    >> not knowing if the writer closed the handle.

    >
    > Using a mode of +< is not 'a non-blocking open' but an open in
    > 'read-write mode'. According to the Linux fifo(7) man page,
    > ...


    Point taken, but of course I was just using "non-blocking" loosely,
    to describe a perl open that wouldn't block immediately.

    [ At times I just have to side with Humpty-Dumpty's looseness rather
    than Alice's bafflement. ]

    --
    Charles DeRykus
    Charles DeRykus, Sep 9, 2013
    #11
  12. Charles DeRykus <> writes:
    > On 9/8/2013 4:07 AM, Rainer Weikusat wrote:
    >> Charles DeRykus <> writes:
    >>
    >> [FIFO]
    >>
    >>> Since the open for read blocks until it's created on the other end,
    >>> you may want to set the handle non-blocking in case the writer's open
    >>> failed. (You could add a wait/retry with a timeout to recover from
    >>> a delayed open)
    >>>
    >>> use Fcntl;
    >>> sysopen($_STDOUT, "fifo_stdout", O_RDONLY|O_NONBLOCK) or die ...'
    >>>
    >>> This also avoids the ambiguity of opening nonblocking via '+<' but then
    >>> not knowing if the writer closed the handle.

    >>
    >> Using a mode of +< is not 'a non-blocking open' but an open in
    >> 'read-write mode'. According to the Linux fifo(7) man page,
    >> ...

    >
    > Point taken, but of course I was just using "non-blocking" loosely,
    > to describe a perl open that wouldn't block immediately.


    Using the same term with two different meanings in the same text is at
    least very confusing.

    > [ At times I just have to side with Humpty-Dumpty's looseness rather
    > than Alice's bafflement. ]


    ?
    Rainer Weikusat, Sep 9, 2013
    #12
