exec and named pipe questions


Dave Saville

I am writing a perl daemon on a Raspberry Pi.

The perl script talks and listens to another process that it has
started via fork/exec.

When one forks it is usual to close unneeded file handles - the
first question, then, is: should one close *all* the open handles if
you are going to call exec anyway?

Secondly, I was under the impression that it did not matter in which
order named pipes are opened. The forked process is reading one named
pipe and writing to a second. But more often than not my perl script
hangs trying to open one.

open my $_STDOUT, '<', 'fifo_stdout' or die "Can't open fifo_stdout $!";

never returns.

With two xterms I can "echo hi > pipe" and "cat < pipe" and it matters
not which order I do them in - the first waits until the second runs.
But surely open should not be trying to read, should it?

TIA
 

Rainer Weikusat

Dave Saville said:
I am writing a perl daemon on a Raspberry Pi.

The perl script talks and listens to another process that it has
started via fork/exec.

When one forks it is usual to close unneeded file handles - the
first question, then, is: should one close *all* the open handles if
you are going to call exec anyway?

Unintended filehandle inheritance across exec can cause serious
problems which are difficult to debug, especially as the new program
could, in turn, pass an accidentally inherited handle further on to
other programs started by it, and so forth. The 'usual' problem case
would be a file handle 'sitting' on some global resource the
original program needs as well, e.g., a listening TCP socket bound to
some address: when such a filehandle has been leaked to some random
other process and the process it originated from terminates, the
program which created the listening socket can't be restarted until the
stray file descriptor has been found and eliminated.

The 'simple' solution to this problem is to set the FD_CLOEXEC flag
for all long-lived open filehandles of a process except if they're
supposed to be inherited across an exec. Usually, perl does this
automatically (see perlvar documentation for $^F for more details
about that).
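
For illustration, here's roughly what doing that by hand with fcntl
looks like (the file being opened is just a stand-in for some
long-lived handle):

use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC);

open my $fh, '<', $0 or die "open: $!";  # stand-in for a long-lived handle

# Mark the handle close-on-exec so it cannot leak to programs
# started via exec().
my $flags = fcntl($fh, F_GETFD, 0) or die "fcntl F_GETFD: $!";
fcntl($fh, F_SETFD, $flags | FD_CLOEXEC) or die "fcntl F_SETFD: $!";

Clearing the flag instead ($flags & ~FD_CLOEXEC) does the opposite,
for a handle that is supposed to survive the exec.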

There's generally no reason to close file handles explicitly in Perl,
as this will either be done automatically when the corresponding file
handle variable goes out of scope or during 'global destruction',
traditional nonsense in certain Linux man pages notwithstanding
(close(2) does not imply flushing kernel buffers; hence, close
returning 'everything is fine' does not mean silent loss of data
can't happen).
Secondly, I was under the impression that it did not matter in which
order named pipes are opened. The forked process is reading one named
pipe and writing to a second. But more often than not my perl script
hangs trying to open one.

open my $_STDOUT, '<', 'fifo_stdout' or die "Can't open fifo_stdout $!";

Opening a FIFO for reading will block until 'something else' opens it
for writing.
 

Dave Saville

On Sat, 7 Sep 2013 15:29:13 UTC, Rainer Weikusat wrote:

The 'simple' solution to this problem is to set the FD_CLOEXEC flag
for all long-lived open filehandles of a process except if they're
supposed to be inherited across an exec. Usually, perl does this
automatically (see perlvar documentation for $^F for more details
about that).

<snip>

Thank you for the explanation - will look into that reference.
Opening a FIFO for reading will block until 'something else' opens it
for writing.

I had worked that out by now :) - Google suggests cheating by opening
with '+<', which seems to work fine.
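
For the record, the trick is just:

# Read-write mode: on Linux this returns at once because the
# process counts as a writer on its own FIFO.
open my $_STDOUT, '+<', 'fifo_stdout' or die "Can't open fifo_stdout $!";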
 

Charles DeRykus

But more often than not my perl script
hangs trying to open one.

open my $_STDOUT, '<', 'fifo_stdout' or die "Can't open fifo_stdout $!";

never returns.

Since the open for read blocks until the FIFO is opened for writing
on the other end, you may want to set the handle non-blocking in case
the writer's open failed. (You could add a wait/retry with a timeout
to recover from a delayed open.)

use Fcntl;
sysopen($_STDOUT, "fifo_stdout", O_RDONLY|O_NONBLOCK) or die ...;

This also avoids the ambiguity of opening nonblocking via '+<' but then
not knowing if the writer closed the handle.
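
Roughly, a sketch of the wait/retry idea (the retry budget and buffer
size are arbitrary):

use Fcntl;
use Errno qw(EAGAIN);

sysopen(my $fh, 'fifo_stdout', O_RDONLY | O_NONBLOCK)
    or die "Can't open fifo_stdout: $!";

# Give the writer a few seconds to show up and produce data.
my $buf;
for my $try (1 .. 10) {
    my $n = sysread($fh, $buf, 4096);
    last if defined $n && $n > 0;                      # got data
    die "read fifo_stdout: $!" if !defined $n && !$!{EAGAIN};
    sleep 1;                                           # nothing yet - retry
}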
 

Dave Saville

The only problem with this is that you will not get an EOF when the
other end closes the fifo, since you still have it open for writing.

Hi Ben

In this particular case that is not a problem.
 

Dave Saville

You should close any filehandles which you don't want open in the child
and which are not marked close-on-exec. Perl filehandles usually are,
except for STDIN, STDOUT and STDERR; this can be changed using $^F (a
bad idea) or with fcntl. See Fcntl and fcntl(2).

Ah, that makes sense. I have put specific closes in anyway. I was just
wondering about use counts if everything gets wiped by exec().
Obviously you also need to ensure any handles you *do* want the execed
process to inherit are *not* marked close-on-exec.


It should return once something has opened the pipe for writing. Does
this not happen?

Well it would if there were not another pipe in the other direction.
:) Deadlock. One way around it was to put the open into another
thread, so it does not matter if that blocks; the other was the '+<'
trick. I will try Charles' O_NONBLOCK as well.
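
For anyone reading later: the deadlock is both sides opening their
read ends first, so each blocks waiting for a writer that never
arrives. Another way out is agreeing on a complementary open order -
a sketch with made-up FIFO names, using fork to stand in for the real
exec'd program:

use IO::Handle;

my $pid = fork() // die "fork: $!";
if ($pid == 0) {
    # Child: read end of fifo_in first, write end of fifo_out second.
    open my $r, '<', 'fifo_in'  or die "child fifo_in: $!";
    open my $w, '>', 'fifo_out' or die "child fifo_out: $!";
    $w->autoflush(1);
    print {$w} scalar <$r>;     # echo one line back
    exit 0;
}
# Parent: the mirror image; each blocking open pairs up with the
# child's matching open, so neither side waits forever.
open my $w, '>', 'fifo_in'  or die "parent fifo_in: $!";
open my $r, '<', 'fifo_out' or die "parent fifo_out: $!";
$w->autoflush(1);
print {$w} "ping\n";
print scalar <$r>;              # prints "ping"
waitpid($pid, 0);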
 

Dave Saville

But more often than not my perl script

Since the open for read blocks until the FIFO is opened for writing
on the other end, you may want to set the handle non-blocking in case
the writer's open failed. (You could add a wait/retry with a timeout
to recover from a delayed open.)

use Fcntl;
sysopen($_STDOUT, "fifo_stdout", O_RDONLY|O_NONBLOCK) or die ...;

This also avoids the ambiguity of opening nonblocking via '+<' but then
not knowing if the writer closed the handle.

Thanks Charles, works a treat - once I put a check for defined in the
read loop :) Never used non-blocking I/O before.
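
The point of the defined check: on a non-blocking handle sysread
returns undef with EAGAIN when there's no data yet, which is not the
same as the 0 it returns at EOF. Something like this ($fh is the
handle from Charles' sysopen; process() is a made-up handler):

use Errno qw(EAGAIN);

while (1) {
    my $n = sysread($fh, my $buf, 4096);
    if (!defined $n) {
        die "read: $!" unless $!{EAGAIN};
        select(undef, undef, undef, 0.1);   # no data yet - don't spin
        next;
    }
    last if $n == 0;                        # EOF: writer has gone away
    process($buf);                          # made-up handler
}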
 

Rainer Weikusat

[FIFO]
Since the open for read blocks until the FIFO is opened for writing
on the other end, you may want to set the handle non-blocking in case
the writer's open failed. (You could add a wait/retry with a timeout
to recover from a delayed open.)

use Fcntl;
sysopen($_STDOUT, "fifo_stdout", O_RDONLY|O_NONBLOCK) or die ...;

This also avoids the ambiguity of opening nonblocking via '+<' but then
not knowing if the writer closed the handle.

Using a mode of +< is not 'a non-blocking open' but an open in
'read-write mode'. According to the Linux fifo(7) man page,
POSIX/UNIX leaves the behaviour of that undefined. At least on
Linux, it will succeed on the grounds that the process opening the
FIFO in this way is both a reader and a writer. Provided a FIFO was
successfully opened for reading in blocking mode, an EOF condition
(read returning 0 bytes read) will occur once the last writer closes
the FIFO. Obviously, this can never be observed on an O_RDWR-mode
file descriptor referring to the FIFO.
 

Dave Saville

If you want two-way communication you might be better off with a
Unix-domain socket instead.

Hi Ben

And how would one persuade another process to use the socket for its
STDOUT?

exec 'foo -input=input_fifo > output_fifo';
 

Rainer Weikusat

Dave Saville said:
Hi Ben

And how would one persuade another process to use the socket for its
STDOUT?

exec 'foo -input=input_fifo > output_fifo';

By redirecting its standard output file descriptor (1, associated with
the STDOUT stream in perl) to the socket file descriptor:

open(STDOUT, '>&', $sockfh);
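
Put together, a minimal sketch ('./foo' being some hypothetical
program which reads STDIN and writes STDOUT):

use Socket;
use IO::Handle;

socketpair(my $parent_fh, my $child_fh, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
    or die "socketpair: $!";

my $pid = fork() // die "fork: $!";
if ($pid == 0) {
    close $parent_fh;
    open(STDIN,  '<&', $child_fh) or die "dup STDIN: $!";
    open(STDOUT, '>&', $child_fh) or die "dup STDOUT: $!";
    exec './foo' or die "exec: $!";        # hypothetical program
}
close $child_fh;
$parent_fh->autoflush(1);
print {$parent_fh} "hello\n";              # arrives on foo's STDIN
my $reply = <$parent_fh>;                  # whatever foo printed to STDOUT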
 

Charles DeRykus

[FIFO]
Since the open for read blocks until the FIFO is opened for writing
on the other end, you may want to set the handle non-blocking in case
the writer's open failed. (You could add a wait/retry with a timeout
to recover from a delayed open.)

use Fcntl;
sysopen($_STDOUT, "fifo_stdout", O_RDONLY|O_NONBLOCK) or die ...;

This also avoids the ambiguity of opening nonblocking via '+<' but then
not knowing if the writer closed the handle.

Using a mode of +< is not 'a non-blocking open' but an open in
'read-write mode'. According to the Linux fifo(7) man page,
...

Point taken, but of course I was just using "non-blocking" in a
descriptive fashion, for a perl open that wouldn't block immediately.

[ At times I just have to side with Humpty-Dumpty's looseness rather
than Alice's bafflement. ]
 

Rainer Weikusat

Charles DeRykus said:
[FIFO]
Since the open for read blocks until the FIFO is opened for writing
on the other end, you may want to set the handle non-blocking in case
the writer's open failed. (You could add a wait/retry with a timeout
to recover from a delayed open.)

use Fcntl;
sysopen($_STDOUT, "fifo_stdout", O_RDONLY|O_NONBLOCK) or die ...;

This also avoids the ambiguity of opening nonblocking via '+<' but then
not knowing if the writer closed the handle.

Using a mode of +< is not 'a non-blocking open' but an open in
'read-write mode'. According to the Linux fifo(7) man page,
...

Point taken, but of course I was just using "non-blocking" in a
descriptive fashion, for a perl open that wouldn't block immediately.

Using the same term with two different meanings in the same text is at
least very confusing.
[ At times I just have to side with Humpty-Dumpty's looseness rather
than Alice's bafflement. ]

?
 
