using IPC::Open3 to write to *and* read from a process...

RU

Hi Folks,

I'm currently working on some cluster scripts which, among other things,
need to start and stop oracle. The typical (/bin/sh) nasty cluster
script does something like:

#!/bin/sh
su -m oracle -c "sqlplus / as sysdba"<<_EOF_
startup
_EOF_

-snip-

I'm trying to provide the same (or better) functionality using more-or-less
pure perl. I have come up with a subroutine ("run_command()") which
I will include at the bottom. I'm using select to determine if there is
output on STDOUT or STDERR, and if I can write to the process on STDIN
(if I have input I'd like to send to the process). The problem is, I'm
catching SIGCHLD, and when a SIGCHLD is caught, I stop looking for output
from STDOUT and STDERR. I've tested the subroutine and it seems to work,
but I'm slightly worried that it might be possible to lose output because
of the interruption caused by SIGCHLD. You might wonder why I'm not just
using Expect. I'm trying to put something minimal together that solves
this problem for many cases, and Expect has too much administrative
overhead for me. Anyhow, is there a better/more reliable way of ensuring
I get all the output from the child process?

Also, you may notice that I keep a count of open output filedescriptors
STDOUT and STDERR, and leave the while-loop when the count reaches zero.
Unfortunately this does not work, as STDERR doesn't get closed
(apparently) when the child process terminates, or at least the method I
used doesn't notice it. Any comments?

thanks,

Rob Urban

----------------------------snip-----------------------------------------
package ScriptLib;

use FileHandle;
use IPC::Open3;

sub run_command
{
    my ($cmdref, $su_user, $input_ref);

    # parse args
    while ($_ = shift) {
        if (/^-cmd/) {
            $cmdref = shift;
        } elsif (/^-su/) {
            $su_user = shift;
        } elsif (/^-input/) {
            $input_ref = shift;
        }
    }

    my $debug = 1;

    my @cmd = @{$cmdref};

    if ($su_user) {
        unshift(@cmd, 'su', '-m', $su_user, '-c');
    }

    my @input = defined($input_ref) ? @{$input_ref} : ();
    my ($read_fh, $write_fh, $err_fh);

    $debug && print "run_command()\n";
    $debug && print "CMD: [", join(' ', @cmd), "]\n";

    $SIG{PIPE} = 'IGNORE';
    $SIG{CHLD} = \&handle_sigchild;

    my $child_exited = 0;
    sub handle_sigchild
    {
        $debug && print "got SIGCHLD!!!\n";
        $child_exited = 1;
    }

    $err_fh = FileHandle->new;    # only reader and writer are auto-gen'd
    my $pid = open3($write_fh, $read_fh, $err_fh, @cmd);

    # read output until EOF
    my ($rout, $rin, $wout, $win, $eout, $ein);
    $rin = $ein = '';
    my $nclosed = 0;
    my ($buf, $ret, $out, $err);
    my ($out_open, $err_open) = (1, 1);

    my ($fileno_write, $fileno_err, $fileno_read);

    my $have_input = 0;

    if (@input) {
        $win = '';
        $fileno_write = fileno($write_fh);
        vec($win, $fileno_write, 1) = 1;
        $have_input = 1;
    } else {
        close($write_fh);
    }

    my $want_closed = 0;
    if (defined($read_fh)) {
        $fileno_read = fileno($read_fh);
        vec($rin, $fileno_read, 1) = 1;
        $want_closed++;
    }

    if (defined($err_fh)) {
        $fileno_err = fileno($err_fh);
        vec($ein, $fileno_err, 1) = 1;
        $want_closed++;
    }

    my $input_line;

    $debug && print "going into read loop...\n";
    while (!$child_exited && ($nclosed < $want_closed)) {
        $debug && print "\n**top of while**, nclosed=[$nclosed]\n";
        if ($have_input && !@input) {
            $debug && print "input exhausted. setting win=undef\n";
            $win = undef;
        }

        $debug && print "going into select...\n";
        my $nfound = select($rout=$rin, $wout=$win, $eout=$ein, undef);
        $debug && print "after select, nfound=[$nfound]\n";

        if ($nfound) {
            #---------------------------------------------
            # STDOUT
            #---------------------------------------------
            if (vec($rout, $fileno_read, 1)) {
                $debug && print "stdout has something...\n";
                $ret = sysread($read_fh, $buf, 512);
                $debug && print "read [$ret] bytes\n";
                if ($ret == 0) {
                    $nclosed++;
                    $debug && print "incrementing nclosed\n";
                    $out_open = 0;
                    $rin = undef;
                }
                $debug && print "STDOUT: [$buf]\n";
                $out .= $buf;
            }

            #---------------------------------------------
            # STDERR
            #---------------------------------------------
            if (vec($eout, $fileno_err, 1)) {
                $debug && print "stderr has something...\n";
                $ret = sysread($err_fh, $buf, 512);
                $debug && print "read [$ret] bytes\n";
                if ($ret == 0) {
                    $nclosed++;
                    $debug && print "incrementing nclosed\n";
                    $err_open = 0;
                }
                $debug && print "STDERR: [$buf]\n";
                $err .= $buf;
            }

            #---------------------------------------------
            # STDIN
            #---------------------------------------------
            if (vec($wout, $fileno_write, 1)) {
                $debug && print "stdin is ready for input...\n";
                $input_line = shift(@input) . "\n";
                $debug && print "INPUT: [$input_line]\n";
                $ret = syswrite($write_fh, $input_line);
                defined($ret) || die "write failed: $!\n";
                $debug && print "wrote [$ret] bytes\n";
            }
        }
    }

    defined($input_ref) && close($write_fh);
    defined($read_fh) && close($read_fh);
    defined($err_fh) && close($err_fh);

    waitpid($pid, 0);
    my $status = $?;
    $debug && print "waitpid returned status [$status]\n";

    return ($status/256, $out, $err);
}

1;
 

Ben Morrow

Quoth RU:
> I'm currently working on some cluster scripts which, among other things,
> need to start and stop oracle. The typical (/bin/sh) nasty cluster
> script does something like:
>
> #!/bin/sh
> su -m oracle -c "sqlplus / as sysdba"<<_EOF_
> startup
> _EOF_

Firstly, I know very little about Oracle, but my impression is that
sqlplus just ends up sending SQL to the server? In which case, you can
likely do the whole thing from Perl with DBI, and not bother trying to
script an external binary at all.

> I'm trying to provide the same (or better) functionality using more-or-less
> pure perl. I have come up with a subroutine ("run_command()") which
> I will include at the bottom. I'm using select to determine if there is
> output on STDOUT or STDERR, and if I can write to the process on STDIN
> (if I have input I'd like to send to the process). The problem is, I'm
> catching SIGCHLD, and when a SIGCHLD is caught, I stop looking for output
> from STDOUT and STDERR.

Why? Why not just wait() when you get a SIGCHLD, and read stdout and
stderr until they're empty? In principle there could be a full
pipe's-worth of data left to read when the writing process dies.
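
Roughly (untested; it assumes the pipes have been put into non-blocking
mode, which you need to do anyway - see below):

    # reap in the handler; let EOF on the pipes end the loop
    my $status;
    $SIG{CHLD} = sub { waitpid($pid, 0) > 0 and $status = $?; };

    while ($nclosed < $want_closed) {
        # same select/sysread loop: after the child dies you still
        # get whatever is buffered, and then a 0-byte read (EOF)
    }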

> I've tested the subroutine and it seems to work,
> but I'm slightly worried that it might be possible to lose output because
> of the interruption caused by SIGCHLD. You might wonder why I'm not just
> using Expect. I'm trying to put something minimal together that solves
> this problem for many cases, and Expect has too much administrative
> overhead for me. Anyhow, is there a better/more reliable way of ensuring
> I get all the output from the child process?

A canned solution to many problems like this is IPC::Run. It may well be
worth your while looking at it.
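
For instance, something like this (untested; the su command line is just
your original one):

    use IPC::Run qw(run);

    my $in = join '', map "$_\n", @input;
    my ($out, $err);
    run ['su', '-m', 'oracle', '-c', 'sqlplus / as sysdba'],
        \$in, \$out, \$err
            or die "command failed: $?";

IPC::Run does all the select/non-blocking-IO bookkeeping for you.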

> Also, you may notice that I keep a count of open output filedescriptors
> STDOUT and STDERR, and leave the while-loop when the count reaches zero.
> Unfortunately this does not work, as STDERR doesn't get closed
> (apparently) when the child process terminates, or at least the method I
> used doesn't notice it.

You probably just haven't emptied the pipe yet. If you'd kept reading,
it would have closed when you'd got all the data.

> Any comments?

(General comments follow, as well as those you asked for ;). Not all of
the advice below can be followed at once.)

> ----------------------------snip-----------------------------------------
> package ScriptLib;

It's probably worth avoiding top-level package names. The potential for
conflict is just too great. I tend to keep things under
BMORROW::project::*.

> use FileHandle;
> use IPC::Open3;
>
> sub run_command
> {
>     my ($cmdref, $su_user, $input_ref);
>
>     # parse args
>     while ($_ = shift) {
>         if (/^-cmd/) {
>             $cmdref = shift;
>         } elsif (/^-su/) {
>             $su_user = shift;
>         } elsif (/^-input/) {
>             $input_ref = shift;
>         }
>     }

This can be more clearly written

my %args = @_;

and then you use $args{-su} instead of $su_user throughout; or, if you
really want to unpack them into variables,

my ($cmdref, $su_user, $input_ref) = @args{ qw/-cmd -su -input/ };

>     my $debug = 1;
>
>     my @cmd = @{$cmdref};
>
>     if ($su_user) {
>         unshift(@cmd, 'su', '-m', $su_user, '-c');
>     }
>
>     my @input = defined($input_ref) ? @{$input_ref} : ();
>     my ($read_fh, $write_fh, $err_fh);
>
>     $debug && print "run_command()\n";

You should use warn (or carp) for diagnostics. It tells you where you
are, and it goes to STDERR.
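
For instance:

    $debug and warn "run_command()\n";

(and if you leave off the trailing "\n", warn appends the file and line
number for you).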

>     $debug && print "CMD: [", join(' ', @cmd), "]\n";
>
>     $SIG{PIPE} = 'IGNORE';
>     $SIG{CHLD} = \&handle_sigchild;
>
>     my $child_exited = 0;
>     sub handle_sigchild

Defining a named sub inside another named sub is a bad idea. It doesn't
quite do what you expect if you refer to a variable in the outer scope
(or, at any rate, it doesn't do what *I* expect :) ). I would just use
an anon sub here: after all, you don't use the name anywhere else.

$SIG{CHLD} = sub {
...;
};

(note the trailing semicolon)

>     {
>         $debug && print "got SIGCHLD!!!\n";
>         $child_exited = 1;
>     }
>
>     $err_fh = FileHandle->new;    # only reader and writer are auto-gen'd
>     my $pid = open3($write_fh, $read_fh, $err_fh, @cmd);

You need to check if open3 succeeded.

You need to set the filehandles to non-blocking mode. Otherwise your
reads below will block until they have a bufferful, and in the meanwhile
the process might be waiting for input from you.
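
Something like (untested):

    use IO::Handle;

    # open3 dies rather than returning false, so trap the exception
    my $pid = eval { open3($write_fh, $read_fh, $err_fh, @cmd) }
        or die "open3 failed: $@";

    # don't let sysread/syswrite stall while the child waits on us
    $_->blocking(0) for grep { defined } $write_fh, $read_fh, $err_fh;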

>     # read output until EOF
>     my ($rout, $rin, $wout, $win, $eout, $ein);
>     $rin = $ein = '';
>     my $nclosed = 0;
>     my ($buf, $ret, $out, $err);
>     my ($out_open, $err_open) = (1, 1);

What are these variables for? Apart from the fact that you don't seem to
use the values, in any case a filehandle knows if it's open or not. Ask
it with Scalar::Util::openhandle, or fileno in a case like this where
you know it's not going to be tied.

>     my ($fileno_write, $fileno_err, $fileno_read);
>
>     my $have_input = 0;
>
>     if (@input) {
>         $win = '';
>         $fileno_write = fileno($write_fh);
>         vec($win, $fileno_write, 1) = 1;

I would strongly advise against using select() directly. The IO::Select
module provides a much saner interface.
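
The shape is roughly (untested):

    use IO::Select;

    my $readers = IO::Select->new($read_fh, $err_fh);
    my $writers = IO::Select->new(@input ? $write_fh : ());

    my ($can_read, $can_write) =
        IO::Select->select($readers, $writers, undef);
    for my $fh (@$can_read) {
        # sysread from $fh; on EOF, $readers->remove($fh)
    }

No fileno(), no vec(), and no parallel $*in/$*out variables to keep in
sync.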

>         $have_input = 1;

This variable is useless. It is exactly equivalent to $win.

>     } else {
>         close($write_fh);
>     }
>
>     my $want_closed = 0;
>     if (defined($read_fh)) {

defined is the wrong test here. If you want a test at all, you want
openhandle from Scalar::Util, or simply fileno, which is equivalent in
this case.

>         $fileno_read = fileno($read_fh);
>         vec($rin, $fileno_read, 1) = 1;
>         $want_closed++;
>     }
>
>     if (defined($err_fh)) {
>         $fileno_err = fileno($err_fh);
>         vec($ein, $fileno_err, 1) = 1;
>         $want_closed++;
>     }
>
>     my $input_line;
>
>     $debug && print "going into read loop...\n";
>     while (!$child_exited && ($nclosed < $want_closed)) {

As I said above, you don't care about the child exiting. Just wait for
it (with 5.8's safe signals you might as well wait in the signal
handler) and carry on reading. The events you care about are the
filehandles closing.

This arrangement with $want_closed/$nclosed and all the separate
filehandle variables is nasty. Your basic problem is you have a family
of variables that belong in a structure, in this case probably a hash.
If you start with

my %fh;
...
my $pid = open3($fh{write}, $fh{read}, $fh{err}, @cmd);

and make sure you always delete a filehandle from the hash when you
close it then this loop becomes simply

while (keys %fh) {

which is much more pleasant. Of course, an IO::Select object will quite
happily take the place of that hash, as well as removing the need for
the related $*in and $*out variables.
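
Sketched out (untested, and leaving out the write side for brevity):

    my $sel = IO::Select->new($read_fh, $err_fh);
    my %buf_for = ($read_fh => \$out, $err_fh => \$err);

    while ($sel->count) {
        for my $fh ($sel->can_read) {
            my $n = sysread($fh, my $chunk, 4096);
            if (defined $n and $n == 0) {    # EOF: forget the handle
                $sel->remove($fh);
                close $fh;
            } elsif ($n) {
                ${ $buf_for{$fh} } .= $chunk;
            }
        }
    }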

>         $debug && print "\n**top of while**, nclosed=[$nclosed]\n";
>         if ($have_input && !@input) {

Here is the only place you use $have_input, and you can instead write

    if ($win and !@input) {

>             $debug && print "input exhausted. setting win=undef\n";
>             $win = undef;
>         }
>
>         $debug && print "going into select...\n";
>         my $nfound = select($rout=$rin, $wout=$win, $eout=$ein, undef);
>         $debug && print "after select, nfound=[$nfound]\n";

>         if ($nfound) {
>             #---------------------------------------------
>             # STDOUT
>             #---------------------------------------------
>             if (vec($rout, $fileno_read, 1)) {
>                 $debug && print "stdout has something...\n";
>                 $ret = sysread($read_fh, $buf, 512);

You don't check if sysread failed. Admittedly the most likely failure is
EAGAIN, which you must ignore, but checking for other errors is good
practice.
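
That is (untested):

    use Errno qw(EAGAIN EWOULDBLOCK);

    $ret = sysread($read_fh, $buf, 512);
    unless (defined $ret) {
        next if $!{EAGAIN} or $!{EWOULDBLOCK};    # no data right now
        die "read from child failed: $!";
    }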

>                 $debug && print "read [$ret] bytes\n";
>                 if ($ret == 0) {
>                     $nclosed++;
>                     $debug && print "incrementing nclosed\n";
>                     $out_open = 0;
>                     $rin = undef;
>                 }
>                 $debug && print "STDOUT: [$buf]\n";
>                 $out .= $buf;
>             }
>
>             #---------------------------------------------
>             # STDERR
>             #---------------------------------------------
>             if (vec($eout, $fileno_err, 1)) {
>                 $debug && print "stderr has something...\n";
>                 $ret = sysread($err_fh, $buf, 512);
>                 $debug && print "read [$ret] bytes\n";
>                 if ($ret == 0) {
>                     $nclosed++;
>                     $debug && print "incrementing nclosed\n";
>                     $err_open = 0;
>                 }
>                 $debug && print "STDERR: [$buf]\n";
>                 $err .= $buf;
>             }
>
>             #---------------------------------------------
>             # STDIN
>             #---------------------------------------------
>             if (vec($wout, $fileno_write, 1)) {
>                 $debug && print "stdin is ready for input...\n";
>                 $input_line = shift(@input) . "\n";
>                 $debug && print "INPUT: [$input_line]\n";
>                 $ret = syswrite($write_fh, $input_line);
>                 defined($ret) || die "write failed: $!\n";

Here you check too zealously: EAGAIN is not an 'error', it simply tells
you the pipe is full.
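
Something like (untested, with Errno loaded as above; note that syswrite
can also write fewer bytes than you asked for, which a real
implementation has to handle too):

    $ret = syswrite($write_fh, $input_line);
    if (!defined $ret) {
        $!{EAGAIN} or die "write failed: $!";
        chomp $input_line;
        unshift @input, $input_line;    # pipe full; retry it later
    }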

>                 $debug && print "wrote [$ret] bytes\n";
>             }
>         }
>     }
>
>     defined($input_ref) && close($write_fh);

This line is wrong. You may have closed it already if you'd run out of
input.

>     defined($read_fh) && close($read_fh);
>     defined($err_fh) && close($err_fh);

If you'd kept them in a hash:

close $_ for values %fh;

or if you'd used IO::Select

close $_ for $select->handles;

A set of variables with similar names that you keep doing similar things
to is a sure sign that you need to use a data structure.

Ben
 

xhoster

Ben Morrow said:
> Firstly, I know very little about Oracle, but my impression is that
> sqlplus just ends up sending SQL to the server? In which case, you can
> likely do the whole thing from Perl with DBI, and not bother trying to
> script an external binary at all.

When used in this way, sqlplus does something special, and is not simply a
client connection. It starts up a database that wasn't previously running.
I'm pretty sure that DBI would be unable to do this.

Having looked at the attempted Perl code (snipped) I'm now trying to figure
out why the above is nasty. It looks remarkably nice to me.

Xho

--
-------------------- http://NewsReader.Com/ --------------------
The costs of publication of this article were defrayed in part by the
payment of page charges. This article must therefore be hereby marked
advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate
this fact.
 

all mail refused

> Hi Folks,
>
> I'm currently working on some cluster scripts which, among other things,
> need to start and stop oracle. The typical (/bin/sh) nasty cluster
> script does something like:
>
> #!/bin/sh
> su -m oracle -c "sqlplus / as sysdba"<<_EOF_
> startup
> _EOF_

Nasty is right - that clutters syslog with potentially vast numbers
of useless entries of su from root to oracle (discuss this opinion
with your security bods if necessary).

Do the $( $) $< $> thing in Perl instead of calling su.
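
Untested, but along these lines (while still root, change group first,
then uid, or you lose the right to change the gid):

    my ($uid, $gid) = (getpwnam('oracle'))[2, 3];
    defined $uid or die "no such user: oracle";

    $( = $gid;
    $) = "$gid $gid";    # effective gid plus supplementary groups
    $< = $uid;
    $> = $uid;
    $> == $uid or die "couldn't become oracle: $!";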



If you kept the record of open fd's structured in relation
to their process:

    my %my_procs_with_fds = (
        100 => [ 0, 1, 2 ],
        500 => [ 0, 1, 2 ],
    );

couldn't you scratch all fd's for a particular process even
if you don't see it close?
 

Darren Dunham

RU said:
> Hi Folks,
>
> I'm currently working on some cluster scripts which, among other things,
> need to start and stop oracle. The typical (/bin/sh) nasty cluster
> script does something like:
>
> #!/bin/sh
> su -m oracle -c "sqlplus / as sysdba"<<_EOF_
> startup
> _EOF_
>
> -snip-
>
> I'm trying to provide the same (or better) functionality using more-or-less
> pure perl. I have come up with a subroutine ("run_command()") which
> I will include at the bottom. I'm using select to determine if there is
> output on STDOUT or STDERR, and if I can write to the process on STDIN

Do you need to differentiate between STDOUT/STDERR, or is it enough to
just receive all the data between them on a single filehandle?

If so, you could just sysread at the end until you get '0'. That would
guarantee you have read all the data.
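
Something like this (assuming both streams arrive on $read_fh):

    # once the child has exited, drain what's left in the pipe
    while (sysread($read_fh, my $chunk, 4096)) {
        $out .= $chunk;
    }
    # sysread returning 0 is EOF: nothing the child wrote is lost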

> Also, you may notice that I keep a count of open output filedescriptors
> STDOUT and STDERR, and leave the while-loop when the count reaches zero.
> Unfortunately this does not work, as STDERR doesn't get closed
> (apparently) when the child process terminates, or at least the method I
> used doesn't notice it. Any comments?

I'm not sure what you mean by 'doesn't get closed'. Filehandles are
closed by you explicitly. Perhaps you mean you're not detecting eof on
the read and asking it to close? That would imply that your sysread is
either returning data, or it's blocking. Is that correct? If so, which
is occurring?
 
