Asynchronous forking(?) processes on Windows

Tim

O.K., I'm starting to lose it. I've read a lot of threads on this list
regarding forking processes on Windows. I'm still having problems even
with the info I have found. I'm asking for some direction.

I am trying to spawn several child processes and execute the same
command on each child. I want to reap (read) the results of each of
these children and wait for each to end in their own order. The
command I am executing can potentially send out a lot of output. I
execute the command using backticks (`) in order to get the output.

The following Perl code will work to a certain extent on *NIX platforms
but will lock up / hang on Windows (XP to be specific). It appears to
have the problem with the pipe but I'm not entirely sure what's going
on. I am using the latest ActiveState Perl.

Bottom line... does anyone have a chunk of Perl that demonstrates what
I'm trying to do but may be more robust in the Windows environment? If
you can help me figure this out I will sing your praises while dancing
in my cube. Thanks - Tim

### snippet that needs rewriting / robustifying ###

$cmd = "p4 -p mymachine.com:50720 -c client3 info";

my $nchildren = 7;
my ($active, $ichild, $pid, %rfhs, $rfh);

$active = 0;
for( $ichild = $nchildren; $ichild > 0; $ichild-- )
{
    pipe RFH, WFH;

    if( $pid = fork ) {
        print "Forked pid $pid.\n";
        open $rfhs{ $pid }, "<&RFH";
        close WFH;
        $active++;
    } elsif( defined $pid ) {
        close RFH;
        print WFH `$cmd 2>&1`;
        close WFH;
        exit;
    } else {
        print "fork failed!\n";
        exit 1;
    }
}

while( $active > 0 ) {
    $pid = wait;
    $rfh = $rfhs{ $pid };
    print "\n$pid completed. Its output is:\n", <$rfh>;
    close $rfh;
    $active--;
}
 
xhoster

Tim said:
O.K., I'm starting to lose it. I've read a lot of threads on this list
regarding forking processes on Windows. I'm still having problems even
with the info I have found. I'm asking for some direction.

I am trying to spawn several child processes and execute the same
command on each child. I want to reap (read) the results of each of
these children and wait for each to end in their own order. The
command I am executing can potentially send out a lot of output. I
execute the command using backticks (`) in order to get the output.

The following Perl code will work to a certain extent on *NIX platforms

I suspect this is simply because your code hasn't reached the point of
deadlock on those platforms. On my linux system, that point is reached
when the output of any one process exceeds 4096 bytes. So I'll assume
the problem is the same on windows, only the size limit is different...

for( $ichild = $nchildren; $ichild > 0; $ichild-- )
{
pipe RFH, WFH;

if( $pid = fork ) {
print "Forked pid $pid.\n";
open $rfhs{ $pid }, "<&RFH";

I think it would be better just to use lexicals, then use a simple
assignment rather than a dup.
close WFH;
$active++;
} elsif( defined $pid ) {
close RFH;
print WFH `$cmd 2>&1`;
close WFH;
exit;

So here, the child refuses to exit until the print and close have been
completed. Unless the thing to be printed fits into a single OS buffer,
the print will not finish, and it will not exit, until the other side of
the pipe has started reading.

while( $active > 0 ) {
$pid = wait;

And here, the parent refuses to start reading until after the child has
exited. Bam! Deadlock.

If I replace those two lines with:

foreach my $pid (keys %rfhs) {

then things work as they should. (It is not guaranteed to read from
the processes in the order in which they "finish", but that doesn't seem to
matter for the current case.)
$rfh = $rfhs{ $pid };
print "\n$pid completed. Its output is:\n", <$rfh>;
close $rfh;
$active--;
}

Xho
 
Tim

I think it would be better just to use lexicals, then use a simple
assignment rather than a dup.

Hi Xho

I have just finished dancing around my cube and singing your praises.
The foreach statement did indeed fix my problem. Regarding your
recommendation to use lexicals for the filehandles... what would the
open statement look like without using the dup? I guess you can now
tell that I'm a rookie on lexicals. I've mostly used global
filehandles.

Once again.. thanks for the help. Tim
 
xhoster

Tim said:
Hi Xho

I have just finished dancing around my cube and singing your praises.
The foreach statement did indeed fix my problem.

Great. It is nice to know that at least some of my experience
translates to windows.
Regarding your
recommendation to use lexicals for the filehandles... what would the
open statement look like without the dup?

pipe my($RFH, $WFH) or die $!;

if( $pid = fork ) {
    print "Forked pid $pid.\n";
    $rfhs{ $pid } = $RFH;

As a matter of style, I wouldn't really use caps to name the lexicals $RFH
and $WFH, but I wanted to make the fewest changes to your code.
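For reference, a complete runnable version of the lexical-filehandle approach might look like this (not code from the thread; the command is a stand-in):

```perl
use strict;
use warnings;

# pipe() autovivifies the two lexical handles, and the parent keeps
# the read end by plain assignment -- no open/dup needed.
my $cmd = qq{perl -le "print q{hello}"};   # stand-in command
my (%rfhs, %out);

for (1 .. 3) {
    pipe my ($RFH, $WFH) or die "pipe: $!";
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {                 # child
        close $RFH;
        print $WFH `$cmd 2>&1`;
        close $WFH;
        exit 0;
    }
    close $WFH;                      # parent
    $rfhs{$pid} = $RFH;              # simple assignment instead of dup
}

for my $pid (keys %rfhs) {
    my $rfh = $rfhs{$pid};
    $out{$pid} = join '', <$rfh>;
    close $rfh;
    waitpid $pid, 0;
    print "$pid: $out{$pid}";
}
```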

Xho
 
comp.llang.perl.moderated

...
The foreach statement did indeed fix my problem.

A possible bit of clarification may help:

The 'wait' is blocking, which means the
subsequent read was never reached; meanwhile
the child was stalled in its print, waiting
for the parent's read to drain the pipe.
Xho's solution reverses the call order of
read and wait, so the deadlock is eliminated
even when there is more than a buffer's
worth of writes.

(There's an asynchronous wait available
with POSIX and WNOHANG on Unix. Don't
know if it works on Win32 but I suspect not.)
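For what it's worth, that non-blocking wait might be sketched like this on Unix (a minimal illustration; WNOHANG behavior under Win32's emulated fork is not guaranteed):

```perl
use strict;
use warnings;
use POSIX ":sys_wait_h";   # exports WNOHANG

# Fork a few children that exit at their own pace.
my @kids;
for (1 .. 3) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) { sleep 1; exit 0 }   # child: pretend to do work
    push @kids, $pid;
}

# Non-blocking reap loop: waitpid(-1, WNOHANG) returns a pid if some
# child has exited, or 0 if children remain but none has exited yet.
my $reaped = 0;
while (@kids) {
    my $done = waitpid(-1, WNOHANG);
    if ($done > 0) {
        @kids = grep { $_ != $done } @kids;
        $reaped++;
    } else {
        select undef, undef, undef, 0.1;   # nap 100ms; don't busy-spin
    }
}
```

The parent is free to do other work between polls, which is the whole point of the asynchronous variant.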
 
Tim

Hi again

I appreciate all the help. I hate to resurrect this issue but my
manager re-clarified the requirements of this project and I'm back.
Just to give you an overview of what I'm trying to accomplish... I'm
creating a harness for performance testing. A server/master script
will send command requests via TCP to clients/slave daemons. These
slaves will spawn/fork off "n" number of these commands, which need
to run and finish at their own rate. The output of these commands
needs to get stored in some fashion.

Although Xho's "foreach" solution cured my problem, the reaping of the
pids is performed in a strict order (reading them until they're
finished). I understand the need for this in order to prevent a
deadlock, but my ultimate goal is to keep these processes as
asynchronous as possible (using wait?). Charles' comment on the
"asynchronous wait" seems like what I need, but this solution needs to
run on Windows.

Here's my question about an alternative... let's say I keep all the
processing of these children in the children and don't pipe anything
back to the parent. Instead I store the results of the children in a
global/shared hash. Can a single hash be operated on by several
children? I apologize if this is a newbie question. I'm finding myself
pushing the envelope of my Perl knowledge.

P.S. - Hope you all had a good 4th
 
Tim

As I'm thinking about it.. The parent is the only thing tying the
processes together. Aggregating the data needs to go through the
parent. The children wouldn't have any knowledge of a shared
structure. Time for another cup of coffee.
 
xhoster

Tim said:
Hi again

I appreciate all the help. I hate to resurrect this issue but my
manager re-clarified the requirements of this project and I'm back.
Just to give you an overview of what I'm trying to accomplish... I'm
creating a harness for performance testing.

If you only care about *testing* the performance, I would probably use a
completely different method, depending on what aspect of performance you
are testing. If you are only interested in how long it takes the child
to finish, then forget the pipes. Just have the child exit when it is
done, throwing away any output it would have generated in a non-test
environment.
A server/master script
will send command requests via TCP to clients/slave daemons. These
slaves will spawn/fork off "n" number of these commands, which need
to run and finish at their own rate.

Whose own rate? One master talks to several daemons, and each daemon
forks several processes. Is it the daemon's own rate, or the process's
own rate, that is required? We've seen code that reflects the process
communicating back to the daemon, but how/what does the daemon communicate
back to the master?
The output of these commands needs to
get stored in some fashion.

Unless the point of the test is to test how long it takes to store the
results, why do the results need to be stored?
Although Xho's "foreach" solution cured my problem, the reaping of the
pids is performed in a strict order (reading them until they're
finished). I understand the need for this in order to prevent a
deadlock, but my ultimate goal is to keep these processes as
asynchronous as possible (using wait?). Charles' comment on the
"asynchronous wait" seems like what I need, but this solution needs to
run on Windows.

Asynchronous waiting won't solve the deadlock issue. It only lets your
parent program do other things while it is waiting (which will be forever
in a deadlock). But if the parent has nothing to do while it is waiting,
there is no point.

You could use IO::Select (I've never used it on Windows) to determine which
child is ready to be read at any given time. Once you have read a child's
pipe up to the eof, then you can wait for that child (possibly
asynchronously).
Here's my question about an alternative... let's say I keep all the
processing of these children in the children and don't pipe anything
back to the parent. Instead I store the results of the children in a
global/shared hash. Can a single hash be operated on by several
children?

There are modules that allow that to happen (like forks::shared), but I
generally try to stay away from them, especially when I am concerned about
performance.

Xho
 
Tim

Hi Xho

I thought you'd be tired of me by now ;) You have good questions. I'm
about to try and answer them within the comments below. In general,
the "harness" I'm in the midst of writing is a way to drive tests on
remote clients. As you indicated, the clients/slaves will spawn off
child processes which will do the actual work. We want to have the
ability to initiate different commands, shell scripts, C programs or
whatever on these spawned processes, which will then load our Perforce
server. Kinda like Mstone, if you know what that is. If it was just
the test execution "times" on the children, my life would be easier.
I'm trying to design the harness to handle the worst-case scenario for
tests that have yet to be written, which would be handing back the
begin time, the child process output data, and the end time. The
discussion continues below...


If you only care about *testing* the performance, I would probably use a
completely different method, depending on what aspect of performance you
are testing. If you are only interested in how long it takes the child
to finish, then forget the pipes. Just have the child exit when it is
done, throwing away any output it would have generated in a non-test
environment.

If I could only wish. Sure, the "times" are my primary interest;
however, the potential for requiring the output of the children is
inevitable. It looks like I'll need the pipes. It's either that or
creating temp files to store the STDOUT of the child processes and
then picking it up later. I already prototyped the temp file idea on
Windows using the Win32::Process module's Create call and dup'ed STDOUT
filehandles. That would be my last resort.
Whose own rate? One master talks to several daemons, and each daemon
forks several processes. Is it the daemon's own rate, or the process's
own rate, that is required? We've seen code that reflects the process
communicating back to the daemon, but how/what does the daemon communicate
back to the master?

The individual rate of each spawned process. The master sends off a TCP
request to the client daemon and leaves the connection open for a
response. The daemon spawns off the children which run the tests.
Upon completion, the daemon sends a return TCP response to the master
with the status of the children's tests (times? data?).
Unless the point of the test is to test how long it takes to store the
results, why do the results need to be stored?

I think this is explained above.
Asynchronous waiting won't solve the deadlock issue. It only lets your
parent program do other things while it is waiting (which will be forever
in a deadlock). But if the parent has nothing to do while it is waiting,
there is no point.

Yeah, I just found this out. The asynchronous wait *is* available for
Windows (they say) but it just hung on me.

You could use IO::Select (I've never used it on Windows) to determine which
child is ready to be read at any given time. Once you have read a child's
pipe up to the eof, then you can wait for that child (possibly
asynchronously).

When I first started this project I "borrowed" a chunk of code from:

http://www.wellho.net/solutions/perl-controlling-multiple-asynchronous-processes-in-perl.html

which utilizes select and signals. It worked great on UNIX but failed
on Windows. It looks like the USR1 signals weren't happening on
Windows. :(

There are modules that allow that to happen (like forks::shared), but I
generally try to stay away from them, especially when I am concerned about
performance.

I need to stick with resident Perl modules. We don't want the
requirement of installing additional packages unless it's pure Perl
and instantly portable. I'm not asking for miracles at this point. As
a matter of fact, just having the opportunity to tell someone like
yourself where things are at has helped me think this through. If you
have any other ideas I'm ready to listen. Thanks Xho

Tim
 
xhoster

Tim said:
If I could only wish. Sure, the "times" are my primary interest;
however, the potential for requiring the output of the children is
inevitable. It looks like I'll need the pipes. It's either that or
creating temp files to store the STDOUT of the child processes and
then picking it up later.

That would have been my second choice--dump the output into temp files,
then once the stress test is done, go through them at your leisure. If you
were going to return the values into a hash, then you must already have
a way to generate the unique keys to be used in that hash, so using
them as file-names instead should be easy enough. Of course, if the Perl
tool that is currently being developed for stress testing purposes will
eventually be used for other purposes, and the other purposes don't have a
"sort through the files at your leisure" phase, then I can see why you
wouldn't use this as your first choice--you might as well have one code
base for both cases.
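That temp-file scheme might be sketched like this (the per-pid file naming and the stand-in command are assumptions, not code from the thread):

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# Each child captures its own output and writes it to a file named
# after its pid; the parent reaps everyone first and reads the files
# at its leisure afterward -- no pipes, so no deadlock.
my $dir = tempdir(CLEANUP => 1);
my $cmd = qq{perl -le "print q{some output}"};   # stand-in command

my @pids;
for (1 .. 3) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        my $out = `$cmd 2>&1`;
        open my $fh, '>', "$dir/$$.out" or die "open: $!";
        print $fh $out;
        close $fh;
        exit 0;
    }
    push @pids, $pid;
}

waitpid $_, 0 for @pids;             # let them all finish first
for my $pid (@pids) {
    open my $fh, '<', "$dir/$pid.out" or die "open: $!";
    print "output of $pid:\n", <$fh>;
    close $fh;
}
```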
When I first started this project I "borrowed" a chunk of code from:

http://www.wellho.net/solutions/perl-controlling-multiple-asynchronous-processes-in-perl.html

which utilizes select and signals.

Maybe I just don't understand it, but that code looks awful. They just
loop over handles doing "select" one handle at a time. The point of select
is that you stuff into it all the handles you are interested in, and then
make one call and it will let you know when *any* of these handles is ready
for reading. If they did it that way, there would be no reason to use
USR1 signals, as the USR1 signal is only used to notify the parent "at
least one child just printed, so there is something there to read." When
used correctly, select inherently tells you when there is something to read
(or when the other end of a handle has been closed). But I wouldn't bother
with select anyway, I always use the IO::Select wrapper. There are some
examples in perldoc IO::Select, and I'm sure you can find examples of
its use elsewhere as well.

You would have to change your hash structure to do this--currently you have
pids mapping to file handles, but IO::Select can_read gives you back
handles, not pids.

There are two fundamental ways to handle the select. One is to
say that the child will not print anything until just before it is done,
at which point it will very rapidly print everything it has, then exit.
In this case, once the child's file handle becomes readable, you read
everything on it, then close the handle. The other way is to say that the
child might print slowly, so each time a handle becomes readable, you only
read (sysread, actually) as much as you can without blocking, then come
back later to read more. I'll illustrate only the first, simpler, way:

use strict;
use warnings;
use IO::Select;
my $cmd = qq{perl -le "print q{x}x12"};

my $nchildren = 10;
my %rfhs;

my $s = IO::Select->new();
for( my $ichild = $nchildren; $ichild > 0; $ichild-- )
{
    pipe my($RFH, $WFH) or die $!;
    my $pid = fork;
    if( $pid ) {
        print "Forked pid $pid.\n";
        $rfhs{ $RFH } = $pid;
        $s->add($RFH);
        close $WFH or die $!;
    } elsif( defined $pid ) {
        close $RFH or die $!;
        print $WFH `$cmd 2>&1`;
        close $WFH or die $!;
        exit;
    } else {
        print "fork failed!\n";
        exit 1;
    }
}

while ($s->handles) {
    foreach my $rfh ( $s->can_read() ) {
        my $pid = $rfhs{$rfh};
        print "\n$pid completed. Its output is:\n", <$rfh>;
        $s->remove($rfh);
        close $rfh;
        waitpid $pid, 0;
    }
}

Again, I've done this in a way to make as few changes as feasible
to your original code.
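The second, incremental way could be sketched as follows (a hedged illustration, not code from the thread: each time a handle is readable we sysread one chunk into a per-handle buffer, and only finish a child once sysread reports EOF):

```perl
use strict;
use warnings;
use IO::Select;

my $cmd = qq{perl -le "print q{x} x 12"};   # stand-in command
my $nchildren = 4;

my (%pid_of, %buf);
my $s = IO::Select->new;
for (1 .. $nchildren) {
    pipe my ($RFH, $WFH) or die "pipe: $!";
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {                 # child
        close $RFH;
        print $WFH `$cmd 2>&1`;
        close $WFH;
        exit 0;
    }
    close $WFH;                      # parent
    $pid_of{$RFH} = $pid;            # stringified handle as hash key
    $buf{$RFH}    = '';
    $s->add($RFH);
}

while ($s->handles) {
    for my $rfh ($s->can_read) {
        my $n = sysread $rfh, my $chunk, 4096;
        if ($n) {
            $buf{$rfh} .= $chunk;    # partial output; come back later
        } else {                     # 0 = EOF (undef would be an error)
            my $pid = $pid_of{$rfh};
            print "$pid completed. Its output is:\n$buf{$rfh}";
            $s->remove($rfh);
            close $rfh;
            waitpid $pid, 0;
        }
    }
}
```

A slow-printing child no longer stalls the loop: the parent just banks whatever bytes are available and moves on to the next ready handle.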

Xho
 
Tim

Hi Xho

Once again and again and again .. thx.
You are indeed a unique individual who for some reason persists in
helping me through this adventure. And yes, it has been an adventure.

I delayed my response because we (my boss and I) wanted to review
where things are with this project. If it wasn't for the Windows
requirement I'd say we'd be in good shape. However Perl on Windows
doesn't seem to operate at the same level of efficiency and
functionality as the *NIX platforms. Go figure... ;)

I ran your latest script which implements the select statement. It
seems to work nicely on OSX and Linux but locks up on Windows. On
Windows, all the children *do* execute completely but the output is
never released to (or read by?) the parent. As a matter of fact, the
STDOUT from some of the "Forked pid $pid" statements never gets
displayed, considering there are 10 children getting spawned. See below.


C:\Documents and Settings\tbrazil\Desktop>perl xho.pl
Forked pid -1556.
Forked pid -236.
Forked pid -2868.
Forked pid -2296.
Forked pid -568.
Forked pid -1932.
Forked pid -472.                  <--------- only 7 children are shown
Terminating on signal SIGINT(2)   <--------- I control-C'ed after the hang
Attempt to free unreferenced scalar: SV 0x2fd808, Perl interpreter: 0x281028.
Terminating on signal SIGINT(2)
Attempt to free unreferenced scalar: SV 0x2fd808, Perl interpreter: 0x281028.

C:\Documents and Settings\tbrazil\Desktop>perl xho.pl
Forked pid -1528.
Forked pid -1848.
Forked pid -2376.
Forked pid -712.
Forked pid -1076.
Forked pid -2516.                 <------- only 6 are displayed here
                                  <-------- now we are in "hang city"

The reason why I don't see the full 10 processes is probably due to the
buffer not getting flushed. However, the parent never displays the
piped "data" info from the children.

We thought we might try to store the results of each child in an
array so that the child processes can end at their own rate and then
collect the process data subsequent to the run. Here's the code.

my $nchildren = 7;
for( $ichild = $nchildren; $ichild > 0; $ichild-- )
{
    pipe RFH, WFH;

    if( $pid = fork )
    {
        #
        # Only executed in the parent.
        #
        print "Forked pid $pid.\n";
        open $rfhs{ $pid }, "<&RFH";
        close WFH;
    }
    elsif( defined $pid )
    {
        #
        # Only executed in the children.
        #
        close RFH;
        @output = `$cmd 2>&1`;  # here's the difference
        print WFH @output;      # sorry I didn't add the lexical FH's yet
        print WFH "Exit status is ", $? >> 8, ".\n";
        close WFH;
        exit;
    }
    else
    {
        print "fork failed!\n";
        exit 1;
    }
}

close RFH;
foreach $pid ( keys %rfhs )
{
    $rfh = $rfhs{ $pid };
    print "\n$pid completed. Its output is:\n", <$rfh>;
    close $rfh;
    waitpid $pid, 0;
}

Ultimately I end up with the same problem I started with. It can't
handle a large amount of data without locking up. BTW, I discovered
that you can only spawn 66 forks on Windows before it fails to fork.
It must be a limitation. It consistently fails after 66.

In summary, I wanted to let you know where things stand with this
email. I do not expect any more help from you; however, I'm always up
for any last pointers. At this point (actually a few points back) you
have gone above and beyond. As a side note, I am a docent on Alcatraz
every other weekend. If you are ever in the SF area of CA you should look
me up and I'll give you a tour. I need to figure out how to give you
my phone number without posting it here. You can always call Perforce
and ask for Tim Brazil.

Thx - Tim
 
