Working around buffering issues when writing to pipes

sven _

Keywords: subprocess stdout stderr unbuffered pty tty pexpect flush setvbuf

I'm trying to find a solution to <URL:http://bugs.python.org/issue1241>. In
short: unless specifically told not to, normal C stdio will use full output
buffering when connected to a pipe. It will use default (typically
unbuffered) output when connected to a tty/pty.

This is why subprocess.Popen() won't work with the following program when
stdout and stderr are pipes:

#include <stdio.h>
#include <unistd.h>
int main()
{
    int i;
    for(i = 0; i < 5; i++)
    {
        printf("stdout ding %d\n", i);
        fprintf(stderr, "stderr ding %d\n", i);
        sleep(1);
    }
    return 0;
}

Then

subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

is read using polling, but the pipes don't return any data until the
program is done. As expected, specifying a bufsize of 0 or 1 in the Popen()
call has no effect, since that argument only controls buffering on the
parent's side of the pipes. See the end of this mail for example Python
scripts.

Unfortunately, changing the child program to flush stdout/stderr explicitly,
or to call setvbuf(stdout, NULL, _IONBF, 0), is an undesirable workaround in
this case.

I went with pexpect and it works well, except that I must handle stdout and
stderr separately. There seems to be no way to do this with pexpect, or am
I mistaken?

Are there alternative ways of solving this? Perhaps some way of making the
child program believe it is writing to a tty?

I'm on Linux, and portability to other platforms is not a requirement.

s


#Test script using subprocess:
import subprocess

cmd = '/tmp/slow' # The C program shown above
print 'Starting...'
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
while True:
    lineout = proc.stdout.readline()
    lineerr = proc.stderr.readline()
    exitcode = proc.poll()
    if (not lineout and not lineerr) and exitcode is not None: break
    if lineout: print lineout.strip()
    if lineerr: print 'ERR: ' + lineerr.strip()
print 'Done.'
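
For comparison, here is an editorial sketch (not part of the original post) of
the same loop written with select() and os.read(), so the parent never blocks
on one pipe while data is waiting on the other. The child's stdio buffering is
untouched, so its stdout still arrives only when the child flushes or exits
(stderr, being unbuffered by default, trickles in as it is written):

#Variant test script using select() instead of blocking readline() calls:
import os
import select
import subprocess

cmd = '/tmp/slow' # The C program shown above
print 'Starting...'
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
prefixes = {proc.stdout.fileno(): '', proc.stderr.fileno(): 'ERR: '}
while prefixes:
    readable, _, _ = select.select(list(prefixes), [], [])
    for fd in readable:
        data = os.read(fd, 4096)     # never blocks after select()
        if not data:                 # EOF on this pipe
            del prefixes[fd]
            continue
        for line in data.splitlines():
            print prefixes[fd] + line
proc.wait()
print 'Done.'

(A more careful reader would also buffer partial lines between reads.)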


#Test script using pexpect, merging stdout and stderr:
import sys
import pexpect

cmd = '/home/sveniu/dev/logfetch/bin/slow'
print 'Starting...'
child = pexpect.spawn(cmd, timeout=100, maxread=1, logfile=sys.stdout)
try:
    while True: child.expect('\n')
except pexpect.EOF: pass
print 'Done.'
 
Mark Wooding

sven _ said:

> In short: unless specifically told not to, normal C stdio will use
> full output buffering when connected to a pipe. It will use default
> (typically unbuffered) output when connected to a tty/pty.

Wrong. Standard output to a terminal is typically line-buffered.
(Standard error is not fully buffered by default, per the ISO C standard.)

> This is why subprocess.Popen() won't work with the following program
> when stdout and stderr are pipes:

Yes, obviously.

> I went with pexpect and it works well, except that I must handle
> stdout and stderr separately. There seems to be no way to do this with
> pexpect, or am I mistaken?

You could do it by changing the command you pass to pexpect.spawn so
that it redirects its stderr to (say) a pipe. Doing this is messy,
though -- you'd probably need to set the pipe's write-end fd in an
environment variable or something.

It's probably better to use os.pipe and pty.fork.

As mentioned above, the inferior process's output will still be
line-buffered unless it does something special to change this.

-- [mdw]
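
For reference, here is a minimal editorial sketch (not code from the thread) of
the os.pipe()/pty.fork() approach suggested above, assuming the same /tmp/slow
test program. stdout goes through the pty, so the child's stdio switches to
line buffering; stderr goes through an ordinary pipe, which is enough because
stderr is not fully buffered by default:

#Sketch: separate stdout (via a pty) from stderr (via a plain pipe).
import os
import pty
import select

cmd = ['/tmp/slow'] # The C program shown above

err_r, err_w = os.pipe()     # pipe that will carry the child's stderr
pid, master = pty.fork()     # child's stdin/stdout/stderr become the pty slave
if pid == 0:
    # Child: point stderr at the pipe instead of the pty, then exec.
    os.close(err_r)
    os.dup2(err_w, 2)
    os.close(err_w)
    os.execv(cmd[0], cmd)

os.close(err_w)              # parent keeps only the read ends
prefixes = {master: '', err_r: 'ERR: '}
while prefixes:
    readable, _, _ = select.select(list(prefixes), [], [])
    for fd in readable:
        try:
            data = os.read(fd, 4096)
        except OSError:      # Linux ptys raise EIO once the child has exited
            data = ''
        if not data:
            del prefixes[fd]
            continue
        for line in data.splitlines():
            print prefixes[fd] + line
os.waitpid(pid, 0)
print 'Done.'

The pty's line discipline turns the child's \n into \r\n on output;
splitlines() hides that here, and again a more careful reader would buffer
partial lines between reads.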
 
