Dialog with a process via subprocess.Popen blocks forever

Discussion in 'Python' started by bayer.justin@googlemail.com, Feb 28, 2007.

  1. Guest

    Hi,

    I am trying to communicate with a subprocess via the subprocess
    module. Consider the following example:

    >>> from subprocess import Popen, PIPE
    >>> Popen("""python -c 'input("hey")'""", shell=True)

    <subprocess.Popen object at 0x729f0>
    >>> hey


    Here "hey" is immediately printed to the stdout of my interpreter; I did
    not type it in. But I want to read the output into a string, so I do

    >>> x = Popen("""python -c 'input("hey\n")'""", shell=True, stdout=PIPE, bufsize=2**10)
    >>> x.stdout.read(1)

    # blocks forever

    Is it possible to read from and write to the std streams of a
    subprocess? What am I doing wrong?

    Regards,
    -Justin
    , Feb 28, 2007
    #1

  2. <> wrote:


    > Is it possible to read to and write to the std streams of a
    > subprocess? What am I doing wrong?


    I think this problem lies deeper - there have been a lot of
    complaints about blocking and data getting stuck in pipes
    and sockets...

    I have noticed that the Python file objects seem to be
    inherently half duplex, but I am not sure if it is Python
    or the underlying OS. (Suse 10 in my case)

    You can fix it by making the file non-blocking using the fcntl
    module, but then all your accesses have to be wrapped in
    try - except clauses.

    It may be worth making some sort of FAQ on this
    subject, as it appears from time to time.

    The standard advice has been to use file.flush()
    after file.write(), but if you are threading and
    have called file.read(n), then the flushing does
    not help - this is why I say that the file object
    seems to be inherently half duplex.

    It makes perfect sense, of course, if the file is a
    real disk file, as you have to finish the read before
    you can move the heads to do the write - but for
    pipes, sockets and RS-232 serial lines it does not
    make so much sense.

    Does anybody know where it comes from -
    Python, the various OSses, or C?

    - Hendrik
    Hendrik van Rooyen, Mar 1, 2007
    #2

  3. Guest

    Hi,

    Thanks for your answer. I had a look into the fcntl module and tried
    to unlock the output-file, but

    >>> fcntl.lockf(x.stdout, fcntl.LOCK_UN)

    Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    IOError: [Errno 9] Bad file descriptor

    I wonder why it does work with sys.stdin. It's really a pity; it's
    the first time Python does not work as expected. =/

    Flushing the stdin did not help either.

    Regards,
    -Justin
    , Mar 1, 2007
    #3
  4. On Wed, 28 Feb 2007 18:27:43 -0300, <> wrote:

    > Hi,
    >
    > I am trying to communicate with a subprocess via the subprocess
    > module. Consider the following example:
    >
    >>>> from subprocess import Popen, PIPE
    >>>> Popen("""python -c 'input("hey")'""", shell=True)

    > <subprocess.Popen object at 0x729f0>
    >>>> hey

    >
    > Here hey is immediately print to stdout of my interpreter, I did not
    > type in the "hey". But I want to read from the output into a string,
    > so I do
    >
    >>>> x = Popen("""python -c 'input("hey\n")'""", shell=True, stdout=PIPE,
    >>>> bufsize=2**10)
    >>>> x.stdout.read(1)

    > # blocks forever

    Blocks, or is the child process waiting for you to input something in
    response?

    > Is it possible to read to and write to the std streams of a
    > subprocess? What am I doing wrong?


    This works for me on Windows XP. Note that I'm using a tuple with
    arguments, and raw_input instead of input (just to avoid a traceback on
    stderr)

    py> x=Popen(("python", "-c", "raw_input('hey')"), shell=True, stdout=PIPE)
    py> x.stdout.read(1)
    1234
    'h'
    py> x.stdout.read()
    'ey'

    I typed that 1234 (response to raw_input).

    You may need to use python -u, or redirect stderr too, but what is your
    real problem?
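
    For example, something along these lines (a sketch, untested here; the -u
    flag just turns off the child interpreter's own stdout buffering):

    from subprocess import Popen, PIPE

    x = Popen(["python", "-u", "-c", "print raw_input('hey ')"],
              stdin=PIPE, stdout=PIPE, stderr=PIPE)
    x.stdin.write("1234\n")       # answer the prompt ourselves
    x.stdin.flush()
    print x.stdout.read()         # the prompt plus the echoed answer, e.g. "hey 1234"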

    --
    Gabriel Genellina
    Gabriel Genellina, Mar 1, 2007
    #4
  5. Guest

    Okay, here is what I want to do:

    I have a C program that I have the source for and want to hook into
    from Python. What I want to do is run the C program as a subprocess.
    The C program gets its "commands" from its stdin and sends its state
    to stdout. Thus I have some kind of dialogue over stdin/stdout.

    So, once I start the C program from the shell, I immediately get its
    output in my terminal. If I start it from a subprocess in Python and
    use Python's sys.stdin/sys.stdout as the subprocess' stdout/stdin, I
    also get it immediately.

    BUT if I use PIPE for both (so I can .write() to the stdin and .read()
    from the subprocess' stdout stream (better: file descriptor)), reading
    from the subprocess' stdout blocks forever. If I write something onto
    the subprocess' stdin that causes it to somehow proceed, I can read
    from its stdout.

    Thus a useful dialogue is not possible.
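
    In code, what I am doing looks roughly like this (a sketch; "./cprogram"
    and the command text are placeholders for my real program):

    from subprocess import Popen, PIPE

    p = Popen(["./cprogram"], stdin=PIPE, stdout=PIPE)

    p.stdin.write("first command\n")
    p.stdin.flush()                  # make sure the command reaches the child
    state = p.stdout.readline()      # this is the read that blocks forever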

    Regards,
    -Justin
    , Mar 1, 2007
    #5
  6. wrote:
    > Okay, here is what I want to do:
    >
    > I have a C Program that I have the source for and want to hook with
    > python into that. What I want to do is: run the C program as a
    > subprocess.
    > The C programm gets its "commands" from its stdin and sends its state
    > to stdout. Thus I have some kind of dialog over stdin.
    >
    > So, once I start the C Program from the shell, I immediately get its
    > output in my terminal. If I start it from a subprocess in python and
    > use python's sys.stdin/sys.stdout as the subprocess' stdout/stdin I
    > also get it immediately.
    >
    > BUT If I use PIPE for both (so I can .write() on the stdin and .read()
    > from the subprocess' stdout stream (better: file descriptor)) reading
    > from the subprocess stdout blocks forever. If I write something onto
    > the subprocess' stdin that causes it to somehow proceed, I can read
    > from its stdout.
    >
    > Thus a useful dialogue is not possible.
    >
    > Regards,
    > -Justin
    >
    >
    >

    Have you considered using pexpect: http://pexpect.sourceforge.net/ ?
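
    It drives the child through a pseudo-terminal (Unix only), so the usual
    pipe-buffering problem goes away. Roughly (a sketch; the program name and
    the prompt strings are made up):

    import pexpect

    child = pexpect.spawn("./cprogram")   # hypothetical interactive C program
    child.expect("READY")                 # wait for whatever state line it prints
    child.sendline("some command")
    child.expect("DONE")
    print child.before                    # everything printed between the two matches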

    George
    George Trojan, Mar 1, 2007
    #6
  7. On Thu, 01 Mar 2007 14:42:00 -0300, <> wrote:

    > BUT If I use PIPE for both (so I can .write() on the stdin and .read()
    > from the subprocess' stdout stream (better: file descriptor)) reading
    > from the subprocess stdout blocks forever. If I write something onto
    > the subprocess' stdin that causes it to somehow proceed, I can read
    > from its stdout.


    On http://docs.python.org/lib/popen2-flow-control.html there are some
    notes on possible flow control problems you may encounter.
    If you have no control over the child process, it may be safer to use a
    different thread for reading its output.
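
    A minimal sketch of that idea (the child command is hypothetical, and
    error handling and shutdown are omitted):

    import threading, Queue
    from subprocess import Popen, PIPE

    def reader(pipe, queue):
        # all blocking reads happen here, so the main thread never stalls on them
        for line in iter(pipe.readline, ''):
            queue.put(line)

    p = Popen(["./cprogram"], stdin=PIPE, stdout=PIPE)   # hypothetical child
    q = Queue.Queue()
    t = threading.Thread(target=reader, args=(p.stdout, q))
    t.setDaemon(True)
    t.start()

    p.stdin.write("some command\n")
    p.stdin.flush()
    print q.get(timeout=5)    # next line of output, or Queue.Empty after 5 seconds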

    --
    Gabriel Genellina
    Gabriel Genellina, Mar 1, 2007
    #7
  8. <> wrote:



    > Hi,
    >
    > Thanks for your answer. I had a look into the fcntl module and tried
    > to unlock the output-file, but
    >
    > >>> fcntl.lockf(x.stdout, fcntl.LOCK_UN)

    > Traceback (most recent call last):
    > File "<stdin>", line 1, in <module>
    > IOError: [Errno 9] Bad file descriptor
    >
    > I wonder why it does work with the sys.stdin It's really a pity, it's
    > the first time python does not work as expected. =/
    >
    > Flushing the stdin did not help, too.


    It's block, not lock, and one uses file.flush() after file.write(),
    so the stdin is the wrong side - you have to push, you can't pull...

    Here is the unblock function I use - it comes from the internet,
    possibly from this group, but I have forgotten who wrote it.

    # Some magic to make a file non blocking - from the internet

    import fcntl, os

    def unblock(f):
        """Given file 'f', sets its non-blocking flag to true."""
        fcntl.fcntl(f.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)

    hope this helps - note that the f is not the file's name but the
    thing you get when you write :

    f = open(...
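
    Once the pipe is unblocked, a read that has nothing to return raises an
    error instead of hanging, so you wrap it in try - except. Roughly (a
    sketch; I use cat here just as a stand-in for a child that echoes):

    import os, errno, time
    from subprocess import Popen, PIPE

    p = Popen(["cat"], stdin=PIPE, stdout=PIPE)
    unblock(p.stdout)

    p.stdin.write("hello\n")
    p.stdin.flush()
    time.sleep(0.1)                # give the child a moment to respond

    try:
        data = os.read(p.stdout.fileno(), 1024)   # returns whatever is there right now
    except OSError, e:
        if e.errno != errno.EAGAIN:
            raise
        data = ''                  # nothing available yet
    print repr(data)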

    - Hendrik
    Hendrik van Rooyen, Mar 2, 2007
    #8
  9. <> wrote:

    8<------------------
    > The C programm gets its "commands" from its stdin and sends its state
    > to stdout. Thus I have some kind of dialog over stdin.
    >
    > So, once I start the C Program from the shell, I immediately get its
    > output in my terminal. If I start it from a subprocess in python and
    > use python's sys.stdin/sys.stdout as the subprocess' stdout/stdin I
    > also get it immediately.


    so why don't you just write to your stdout and read from your stdin?

    >
    > BUT If I use PIPE for both (so I can .write() on the stdin and .read()


    This confuses me - I assume you mean write to the C program's stdin?

    > from the subprocess' stdout stream (better: file descriptor)) reading
    > from the subprocess stdout blocks forever. If I write something onto
    > the subprocess' stdin that causes it to somehow proceed, I can read
    > from its stdout.


    This sounds like the C program is getting stuck waiting for input...

    >
    > Thus a useful dialogue is not possible.
    >


    If you are both waiting for input, you have a Mexican standoff...

    And if you are using threads, and you have issued a .read() on
    a file, then a .write() to the same file, even followed by a .flush()
    will not complete until after the completion of the .read().

    So in such a case you have to unblock the file, and do the .read() in
    a try - except clause, to "free up" the "file driver" so that the .write()
    can complete.

    But I am not sure if this is in fact your problem, or if it is just normal
    synchronisation hassles...

    - Hendrik
    Hendrik van Rooyen, Mar 2, 2007
    #9
  10. Guest

    > If you are both waiting for input, you have a Mexican standoff...

    That is not the problem. The problem is that the buffers are not
    flushed correctly. It's a dialogue, so nothing complicated. But Python
    does not get what the subprocess sends to the subprocess' standard
    out - not every time, anyway.

    I'm quite confused, but hopefully will understand what's going on and
    come back here.
    , Mar 2, 2007
    #10
  11. writes:

    > So, once I start the C Program from the shell, I immediately get its
    > output in my terminal. If I start it from a subprocess in python and
    > use python's sys.stdin/sys.stdout as the subprocess' stdout/stdin I
    > also get it immediately.


    If stdout is connected to a terminal, it's usually line buffered, so the
    buffer is flushed whenever a newline is written.

    > BUT If I use PIPE for both (so I can .write() on the stdin and .read()
    > from the subprocess' stdout stream (better: file descriptor)) reading
    > from the subprocess stdout blocks forever. If I write something onto
    > the subprocess' stdin that causes it to somehow proceed, I can read
    > from its stdout.


    When stdout is not connected to a terminal, it's usually fully buffered,
    so that nothing is actually written to the file until the buffer
    overflows or until it's explicitly flushed.

    If you can modify the C program, you could force its stdout stream to be
    line buffered. Alternatively, you could call fflush on stdout whenever
    you're about to read from stdin. If you can't modify the C program you
    may have to resort to e.g. pseudo ttys to trick it into believing that
    its stdout is a terminal.
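
    On the Python side that could look roughly like this (a sketch for a
    Unix-like system; "./cprogram" is a placeholder, and note that the pty
    will also echo back what you write unless you turn echo off):

    import os, pty
    from subprocess import Popen

    master, slave = pty.openpty()           # master stays with us, slave goes to the child
    p = Popen(["./cprogram"], stdin=slave, stdout=slave, close_fds=True)
    os.close(slave)                         # only the child should hold the slave end now

    os.write(master, "some command\n")      # goes to the child's stdin
    print os.read(master, 1024)             # its (now line-buffered) output comes back here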

    Bernhard

    --
    Intevation GmbH http://intevation.de/
    Skencil http://skencil.org/
    Thuban http://thuban.intevation.org/
    Bernhard Herzog, Mar 2, 2007
    #11
  12. Donn Cave Guest

    In article <>,
    "Gabriel Genellina" <> wrote:

    > On Thu, 01 Mar 2007 14:42:00 -0300, <> wrote:
    >
    > > BUT If I use PIPE for both (so I can .write() on the stdin and .read()
    > > from the subprocess' stdout stream (better: file descriptor)) reading
    > > from the subprocess stdout blocks forever. If I write something onto
    > > the subprocess' stdin that causes it to somehow proceed, I can read
    > > from its stdout.

    >
    > On http://docs.python.org/lib/popen2-flow-control.html there are some
    > notes on possible flow control problems you may encounter.


    It's a nice summary of one problem, a deadlock due to full pipe
    buffer when reading from two pipes. The proposed simple solution
    depends too much on the cooperation of the child process to be
    very interesting, though. The good news is that there is a real
    solution and it isn't terribly complex, you just have to use select()
    and UNIX file descriptor I/O. The bad news is that while this is
    a real problem, it isn't the one commonly encountered by first
    time users of popen.
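
    Something along these lines (a sketch for a Unix-like system; the child
    command is hypothetical):

    import os, select
    from subprocess import Popen, PIPE

    p = Popen(["./cprogram"], stdin=PIPE, stdout=PIPE, stderr=PIPE)

    # wait until one of the child's output pipes has data, with a timeout,
    # instead of committing to a read() that may block forever
    ready, _, _ = select.select([p.stdout, p.stderr], [], [], 5.0)
    for f in ready:
        print os.read(f.fileno(), 4096)     # read only what is available right now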

    The more common problem, where you're trying to have a dialogue
    over pipes with a program that wasn't written specifically to
    support that, is not solvable per se - I mean, you have to use
    another device (pty) or redesign the application.

    > If you have no control over the child process, it may be safer to use a
    > different thread for reading its output.


    Right - `I used threads to solve my problem, and now I have two
    problems.' It can work for some variations on this problem, but
    not the majority of them.

    Donn Cave,
    Donn Cave, Mar 2, 2007
    #12
  13. On Fri, 02 Mar 2007 14:38:59 -0300, Donn Cave <> wrote:

    > In article <>,
    > "Gabriel Genellina" <> wrote:
    >
    >> On http://docs.python.org/lib/popen2-flow-control.html there are some
    >> notes on possible flow control problems you may encounter.

    >
    > It's a nice summary of one problem, a deadlock due to full pipe
    > buffer when reading from two pipes. The proposed simple solution
    > depends too much on the cooperation of the child process to be
    > very interesting, though. The good news is that there is a real
    > solution and it isn't terribly complex, you just have to use select()
    > and UNIX file descriptor I/O. The bad news is that while this is
    > a real problem, it isn't the one commonly encountered by first
    > time users of popen.


    More bad news: you can't use select() with file handles on Windows.

    >> If you have no control over the child process, it may be safer to use a
    >> different thread for reading its output.

    >
    > Right - `I used threads to solve my problem, and now I have two
    > problems.' It can work for some variations on this problem, but
    > not the majority of them.


    Any pointers on what kind of problems may happen, and usual solutions for
    them?
    On Windows one could use asynchronous I/O, or I/O completion ports, but
    neither of these are available directly from Python. So using a separate
    thread for reading may be the only solution, and I can't see why it is so
    bad. (Apart from buffering on the child process, which you can't control
    anyway).

    --
    Gabriel Genellina
    Gabriel Genellina, Mar 6, 2007
    #13
  14. Donn Cave Guest

    In article <>,
    "Gabriel Genellina" <> wrote:

    > On Fri, 02 Mar 2007 14:38:59 -0300, Donn Cave <> wrote:
    >
    > > In article <>,
    > > "Gabriel Genellina" <> wrote:
    > >
    > >> On http://docs.python.org/lib/popen2-flow-control.html there are some
    > >> notes on possible flow control problems you may encounter.

    > >
    > > It's a nice summary of one problem, a deadlock due to full pipe
    > > buffer when reading from two pipes. The proposed simple solution
    > > depends too much on the cooperation of the child process to be
    > > very interesting, though. The good news is that there is a real
    > > solution and it isn't terribly complex, you just have to use select()
    > > and UNIX file descriptor I/O. The bad news is that while this is
    > > a real problem, it isn't the one commonly encountered by first
    > > time users of popen.

    >
    > More bad news: you can't use select() with file handles on Windows.


    Bad news about UNIX I/O on Microsoft Windows is not really news.
    I am sure I have heard of some event handling function analogous
    to select, but don't know if it's a practical solution here.

    > >> If you have no control over the child process, it may be safer to use a
    > >> different thread for reading its output.

    > >
    > > Right - `I used threads to solve my problem, and now I have two
    > > problems.' It can work for some variations on this problem, but
    > > not the majority of them.

    >
    > Any pointers on what kind of problems may happen, and usual solutions for
    > them?
    > On Windows one could use asynchronous I/O, or I/O completion ports, but
    > neither of these are available directly from Python. So using a separate
    > thread for reading may be the only solution, and I can't see why is it so
    > bad. (Apart from buffering on the child process, which you can't control
    > anyway).


    I wouldn't care to get into an extensive discussion of the general
    merits and pitfalls of threads. Other than that ... let's look at
    the problem:

    - I am waiting for child process buffered output
    - I have no control over the child process

    Therefore I spawn a thread to do this waiting, so the parent thread
    can continue about its business. But assuming that its business
    eventually does involve this dialogue with the child process, it
    seems that I have not resolved that problem at all, I've only added
    to it. I still have no way to get the output.

    Now if you want to use threads because you're trying to use Microsoft
    Windows as some sort of a half-assed UNIX, that's a different issue
    and I wouldn't have any idea what's best.

    Donn Cave,
    Donn Cave, Mar 6, 2007
    #14
