Reassign or discard Popen().stdout from a server process

Discussion in 'Python' started by John O'Hagan, Feb 1, 2011.

  1. John O'Hagan

    John O'Hagan Guest

    I'm starting a server process as a subprocess. Startup is slow and
    unpredictable (around 3-10 sec), so before proceeding I read from its stdout
    until I get a line that tells me it's ready. In simplified form:

    import subprocess
    proc = subprocess.Popen(['server', 'args'], stdout=subprocess.PIPE)
    while proc.stdout.readline() != "Ready.\n":
        pass

    Now I can start communicating with the server, but I eventually realised
    that since I'm no longer reading stdout, the pipe buffer fills up with
    output from the server; before long the server's writes block and it stops
    working.

    I can't keep reading because that will block - there won't be any more output
    until I send some input, and I don't want it in any case.

    To try to fix this I added:

    proc.stdout = os.path.devnull

    which has the effect of stopping the server from failing, but I'm not convinced
    it's doing what I think it is. If I replace devnull in the above line with a
    real file, it stays empty although I know there is more output, which makes me
    think it hasn't really worked.

    Simply closing stdout also seems to stop the crashes, but doesn't that mean
    it's still being written to, but the writes are just silently failing? In
    either case I'm wary of more elusive bugs arising from misdirected stdout.

    Is it possible to re-assign the stdout of a subprocess after it has started?
    Or just close it? What's the right way to read stdout up to a given line, then
    discard the rest?

    Thanks,

    john
    John O'Hagan, Feb 1, 2011
    #1

  2. Nobody

    Nobody Guest

    On Tue, 01 Feb 2011 08:30:19 +0000, John O'Hagan wrote:

    > I can't keep reading because that will block - there won't be any more
    > output until I send some input, and I don't want it in any case.
    >
    > To try to fix this I added:
    >
    > proc.stdout = os.path.devnull
    >
    > which has the effect of stopping the server from failing, but I'm not
    > convinced it's doing what I think it is.


    It isn't. os.path.devnull is a string, not a file. But even if you did:

    proc.stdout = open(os.path.devnull, 'w')

    that still wouldn't work.
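
    Redirection has to be wired up when Popen() is called, not after the fact.
    A minimal sketch along those lines, reusing the placeholder command from the
    original post (only an option if you don't need the startup messages at all):

    import os
    import subprocess

    # The child's stdout is connected when the process is created;
    # reassigning proc.stdout afterwards never reaches the child.
    devnull = open(os.devnull, 'w')
    proc = subprocess.Popen(['server', 'args'], stdout=devnull)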

    > If I replace devnull in the above line with a real file, it stays empty
    > although I know there is more output, which makes me think it hasn't
    > really worked.


    It hasn't.

    > Simply closing stdout also seems to stop the crashes, but doesn't that mean
    > it's still being written to, but the writes are just silently failing? In
    > either case I'm wary of more elusive bugs arising from misdirected stdout.


    If you close proc.stdout, the next time the server writes to its stdout,
    it will receive SIGPIPE or, if it catches that, the write will fail with
    EPIPE (write on pipe with no readers). It's up to the server how it deals
    with that.

    > Is it possible to re-assign the stdout of a subprocess after it has started?


    No.

    > Or just close it? What's the right way to read stdout up to a given
    > line, then discard the rest?


    If the server can handle the pipe being closed, go with that. Otherwise,
    options include redirecting stdout to a file and running "tail -f" on the
    file from within Python, or starting a thread or process whose sole
    function is to read and discard the server's output.
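
    A rough sketch of that last option, reusing the Popen() call from the
    original post (the drain() helper is just an illustrative name):

    import subprocess
    import threading

    proc = subprocess.Popen(['server', 'args'], stdout=subprocess.PIPE)

    # Wait for the startup line as before.
    while proc.stdout.readline() != "Ready.\n":
        pass

    def drain(pipe):
        # Read and discard everything else the server writes.
        while pipe.readline():
            pass

    # Use a daemon thread so it doesn't keep the interpreter alive at exit.
    drainer = threading.Thread(target=drain, args=(proc.stdout,))
    drainer.daemon = True
    drainer.start()

    # ...now talk to the server without the pipe ever filling up.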
    Nobody, Feb 3, 2011
    #2

  3. John O'Hagan

    John O'Hagan Guest

    On Thu, 3 Feb 2011, Nobody wrote:
    > On Tue, 01 Feb 2011 08:30:19 +0000, John O'Hagan wrote:
    > > I can't keep reading because that will block - there won't be any more
    > > output until I send some input, and I don't want it in any case.
    > >
    > > To try to fix this I added:
    > >
    > > proc.stdout = os.path.devnull
    > >
    > > which has the effect of stopping the server from failing, but I'm not
    > > convinced it's doing what I think it is.

    >
    > It isn't. os.path.devnull is a string, not a file. But even if you did:
    >
    > proc.stdout = open(os.path.devnull, 'w')
    >
    > that still wouldn't work.


    As mentioned earlier in the thread, I did in fact use open(); this was a typo,
    [...]
    > > Is it possible to re-assign the stdout of a subprocess after it has
    > > started?

    >
    > No.
    >
    > > Or just close it? What's the right way to read stdout up to a given
    > > line, then discard the rest?

    >
    > If the server can handle the pipe being closed, go with that. Otherwise,
    > options include redirecting stdout to a file and running "tail -f" on the
    > file from within Python, or starting a thread or process whose sole
    > function is to read and discard the server's output.


    Thanks, that's all clear now.

    But I'm still a little curious as to why even unsuccessfully attempting to
    reassign stdout seems to stop the pipe buffer from filling up.

    John
    John O'Hagan, Feb 4, 2011
    #3
  4. Nobody

    Nobody Guest

    On Fri, 04 Feb 2011 15:48:55 +0000, John O'Hagan wrote:

    > But I'm still a little curious as to why even unsuccessfully attempting to
    > reassign stdout seems to stop the pipe buffer from filling up.


    It doesn't. If the server continues to run, then it's ignoring/handling
    both SIGPIPE and the EPIPE error. Either that, or another process has the
    read end of the pipe open (so no SIGPIPE/EPIPE), and the server is using
    non-blocking I/O or select() so that it doesn't block writing its
    diagnostic messages.
    Nobody, Feb 9, 2011
    #4
  5. John O'Hagan

    John O'Hagan Guest

    On Wed, 9 Feb 2011, Nobody wrote:
    > On Fri, 04 Feb 2011 15:48:55 +0000, John O'Hagan wrote:
    > > But I'm still a little curious as to why even unsuccessfully attempting
    > > to reassign stdout seems to stop the pipe buffer from filling up.

    >
    > It doesn't. If the server continues to run, then it's ignoring/handling
    > both SIGPIPE and the EPIPE error. Either that, or another process has the
    > read end of the pipe open (so no SIGPIPE/EPIPE), and the server is using
    > non-blocking I/O or select() so that it doesn't block writing its
    > diagnostic messages.


    The server fails with stdout=PIPE if I don't keep reading it, but doesn't
    fail if I reassign proc.stdout to anything (I've tried files, strings,
    integers, and None) soon after starting the process, without any other
    changes. How is that consistent with either of the above conditions? I'm
    sure you're right, I just don't understand.

    Regards,

    John
    John O'Hagan, Feb 10, 2011
    #5
  6. Nobody

    Nobody Guest

    On Thu, 10 Feb 2011 08:35:24 +0000, John O'Hagan wrote:

    >> > But I'm still a little curious as to why even unsuccessfully attempting
    >> > to reassign stdout seems to stop the pipe buffer from filling up.

    >>
    >> It doesn't. If the server continues to run, then it's ignoring/handling
    >> both SIGPIPE and the EPIPE error. Either that, or another process has the
    >> read end of the pipe open (so no SIGPIPE/EPIPE), and the server is using
    >> non-blocking I/O or select() so that it doesn't block writing its
    >> diagnostic messages.

    >
    > The server fails with stdout=PIPE if I don't keep reading it, but
    > doesn't fail if I do stdout=anything (I've tried files, strings,
    > integers, and None) soon after starting the process, without any other
    > changes. How is that consistent with either of the above conditions? I'm
    > sure you're right, I just don't understand.


    What do you mean by "fail"? I wouldn't be surprised if it hung, due to the
    write() on stdout blocking. If you reassign the .stdout member, the
    existing file object is likely to become unreferenced, get garbage
    collected, and close the pipe, which would prevent the server from
    blocking (the write() will fail rather than blocking).
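
    If that accidental close is what keeps the server happy, it is clearer to do
    it deliberately once the ready line has arrived; a sketch, assuming the
    server tolerates the resulting SIGPIPE/EPIPE:

    while proc.stdout.readline() != "Ready.\n":
        pass
    # Drop the read end of the pipe on purpose rather than by garbage
    # collection; from here on the server gets SIGPIPE (or EPIPE) when it
    # writes to stdout, and how it copes with that is up to the server.
    proc.stdout.close()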

    If the server puts the pipe into non-blocking mode, write() will fail with
    EAGAIN if you don't read it but with EPIPE if you close the pipe. The
    server may handle these cases differently.
    Nobody, Feb 11, 2011
    #6
  7. John O'Hagan

    John O'Hagan Guest

    On Fri, 11 Feb 2011, Nobody wrote:
    > On Thu, 10 Feb 2011 08:35:24 +0000, John O'Hagan wrote:
    > >> > But I'm still a little curious as to why even unsuccessfully
    > >> > attempting to reassign stdout seems to stop the pipe buffer from
    > >> > filling up.
    > >>
    > >> It doesn't. If the server continues to run, then it's ignoring/handling
    > >> both SIGPIPE and the EPIPE error. Either that, or another process has
    > >> the read end of the pipe open (so no SIGPIPE/EPIPE), and the server is
    > >> using non-blocking I/O or select() so that it doesn't block writing its
    > >> diagnostic messages.

    > >
    > > The server fails with stdout=PIPE if I don't keep reading it, but
    > > doesn't fail if I do stdout=anything (I've tried files, strings,
    > > integers, and None) soon after starting the process, without any other
    > > changes. How is that consistent with either of the above conditions? I'm
    > > sure you're right, I just don't understand.

    >
    > What do you mean by "fail". I wouldn't be surprised if it hung, due to the
    > write() on stdout blocking. If you reassign the .stdout member, the
    > existing file object is likely to become unreferenced, get garbage
    > collected, and close the pipe, which would prevent the server from
    > blocking (the write() will fail rather than blocking).
    >
    > If the server puts the pipe into non-blocking mode, write() will fail with
    > EAGAIN if you don't read it but with EPIPE if you close the pipe. The
    > server may handle these cases differently.


    By "fail" I mean the server, which is the Fluidsynth soundfont rendering
    program, stops producing sound in a way consistent with the blocked write() as
    you describe. It continues to read stdin; in fact, Ctrl+C-ing out of the block
    produces all the queued sounds at once.

    What I didn't realise was that the (ineffective) reassignment of stdout has
    the side-effect of closing it, by dropping the last reference to it, as you
    explain above. I asked on the Fluidsynth list, and currently it simply
    ignores the fact that the pipe it's writing to has been closed. It all makes
    sense now, thanks.


    John
    John O'Hagan, Feb 12, 2011
    #7