Simple question about Queue.Queue and threads

Discussion in 'Python' started by Frank Millman, Feb 5, 2010.

  1. Hi all

    Assume you have a server process running, a pool of worker threads to
    perform tasks, and a Queue.Queue() to pass the tasks to the workers.

    In order to shut down the server cleanly, you want to ensure that the
    workers have all finished their tasks. I like the technique of putting a
    None onto the queue, and have each worker check for None, put None back onto
    the queue, and terminate itself.
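
    The worker side of this would look roughly like the following sketch
    (handle_task standing in for whatever the real work is, q being the
    shared Queue.Queue) -

        def worker_loop():
            while True:
                task = q.get()
                if task is None:
                    # put the sentinel back so the other workers see it too
                    q.put(None)
                    break
                handle_task(task)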

    The main program would look something like this -

        q.put(None)
        for worker in worker_threads:
            worker.join()

    At this point you can be sure that each thread has completed its tasks and
    terminated itself.

    However, the queue is not empty - it still has the final None in it.

    Is it advisable to finalise the cleanup like this? -

        while not q.empty():
            q.get()
            q.task_done()
        q.join()

    Or is this completely redundant?

    Thanks

    Frank Millman
    Frank Millman, Feb 5, 2010
    #1

  2. On Feb 5, 7:45 am, "Frank Millman" <> wrote:
    > Hi all
    >
    > Assume you have a server process running, a pool of worker threads to
    > perform tasks, and a Queue.Queue() to pass the tasks to the workers.
    >
    > In order to shut down the server cleanly, you want to ensure that the
    > workers have all finished their tasks. I like the technique of putting a
    > None onto the queue, and have each worker check for None, put None back onto
    > the queue, and terminate itself.
    >
    > The main program would look something like this -
    >
    >     q.put(None)
    >     for worker in worker_threads:
    >         worker.join()
    >
    > At this point you can be sure that each thread has completed its tasks and
    > terminated itself.
    >
    > However, the queue is not empty - it still has the final None in it.
    >
    > Is it advisable to finalise the cleanup like this? -
    >
    >     while not q.empty():
    >         q.get()
    >         q.task_done()
    >     q.join()
    >
    > Or is this completely redundant?
    >
    > Thanks
    >
    > Frank Millman


    Queue objects have support for this signaling baked in with
    q.task_done and q.join.

    After the server process has put all tasks into the queue, it can join
    the queue itself, not the worker threads.

        q.join()

    This will block until all tasks have been gotten AND completed. The
    worker threads would simply do this:

        task_data = q.get()
        do_task(task_data)
        q.task_done()

    Using pairs of get and task_done, you no longer need to send a signal.
    Just exit the server process and the worker threads will die (assuming,
    of course, you set .setDaemon(True) before starting each worker
    thread).
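
    Putting it together, the whole pattern might look something like this
    (do_task and the counts below are only illustrative placeholders) -

        import threading, Queue

        def do_task(task_data):
            print 'processing %r' % task_data   # stand-in for the real work

        q = Queue.Queue()

        def worker():
            while True:
                task_data = q.get()
                do_task(task_data)
                q.task_done()                # mark this item as finished

        for _ in range(4):
            t = threading.Thread(target=worker)
            t.setDaemon(True)                # workers die when the main thread exits
            t.start()

        for task_data in range(20):          # stand-in for the real task source
            q.put(task_data)

        q.join()                             # returns once every item is gotten AND completed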

    Steven Rumbalski
    Steven, Feb 8, 2010
    #2

  3. On Mon, 8 Feb 2010 06:51:02 -0800 (PST), Steven
    <> declaimed the following in
    gmane.comp.python.general:


    >
    > Queue objects have support for this signaling baked in with
    > q.task_done and q.join.
    >

    Only in Python 2.5 and later (and I, for one, only upgraded to 2.5
    last summer; I suspect 2.6 will be chosen later this year). Passing a
    unique sentinel works in all versions supporting Queue. And if a second
    Queue is used for returns from the threads, each expiring thread could
    return its ID for use in a .join() operation -- whereas using the
    q.join() method blocks the collector of the return data until all the
    tasks are done.
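
    A rough sketch of that second-queue arrangement (results themselves are
    left out; do_task and worker_threads stand in for the real pieces) -

        import threading, Queue

        task_q = Queue.Queue()      # work items, terminated by a None sentinel
        done_q = Queue.Queue()      # expiring workers report themselves here

        def worker():
            while True:
                task = task_q.get()
                if task is None:
                    task_q.put(None)                       # pass the sentinel on
                    done_q.put(threading.currentThread())  # hand back an ID to join on
                    return
                do_task(task)

        # collector side: send the sentinel, then join each worker as it
        # reports in, instead of blocking on the queue as a whole
        task_q.put(None)
        for _ in worker_threads:
            done_q.get().join()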

    --
    Wulfraed Dennis Lee Bieber KD6MOG
    HTTP://wlfraed.home.netcom.com/
    Dennis Lee Bieber, Feb 8, 2010
    #3
  4. On Feb 8, 4:51 pm, Steven <> wrote:
    >
    > Queue objects have support for this signaling baked in with
    > q.task_done and q.join.
    >
    > After the server process has put all tasks into the queue, it can join
    > the queue itself, not the worker threads.
    >
    > q.join()
    >
    > This will block until all tasks have been gotten AND completed. The
    > worker threads would simply do this:
    > task_data = q.get()
    > do_task(task_data)
    > q.task_done()
    >
    > Using pairs of get and task_done you no longer need to send a signal.
    > Just exit the server process and the worker threads will die (assuming
    > of course, you set .setDaemon(True) before starting each worker
    > thread).
    >


    Thanks, Steven.

    This works perfectly in my scenario, and tidies up the code a bit.

    Minor point - according to the 2.6 docs, .setDaemon(True) is the old API -
    the current way of specifying this is .daemon = True.
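
    In other words (worker being whatever the thread runs) -

        t = threading.Thread(target=worker)
        t.daemon = True    # the 2.6+ spelling; setDaemon(True) still works
        t.start()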

    Thanks for the tip.

    Frank
    Frank Millman, Feb 9, 2010
    #4
