Multiprocessing.Pipe in a daemon

Falcolas

So, I'm running into a somewhat crazy bug.

I am running several workers using multiprocessing to handle gearman
requests. I'm using pipes to funnel log messages from each of the
workers back to the controlling process, which iterates over the
receiving ends of the pipes, checking for messages with poll() and
logging them with the parent's log handler.
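
The Worker class itself is trimmed from the snippet below, so for
context: each worker holds the send end of its pipe and ships
(level, message) tuples back to the parent. An illustrative sketch,
not the real class (which also does the gearman work):

import logging
import multiprocessing

class Worker(multiprocessing.Process):
    """Illustrative sketch only -- the real class also serves gearman jobs."""

    def __init__(self, w_num, job_name, gm_servers, log_pipe):
        super(Worker, self).__init__()
        self.w_num = w_num
        self.job_name = job_name
        self.gm_servers = gm_servers
        self.log_pipe = log_pipe

    def log(self, level, msg):
        # Send the record to the controlling process instead of logging
        # directly, so all output goes through the parent's handler.
        self.log_pipe.send((level, msg))

    def run(self):
        self.log(logging.INFO, "worker {0} starting".format(self.w_num))
        # ... register with gearman and serve requests, calling self.log() ...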

This works really well when I'm not daemonizing the entire thing.
However, when I daemonize the process (which happens well before any
setup of the pipes and multiprocessing.Process instances), a pipe
which has nothing in it returns True from poll(), and the subsequent
pipe.recv() call blocks forever. The gearman workers themselves are
still operational and responsive, and starting only one worker makes
the problem go away.
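
For what it's worth, the daemonization step is nothing exotic;
assuming the standard double-fork pattern, it looks roughly like
this (a simplified sketch, not the exact code):

import os
import sys

def daemonize():
    # Standard double-fork: detach from the controlling terminal and session.
    if os.fork() > 0:
        sys.exit(0)  # exit first parent
    os.setsid()
    if os.fork() > 0:
        sys.exit(0)  # exit second parent
    os.chdir("/")
    os.umask(0)
    # Redirect stdin/stdout/stderr to /dev/null so no tty fds are held open.
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)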

Has anybody seen anything like this?

#Trim

# Create all of the endpoints
endpoints = []
log_pipes = []
for w_num in range(5):
    (recv, snd) = multiprocessing.Pipe(False)

    # Start the worker
    logger.debug("Creating the worker {0}".format(w_num))
    worker = Worker(w_num, job_name, gm_servers, snd)

    # Add the worker to the list of endpoints so it can be started
    endpoints.append(worker)
    log_pipes.append(recv)

# Trim starting endpoints

try:
    while True:
        time.sleep(1)
        pipe_logger(logger, log_pipes)
except (KeyboardInterrupt, SignalQuit):
    pass

# Trim cleanup

def pipe_logger(logger_obj, pipes):
    done = False
    while not done:
        done = True
        for p in pipes:
            if p.poll():  # <-- Returning True after a previous pipe actually had data
                try:
                    log_level, log_msg = p.recv()  # <-- Hanging here
                except EOFError:
                    continue
                done = False
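
As an aside, on Python 3.3+ the same drain loop can be written with
multiprocessing.connection.wait(), which reports readiness across all
pipes at once instead of polling each one. A sketch, not the code
from the daemon; the logger_obj.log() call stands in for the
(trimmed) logging step:

from multiprocessing.connection import wait

def pipe_logger(logger_obj, pipes):
    # Drain everything that is currently readable, then return.
    while True:
        ready = wait(pipes, timeout=0)  # non-blocking: only already-readable ends
        if not ready:
            break
        for p in ready:
            try:
                log_level, log_msg = p.recv()
            except EOFError:
                pipes.remove(p)  # worker closed its end of the pipe
                continue
            logger_obj.log(log_level, log_msg)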
 
