python-daemon interaction with multiprocessing (secure-smtpd)


Grant Edwards

With Python 2.7.5, I'm trying to use python-daemon 1.6 and its
DaemonRunner helper with secure-smtpd 1.1.9, which appears to use
multiprocessing and a process pool under the covers. There seem to be
a couple of process issues:

1) The pid file created by DaemonRunner disappears. This seems to
happen when the SMTP client closes the connection without saying
goodbye first. The process whose PID was in the pid file before it
vanished is still running (as is the pool of worker processes), and
they are still accepting connections and working as they should.

Has anybody else had any luck with DaemonRunner and pidfiles?

2) When DaemonRunner kills the "lead" process (the parent of the
worker pool), the worker pool stays alive and continues to accept
and handle requests. [I've sent the lead process TERM and QUIT
signals by hand with kill and got the same result: the worker pool
continues to run.]

How do you terminate a Python program that's using multiprocessing?
 

Grant Edwards

With Python 2.7.5, I'm trying to use python-daemon 1.6 and its
DaemonRunner helper with secure-smtpd 1.1.9, which appears to use
multiprocessing and a process pool under the covers. There seem to be
a couple of process issues:

1) The pid file created by DaemonRunner disappears. This seems to
happen when the SMTP client closes the connection without saying
goodbye first.

Hmm. After some further testing, it looks like it often disappears as
soon as the first connection is accepted (which I think is when the
first worker process is created).

How do you terminate a Python program that's using multiprocessing?

It looks like you have to kill all the threads individually. :/
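
Something like this, if you actually hold the Process objects (a rough
sketch; secure-smtpd creates its workers internally, so you don't):

    import time
    import multiprocessing

    def worker():
        # stand-in for a worker that loops forever on a blocking accept()
        while True:
            time.sleep(1)

    if __name__ == '__main__':
        workers = [multiprocessing.Process(target=worker) for _ in range(4)]
        for p in workers:
            p.start()

        # shut down: terminate each worker individually, then reap it
        for p in workers:
            p.terminate()
            p.join()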
 

Grant Edwards

As I understand it, the ‘multiprocessing’ module
<URL:https://docs.python.org/3/library/multiprocessing.html> does not
create multiple threads; it creates multiple processes.

Right. I should have written processes rather than threads.

It also closely follows the API for the ‘threading’ module. That
includes the ability to manage a pool of workers
<URL:https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing.pool>.

Except when you kill the parent of a bunch of threads, they all get
killed. That doesn't seem to be the case for multiprocessing.

You can ask the pool of workers to close when they're done
<URL:https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.close>.
Does that address the requirement?

I'm not sure. It's not really my code that's creating and managing
the pool: that's happening inside the secure-smtpd module from
https://github.com/bcoe/secure-smtpd. There's a little bit of wrapper
code that configures the server and then starts it -- after that I
don't have much control over anything.

Mainly, I'm just trying to figure out the right way to terminate the
server from an /etc/init script.
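
For reference, the close()/join() route would look roughly like this if
the pool were my own (a minimal sketch, not secure-smtpd's actual code):

    import multiprocessing

    def handle(item):
        # stand-in for whatever work a pool worker does
        return item * 2

    if __name__ == '__main__':
        pool = multiprocessing.Pool(processes=4)
        results = pool.map(handle, range(10))
        pool.close()   # no more work will be submitted
        pool.join()    # wait for the workers to finish and exit
        print(results)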
 

Antoon Pardon

On 07-05-14 21:11, Grant Edwards wrote:
Mainly, I'm just trying to figure out the right way to terminate the
server from an /etc/init script.
As far as I understand, you have to make sure that your daemon is a
process group leader. All the children it forks will then belong to
its process group. You can then normally kill all the processes with
pkill -g ...
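
Something like this at daemon start-up (a rough sketch; python-daemon's
detach step may already make the process a session, and therefore
group, leader):

    import os

    # Make this process a process group leader.  Every child it forks
    # later (including multiprocessing workers) inherits the same
    # process group, so the whole tree can be signalled at once.
    os.setpgrp()

    # ... start the SMTP server / worker pool here ...

The init script can then run pkill -g with the PID from the pidfile
(the leader's PID doubles as the group ID), or equivalently send the
signal to the negated group ID with plain kill.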
 

Grant Edwards

With Python 2.7.5, I'm trying to use python-daemon 1.6 and its
DaemonRunner helper with secure-smtpd 1.1.9, which appears to use
multiprocessing and a process pool under the covers. There seem to be
a couple of process issues:

1) The pid file created by DaemonRunner disappears. [...]

2) When DaemonRunner kills the "lead" process (the parent of the
worker pool), the worker pool stays alive and continues to accept
and handle requests. [...]

I've tracked both these problems down to a single bug in secure_smtpd.

The secure_smtpd server is a normal asyncore server until the first
connection arrives. At that point, it creates a bunch of
multiprocessing worker processes _without_ the daemon flag, which all
loop doing blocking accept() calls. The parent process then shuts
down the asyncore server and returns.

When the parent process returns, DaemonRunner removes the pidfile,
since it thinks the server has terminated. However, the parent
process never actually dies -- it just hangs until all the children
terminate.

At this point, killing the parent process (whose PID _was_ in the
pidfile, and is now idle) does nothing: even though the parent process
gets killed, the worker processes keep working.

The fix is to

1) Change secure_smtpd to create the worker processes with daemon=True

2) After asyncore.loop() returns (which means the children have been
created), do a while 1: time.sleep(1) to wait for SIGTERM.
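
In secure_smtpd that presumably means setting .daemon = True on each
worker Process before start() (the daemon= constructor keyword only
exists in Python 3.3+, so on 2.7 it has to be the attribute). A
minimal standalone sketch of the shape of the fix, with a dummy worker
standing in for the blocking accept() loop:

    import signal
    import sys
    import time
    import multiprocessing

    def worker():
        # stand-in for secure_smtpd's blocking accept()/handle loop
        while True:
            time.sleep(1)

    def main():
        for _ in range(4):
            p = multiprocessing.Process(target=worker)
            p.daemon = True   # 1) daemonic workers are torn down when
                              #    the parent exits cleanly
            p.start()

        # 2) don't return (that's what makes DaemonRunner delete the
        #    pidfile); block here until SIGTERM arrives, then exit,
        #    taking the daemonic workers down too.
        signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))
        while True:
            time.sleep(1)

    if __name__ == '__main__':
        main()

With the workers daemonic, a clean exit of the lead process (e.g. on
SIGTERM from the init script) takes the whole pool down with it.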
 
