PEP 249 - DB API question


k3xji

Hi all,

As development goes on for a server project, it turns out that I am
using MySQLdb and DB interactions extensively. One question keeps
bothering me: why don't we have a timeout for queries in PEP 249
(the DB API)?

Is it really safe to wait for a query to finish? That is, does the
call always return, even if the DB server goes down?

Also, from my point of view, it would be a good feature. We could run
long/non-critical DB queries with a timeout and slow/critical ones
without one. That would give us some chance to prioritize queries by
their criticality. And I don't see much effort in implementing this:
one would only have to change the socket logic in the relevant DB
adapter's source code.

What do you think?

Thanks
 

James Mills

> As development goes on for a server project, it turns out that I am
> using MySQLdb and DB interactions extensively. One question keeps
> bothering me: why don't we have a timeout for queries in PEP 249
> (the DB API)?

Because not all database engines support this?

> Is it really safe to wait for a query to finish? That is, does the
> call always return, even if the DB server goes down?

Try using the non-blocking features (these may be RDBMS-specific).

> Also, from my point of view, it would be a good feature. We could run
> long/non-critical DB queries with a timeout and slow/critical ones
> without one. That would give us some chance to prioritize queries by
> their criticality. And I don't see much effort in implementing this:
> one would only have to change the socket logic in the relevant DB
> adapter's source code.

Patches are welcome. A suggestion:

Try spawning a new process to run your query
in. Use the multiprocessing library. Your main
application can then just poll the db/query processes
to see if they're a) finished and b) have a result

Your application server can also c) kill long-running
queries that are "deemed" to be taking "too long"
and may never finish (e.g. Cartesian joins).
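The a)/b)/c) scheme above could be sketched with the standard library's
multiprocessing module. Here run_query and its fake result are hypothetical
stand-ins for a real driver call, not part of any actual DB API:

```python
import multiprocessing
import time

def run_query(sql, conn):
    """Hypothetical stand-in for a real DB call; sends its rows back."""
    time.sleep(1.0 if "slow" in sql else 0.0)  # simulate query latency
    conn.send([("row", 1)])

def query_with_timeout(sql, timeout):
    """Run the query in a child process and poll it with a deadline."""
    parent_end, child_end = multiprocessing.Pipe()
    proc = multiprocessing.Process(target=run_query, args=(sql, child_end))
    proc.start()
    proc.join(timeout)            # a) wait to see if it finished in time
    if proc.is_alive():           # deemed to be taking "too long"
        proc.terminate()          # c) kill the long-running query
        proc.join()
        return None
    return parent_end.recv()      # b) fetch the result it sent back
```

A fast query returns its rows; one that outlives the deadline is terminated
and yields None.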

--JamesMills
 

M.-A. Lemburg

> Hi all,
>
> As development goes on for a server project, it turns out that I am
> using MySQLdb and DB interactions extensively. One question keeps
> bothering me: why don't we have a timeout for queries in PEP 249
> (the DB API)?
>
> Is it really safe to wait for a query to finish? That is, does the
> call always return, even if the DB server goes down?
>
> Also, from my point of view, it would be a good feature. We could run
> long/non-critical DB queries with a timeout and slow/critical ones
> without one. That would give us some chance to prioritize queries by
> their criticality. And I don't see much effort in implementing this:
> one would only have to change the socket logic in the relevant DB
> adapter's source code.
>
> What do you think?

This would be a question for the Python DB-SIG mailing list.

Timeouts and their handling are generally very database-specific. It
is difficult to provide a reliable, portable way of configuring them,
and this may well not even be within the scope of a database API
(e.g. because the timeout has to be configured on the database server
using some config file).

I'd suggest you check whether MySQL provides a way to set timeouts
and then just use that for your project.
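For instance, the server-side timeouts alluded to above live in the
database server's own configuration file rather than in any Python
API. A MySQL sketch (the values are purely illustrative):

```ini
# my.cnf (illustrative values) -- server-side timeouts, set outside
# the scope of PEP 249
[mysqld]
wait_timeout      = 28800   # drop a connection idle longer than this (s)
net_read_timeout  = 30      # abort if the client stalls while sending
net_write_timeout = 60      # abort if the client stops reading results
```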

--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source (#1, Nov 04 2008)


eGenix.com Software, Skills and Services GmbH, Pastor-Loeh-Str. 48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
Registered at Amtsgericht Duesseldorf: HRB 46611
 

k3xji

> Try spawning a new process to run your query
> in. Use the multiprocessing library. Your main
> application can then just poll the db/query processes
> to see if they're a) finished and b) have a result
>
> Your application server can also c) kill long-running
> queries that are "deemed" to be taking "too long"
> and may never finish (e.g. Cartesian joins).

Just thinking out loud...

A more backward-compatible way to do that would be to have a thread
pool running the queries, with the main thread polling to see whether
the worker threads are taking too long to complete. From a performance
point of view, though, wouldn't this be a nightmare? You had a good
reason to suggest multiprocessing, right? But at least I can implement
my critical queries with this kind of design, as there are not many of
them.

Good idea, thanks...
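The thread-pool variant described above could be sketched with the
standard library's concurrent.futures module (run_query and its fake
result are hypothetical stand-ins for a real driver call). Note the
caveat that motivates the multiprocessing alternative: a timed-out
thread cannot be killed the way a process can:

```python
import concurrent.futures
import time

def run_query(sql):
    """Hypothetical stand-in for a real DB call."""
    time.sleep(1.0 if "slow" in sql else 0.0)  # simulate query latency
    return [("row", 1)]

# A pool of threads running queries; callers wait with a deadline.
pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def query_with_timeout(sql, timeout):
    future = pool.submit(run_query, sql)
    try:
        return future.result(timeout=timeout)  # poll/wait up to `timeout`
    except concurrent.futures.TimeoutError:
        # The thread cannot be killed: the query keeps running in the
        # background until the driver itself returns.
        return None
```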
 

James Mills

> Just thinking out loud...
>
> A more backward-compatible way to do that would be to have a thread
> pool running the queries, with the main thread polling to see whether
> the worker threads are taking too long to complete. From a performance
> point of view, though, wouldn't this be a nightmare? You had a good
> reason to suggest multiprocessing, right? But at least I can implement
> my critical queries with this kind of design, as there are not many of
> them.

I hate threads :) To be perfectly honest, I would
use processes for performance reasons, and if it
were me, I would use my new shiny circuits [1]
library to trigger events when the queries are done.

--JamesMills

[1] http://trac.softcircuit.com.au/circuits/
 

Lawrence D'Oliveiro

> Try spawning a new process to run your query in.

One approach might be to have two processes: the worker process and the
watcher process. The worker does the work, of course. Before performing any
call that may hang, the worker sends a message to the watcher: "if you
don't hear back from me in x seconds, kill me". It then does the call.
After the call, it sends another message to the watcher: "OK, I'm back,
cancel the timeout".
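A minimal sketch of that worker/watcher pairing with multiprocessing
(the message names, timings, and the worker's fake "call" are all
hypothetical):

```python
import os
import signal
import time
from multiprocessing import Pipe, Process

def watcher(conn, worker_pid):
    """Kill the worker if it fails to report back before its deadline."""
    deadline = None
    while True:
        wait = None if deadline is None else max(0.0, deadline - time.monotonic())
        if conn.poll(wait):
            msg = conn.recv()
            if msg[0] == "arm":        # ("arm", seconds): start a deadline
                deadline = time.monotonic() + msg[1]
            elif msg[0] == "cancel":   # "OK, I'm back": clear the deadline
                deadline = None
            elif msg[0] == "done":     # worker finished normally
                return
        else:                          # deadline passed with no message
            os.kill(worker_pid, signal.SIGKILL)
            return

def worker(conn, delay):
    conn.send(("arm", 0.5))   # "if you don't hear back in 0.5 s, kill me"
    time.sleep(delay)         # stands in for the call that may hang
    conn.send(("cancel",))    # "OK, I'm back, cancel the timeout"
    conn.send(("done",))

def run(delay):
    """Returns True if the worker survived, False if it was killed."""
    watcher_end, worker_end = Pipe()
    w = Process(target=worker, args=(worker_end, delay))
    w.start()
    wt = Process(target=watcher, args=(watcher_end, w.pid))
    wt.start()
    w.join()
    wt.join()
    return w.exitcode == 0
```

A worker that reports back within its deadline survives; one that
stalls past it is killed by the watcher.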
 
