Avoiding deadlocks in concurrent programming


Eloff

This is not really Python specific, but I know Python programmers are
among the best in the world. I have a fair understanding of the
concepts involved, enough to realize that I would benefit from the
experience of others :)

I have a shared series of objects in memory that may be > 100MB. Often
to perform a task for a client several of these objects must be used.
Since many clients can be making requests at once (>100 per second
during peak load) these objects must be protected with some method of
thread synchronization (threading.Lock would do fine.) This is somewhat
complicated by a backup thread which takes the data every 10 minutes
and exports it all to a format that can be saved to disk. Clearly the
backup thread must have exclusive access to all of these objects at
once, and there must be no half-completed transactions underway.

The easiest way from a design standpoint is to have a single lock and let
clients have exclusive access to everything, even though they only
ever need access to a fraction of the data. This makes it easy for the
backup thread, and it ensures that there's no trouble with deadlocks or
with partially completed transactions. However, imagine what would
happen if there are 100 clients in 100 threads waiting for access to that
lock. One could be almost finished with it, and then 100 threads could
get switched in and out, all doing nothing since they're all waiting
for the lock held by the one thread.
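
For concreteness, the single-lock arrangement in question might be
sketched like this (a minimal sketch; all names here are hypothetical):

import threading

data_lock = threading.Lock()   # one lock guarding the whole object graph
shared_objects = {}            # stands in for the >100MB of objects

def handle_request(name):
    with data_lock:                      # every client serializes here
        obj = shared_objects.get(name)
        # ... read or modify obj; do no I/O while holding the lock ...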

So I think you would need multiple locks so clients only acquire what
they need. This would let multiple threads access the data at once. But
now I have to deal with deadlocks since clients will usually acquire a
resource and then block acquiring another. It is very likely that one
client locks A, another locks B, then the guy with B waits for A and
the guy with A waits for B. Worse yet, the backup thread will go around
trying to lock everything and will no doubt deadlock everybody.

How do you avoid this? I thought that instead of blocking forever you could
try non-blocking acquires in a loop with a timeout of a few seconds; if
the timeout is reached, release any locks held and try again later,
with the backup thread being the only one to use blocking acquires,
since it would never complete its task otherwise.
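
Roughly, in code, that back-off idea might look like the sketch below.
The acquire_all helper is made up, and the small jittered sleep is an
assumption, there to reduce the chance that two backed-off threads keep
retrying in lockstep (livelock):

import random
import threading
import time

def acquire_all(locks, timeout=3.0):
    # Try to take every lock without blocking; on any failure, release
    # everything already held, pause briefly, and retry until the timeout.
    deadline = time.time() + timeout
    held = []
    while time.time() < deadline:
        for lock in locks:
            if lock.acquire(False):          # non-blocking attempt
                held.append(lock)
            else:
                while held:                  # back off completely
                    held.pop().release()
                time.sleep(random.uniform(0.01, 0.05))  # jitter against livelock
                break
        else:
            return held                      # success: caller must release these
    return None                              # timed out; try again later

lock_a, lock_b = threading.Lock(), threading.Lock()
held = acquire_all([lock_a, lock_b])
if held is not None:
    try:
        pass  # ... use the protected objects ...
    finally:
        for lock in held:
            lock.release()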

No doubt this is a common problem, how would you people deal with it?

-Dan
 

Paul Rubin

Eloff said:
I have a shared series of objects in memory that may be > 100MB. Often
to perform a task for a client several of these objects must be used.

Do you mean a few records of 20+ MB each, or millions of records of a
few dozen bytes, or what?
However, imagine what would happen if there are 100 clients in 100
threads waiting for access to that lock. One could be almost finished
with it, and then 100 threads could get switched in and out, all doing
nothing since they're all waiting for the lock held by the one thread.

If the 100 threads are blocked waiting for the lock, they shouldn't
get awakened until the lock is released. So this approach is
reasonable if you can minimize the lock time for each transaction.
No doubt this is a common problem, how would you people deal with it?

You're doing what every serious database implementation needs to do,
and whole books have been written about approaches. One approach is
to use a transaction and rollback system like a database does.
A lot depends on the particulars of what you're doing. Are you sure
you don't want to just use an RDBMS?
 

Eloff

Hi Paul,
Do you mean a few records of 20+ MB each, or millions of records of a
few dozen bytes, or what?

Well, they're objects with lists and dictionaries and data members and
other objects inside of them. Some are very large, maybe bigger than
20MB, while others are very numerous and small (a hundred thousand tiny
objects in a large list). Some of the large objects could have many
locks inside of them to allow simultaneous access to different parts,
while others would need to be accessed as a unit.
If the 100 threads are blocked waiting for the lock, they shouldn't
get awakened until the lock is released. So this approach is
reasonable if you can minimize the lock time for each transaction.

Now that is interesting, because if 100 clients have to go through the
system in a second, the server clearly is capable of sending 100
clients through in a second, and it doesn't matter if they all go
through "at once" or one at a time so long as nobody gets stuck waiting
for much longer than a few seconds. It would be very simple and
painless for me to send them all through one at a time. It is also
possible that certain objects are never accessed in the same action,
and those could have separate locks as an optimization (this would
require careful analysis of the different actions.)
You're doing what every serious database implementation needs to do ...
Are you sure you don't want to just use an RDBMS?

It was considered, but we decided that abstracting the data into tables
to be manipulated with SQL queries is substantially more complex for
the programmer and way too expensive for the system, since the average
action would require 20-100 queries.

Thanks,
-Dan
 

Steve Horsley

Eloff said:
Hi Paul,



Now that is interesting, because if 100 clients have to go through the
system in a second, the server clearly is capable of sending 100
clients through in a second, and it doesn't matter if they all go
through "at once" or one at a time so long as nobody gets stuck waiting
for much longer than a few seconds. It would be very simple and
painless for me to send them all through one at a time. It is also
possible that certain objects are never accessed in the same action,
and those could have separate locks as an optimization (this would
require careful analysis of the different actions.)

It is my understanding that Python's multithreading is done at the
interpreter level and that the interpreter itself is single
threaded. In this case, you cannot have multiple threads running
truly concurrently even on a multi-CPU machine, so as long as you
avoid I/O work while holding the lock, I don't think there should
be any performance hit using a single lock. The backup thread may
be an issue though.

Steve
 

Eloff

Hi Steve,

The backup thread only holds the lock long enough to create an
in-memory representation of the data. It writes to disk on its own
time after it has released the lock, so this is not an issue.
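
Roughly, in code (a sketch with made-up names; pickle stands in for
whatever export format is actually used):

import copy
import pickle
import threading
import time

data_lock = threading.Lock()
shared_objects = {}                          # hypothetical shared state

def backup_loop(path='backup.pickle'):
    while True:
        time.sleep(600)                      # every 10 minutes
        with data_lock:                      # exclusive access, kept brief
            snapshot = copy.deepcopy(shared_objects)
        with open(path, 'wb') as f:          # the slow disk I/O happens
            pickle.dump(snapshot, f)         # after the lock is released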

If you're saying what I think you are, then a single lock is actually
better for performance than multiple locks so long as one avoids
waiting for other resources like I/O.

-Dan
 

Konstantin Veretennicov

It is my understanding that Python's multithreading is done at the
interpreter level and that the interpreter itself is single
threaded. In this case, you cannot have multiple threads running
truly concurrently even on a multi-CPU machine

Python uses native threads:

>>> import thread, win32api
>>> def t():
...     print win32api.GetCurrentThreadId()
...
>>> for i in range(5):
...     thread.start_new_thread(t, ())
...
1212
1804
804
1276
1792

- kv
 

Dan Sommers

On 22 Jun 2005 14:09:42 -0700, Eloff wrote:
It was considered, but we decided that abstracting the data into
tables to be manipulated with SQL queries is substantially more
complex for the programmer and way too expensive for the system, since
the average action would require 20-100 queries.

Obviously, I only know what you've told us about your data, but 20-100
queries? That doesn't sound right. I know that I have often thought
that my data and its structure were complex, but then ended up with a
fairly straightforward table structure and pretty small queries. With
the right set of tables and a strategically placed view or two, IMO you
shouldn't need that many queries. As Paul noted, RDBMSes are well-studied
and well-understood; they are also extremely powerful when used
to their potential.

Regards,
Dan
 

Paul Rubin

Eloff said:
Now that is interesting, because if 100 clients have to go through the
system in a second, the server clearly is capable of sending 100
clients through in a second, and it doesn't matter if they all go
through "at once" or one at a time so long as nobody gets stuck waiting
for much longer than a few seconds. It would be very simple and
painless for me to send them all through one at a time. It is also
possible that certain objects are never accessed in the same action,
and those could have separate locks as an optimization (this would
require careful analysis of the different actions.)

If you can design the transactions to not need anything like I/O
waiting while a lock is being held, you may as well just use a single
lock. Even more Pythonically, have a single thread "own" the
structure and let other threads send requests through a Queue object
and get replies through another Queue. Even on a multiprocessor
system, CPython (because of the GIL) doesn't allow true parallel
threads, so it's ok to serialize access to the structure that way.
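
A minimal sketch of that owner-thread pattern (spelled with the modern
queue module; the names are made up):

import queue
import threading

requests = queue.Queue()

def owner():
    # The only thread that ever touches the shared structure,
    # so no locking around it is needed at all.
    shared = {}
    while True:
        func, args, reply = requests.get()
        reply.put(func(shared, *args))

threading.Thread(target=owner, daemon=True).start()

def call(func, *args):
    # Client side: enqueue a request, then block on a private
    # reply queue until the owner thread has serviced it.
    reply = queue.Queue()
    requests.put((func, args, reply))
    return reply.get()

call(lambda shared, k, v: shared.update({k: v}), 'answer', 42)
print(call(lambda shared, k: shared.get(k), 'answer'))   # prints 42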

If you need/want real parallelism on a multiprocessor, see
http://poshmodule.sf.net but then you start needing fancier
synchronization.
 

Neil Hodgson

Eloff:
So I think you would need multiple locks so clients only acquire what
they need. This would let multiple threads access the data at once. But
now I have to deal with deadlocks since clients will usually acquire a
resource and then block acquiring another. It is very likely that one
client locks A, another locks B, then the guy with B waits for A and
the guy with A waits for B. Worse yet, the backup thread will go around
trying to lock everything and will no doubt deadlock everybody.

The classic way to avoid lock cycles is to establish an order and
require that locks only be acquired in that order. There is an OK
introduction to deadlock on Wikipedia:
http://en.wikipedia.org/wiki/Deadlock
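
A minimal sketch of ordered acquisition (hypothetical names; alphabetical
order on resource names serves as the global order):

import threading

locks = {'A': threading.Lock(), 'B': threading.Lock()}

def acquire_ordered(names):
    # Take locks in one canonical order (alphabetical by name) so that
    # no two threads can ever hold them in conflicting orders.
    ordered = sorted(names)
    for name in ordered:
        locks[name].acquire()
    return ordered

def release_ordered(names):
    for name in reversed(names):
        locks[name].release()

held = acquire_ordered(['B', 'A'])   # acquired as A first, then B
try:
    pass  # ... work with both resources ...
finally:
    release_ordered(held)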

Neil
 

Paul McGuire

Using an RDBMS is no cure-all for deadlocks - I can just as easily
deadlock with an RDBMS as with my own threads and locks, probably
easier.

I try to pick up crumbs of knowledge from my co-workers, and one of the
smarter ones gave me this rubric for testing for deadlocks. You need 3
things to create deadlock:
1. multiple locks
2. multiple threads of execution
3. mixed access order to the locks (process 1 has lock A and wants lock
B, while concurrently process 2 has lock B and wants A)

So to avoid deadlock, invert the rubric and get rid of any one of these
elements. One proposed solution was to use a single global lock - this
would cancel out criterion number 1. An alternative might be to get rid
of multiple threads. Your current design uses a background thread, but
if this could be merged in some round-robin fashion with the other
needed processing (a generator pool perhaps?), then you could try door
#2. If you don't like either of these options, then look at ways to
ensure locks are always acquired in the same order - perhaps
alphabetically by resource name, or top-down if organized
hierarchically - to ensure that locks are never acquired in mixed
order. In the case of processes 1 and 2, if all locks were to be
acquired alphabetically, then process 2, holding B and wanting A, would
have to first release B, then request A and then re-request B. This
would break the deadlock with process 1.
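
The release-and-reacquire step might look like this sketch (hypothetical
names; note the caveat in the comments about revalidating state):

import threading

lock_a = threading.Lock()   # 'A' orders before 'B'
lock_b = threading.Lock()

def task():
    lock_b.acquire()
    # ... work that needs only B ...
    # Now A turns out to be needed too.  Taking A while holding B would
    # create a mixed acquisition order, so back off first:
    lock_b.release()
    lock_a.acquire()
    lock_b.acquire()
    try:
        # ... revalidate the earlier B work (its state may have changed
        # while B was released), then do the combined A+B work ...
        pass
    finally:
        lock_b.release()
        lock_a.release()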

I don't know if I've really offered anything new, but at least you have
some structure with which to approach your question.

-- Paul
 

Eloff

Thanks for all of the replies, I'm glad I posted here, you guys have
been very helpful.
Obviously, I only know what you've told us about your data, but 20-100
queries? That doesn't sound right ... RDBMSes are well-
studied and well-understood; they are also extremely powerful when used
to their potential.

Well, I doubt there'd be even as many as 20 unique queries in any one
action, but 20-100 queries executed is about average, and it can be
over ten thousand for one action, clearly painful to any RDBMS or
programmer.

And as Paul McGuire points out, RDBMSes don't help avoid deadlocks; in
fact they make them easier to create, in my opinion.

Paul Rubin, I think you're right, a single lock is the approach I'm
going to take. I don't want to seem stupid, but I don't understand what
you're talking about with the Queues; isn't a Queue just a synchronized
sequence? If you've got the time and you don't mind, I'd love a more
detailed explanation.

Neil, thanks for the link, I read through the article.
I try to pick up crumbs of knowledge from my co-workers, and one of the
smarter ones gave me this rubric for testing for deadlocks.

That's true, I'm going to remember that one.

Thanks a lot guys, I'm leaving on a trip tomorrow so I won't be able to
reply to this thread again, but I will read any other posts on it when
I come back.

-Dan
 

Konstantin Veretennicov

Even on a multiprocessor
system, CPython (because of the GIL) doesn't allow true parallel
threads ...

Please excuse my ignorance, do you mean that python threads are always
scheduled to run on the same single CPU? Or just that python threads
are often blocked waiting for GIL?

- kv
 

Donn Cave

Konstantin Veretennicov said:
Please excuse my ignorance, do you mean that python threads are always
scheduled to run on the same single CPU? Or just that python threads
are often blocked waiting for GIL?

Any thread may execute "inside" the interpreter, but not
concurrently with another.

I don't see the original point, though. If you have a C application
with no GIL, the queueing model is just as useful -- more, because
a GIL avoids the same kind of concurrency problems in your application
that it intends to avoid in the interpreter.

Rigorous application of the model can be a little awkward, though,
if you're trying to adapt it to a basically procedural application.
The original Stackless Python implementation had some interesting
options along those lines.

Donn Cave, (e-mail address removed)
 

Terry Hancock

Hi Paul,

It was considered, but we decided that abstracting the data into tables
to be manipulated with SQL queries is substantially more complex for
the programmer and way too expensive for the system, since the average
action would require 20-100 queries.

I realize you've probably already made a decision on this, but this sounds
like a classic argument for using an *object DBMS*, such as ZODB: It
certainly does support transactions, and "abstracting the data into tables"
is a non-issue as ZODB stores Python objects more or less directly (you
only have to worry about ensuring that objects are of "persistent" types
-- meaning either immutable, or providing persistence support explicitly).
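
A minimal sketch of what that looks like, assuming a ZODB3-style API
(the Client class and file name are made up):

import persistent
import transaction
import ZODB, ZODB.FileStorage

class Client(persistent.Persistent):         # hypothetical application class
    def __init__(self, name):
        self.name = name
        self.balance = 0

db = ZODB.DB(ZODB.FileStorage.FileStorage('data.fs'))
conn = db.open()
root = conn.root()                            # a persistent mapping

root['clients'] = {'dan': Client('dan')}      # stored as ordinary Python objects
root['clients']['dan'].balance += 10          # attribute changes are tracked
transaction.commit()                          # or transaction.abort() to roll back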
 

Tim Peters

[Terry Hancock]
...
I realize you've probably already made a decision on this, but this sounds
like a classic argument for using an *object DBMS*, such as ZODB: It
certainly does support transactions, and "abstracting the data into tables"
is a non-issue as ZODB stores Python objects more or less directly (you
only have to worry about ensuring that objects are of "persistent" types
-- meaning either immutable, or providing persistence support explicitly).

ZODB can store/retrieve anything that can be pickled, regardless of
whether it derives from Persistent. There are various space and time
efficiencies that can be gained by deriving from Persistent, and ZODB
automatically notices when a Persistent object mutates, but that's
about it. Andrew Kuchling's intro to ZODB is still a good read
(Andrew doesn't work on it anymore, but I take sporadic stabs at
updating it):

http://www.zope.org/Wikis/ZODB/FrontPage/guide/index.html
 
