More than pessimistic record locking needed...

master

Actually, it is not only record locking that I need, and nobody seems to
describe this.

Imagine the following scenario.
There is a database with, say, 10,000 records of unvalidated data. And
there is an intranet ASP.NET application which is an interface to the
database in question... and there are 100 pretty girls eager to... uhmm...
use the application and validate the data, of course ;-).

The task is to enable the data validation in such a way that two girls NEVER
get the same record for validation (that would just be a waste of time).
When a girl presses 'next' or something like that, she gets access to a
record that is unvalidated (and not currently being validated by anyone
else). This is more than pessimistic locking, actually. And I missed the
most important part - all this has to work smoothly ;-).

I am thinking about a few solutions to this problem. I developed one once,
actually, based on DB transactions with the highest isolation level, but it
was not very smooth. Anyway, if anybody can give me good advice or direct me
to a good article, I will be very grateful.

Keep in mind that I might use synchronisation NOT on the DB level, but on
the level of the web application... maybe this would be faster?

Specifically, I think I might keep a specialised locking object, say
"theLocker", in the application cache, holding a hashtable of locked
records and the users that locked them, and incorporate some logic on this
hashtable.

I would use something like (I am writing in C#):

lock (theLocker) {
    // lock / unlock the desired record here
}
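
Fleshed out a little, the idea might look like this (just a sketch - the
hashtable layout and the class and method names are assumptions, nothing
final):

using System.Collections;

class RecordLockManager
{
    static readonly object theLocker = new object();
    static readonly Hashtable lockedRecords = new Hashtable();  // record ID -> user name

    // Returns true if the record was free and is now locked for userName.
    public static bool TryLockRecord(int recordId, string userName)
    {
        lock (theLocker)
        {
            if (lockedRecords.ContainsKey(recordId))
                return false;   // somebody else is already validating it
            lockedRecords[recordId] = userName;
            return true;
        }
    }

    public static void UnlockRecord(int recordId)
    {
        lock (theLocker)
        {
            lockedRecords.Remove(recordId);
        }
    }
}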

This is just an idea... what do you think about it?

Any comments will be welcome
DW.
 
John Timney (MVP)

How about a simple suggestion!

Have you thought about each client marking a record as taken? So no select
yet - just an update to say "that record is mine for however long I need
it". The client can generate a GUID or timestamp to place inside the
record, and any record with an existing GUID is off limits. When your next
client wants a record for validation, they update the next record that is
not allocated a GUID with their own GUID. That way they can select that
record as many times as they like with no interference from any other
client, and it matters little that your client may be in a world
disconnected from your database. Of course you'll need to think about
releasing records from dead clients, but that's a different problem.
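
Something like this, for illustration (a sketch only - the tData table, its
Owner column and the retry idea are my assumptions, not tested code):

using System;
using System.Data;
using System.Data.SqlClient;

class RecordClaimer
{
    // Claims the first free, unvalidated record by stamping it with our GUID.
    // Returns the claimed record's ID, or -1 if another client won the race
    // (in which case the caller simply retries).
    public static int ClaimNextRecord(string connectionString)
    {
        Guid token = Guid.NewGuid();   // this client's marker
        const string sql = @"
            UPDATE tData SET Owner = @token
            WHERE Owner IS NULL        -- re-checked atomically by the UPDATE
              AND ID = (SELECT MIN(ID) FROM tData
                        WHERE Validated = 0 AND Owner IS NULL);
            SELECT ID FROM tData WHERE Owner = @token;";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.Add("@token", SqlDbType.UniqueIdentifier).Value = token;
            conn.Open();
            object id = cmd.ExecuteScalar();
            return (id == null || id == DBNull.Value) ? -1 : (int)id;
        }
    }
}

Because the claim is a single atomic UPDATE, two clients can never both
stamp the same row - the loser updates zero rows and just asks again.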
 
master

Provided I understood you correctly, I had in fact already tried a similar solution.

All my tables have an 'ID' field, so I added an additional tLocks table
whose records contain a user name, a table name and a record ID. Handling
the 'next' button was then something like:

- begin a transaction,
- delete the previous tLocks record for the current user, if it existed,
- select the first not-yet-validated record from the required table for
  which no record exists in tLocks,
- add a new tLocks record for the current user, the requested table and the
  appropriate record ID,
- commit the transaction.

Looks OK, doesn't it?
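
In T-SQL it was roughly the following batch, sent from the C# code (a
reconstruction of the steps above - 'tData' and the column names stand in
for my real ones):

class NextHandler
{
    // Note the gap between the SELECT and the INSERT - as it turned out,
    // that is exactly where two users could race.
    public const string NextRecordSql = @"
        BEGIN TRANSACTION;

        DELETE FROM tLocks WHERE UserName = @user;

        DECLARE @id int;
        SELECT @id = MIN(d.ID)
        FROM tData d
        WHERE d.Validated = 0
          AND NOT EXISTS (SELECT * FROM tLocks l
                          WHERE l.TableName = 'tData' AND l.RecordId = d.ID);

        INSERT INTO tLocks (UserName, TableName, RecordId)
        VALUES (@user, 'tData', @id);

        COMMIT TRANSACTION;";
}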

Well, with the default transaction isolation level, it was very common for
two users to end up locking the same record. Changing the level to the
highest possible value seemed to help... for 2-4 users. When 10 users used
the database simultaneously, they again happened to get the same records at
the same time :-/.

Well, I do not understand why this happened, but it did ;-).

Frankly speaking, I would be glad to read an article offering a few
possible solutions to choose from. Or maybe a description of some system
that was tested and worked under real stress...

DW.
 
master

Adding timeout checking to my solution seems quite easy - a datetime field
should be added to the tLocks table, and each record would be inserted with
the current time. Each transaction would then start by deleting records
that have timed out, i.e. the records whose datetime field is older than
the session timeout...
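
The cleanup itself would then be the first statement of the batch (a
sketch, assuming the tLocks layout above, a new LockedAt datetime column
and the default 20-minute session timeout):

// run at the start of each 'next' transaction:
const string purgeSql = @"
    DELETE FROM tLocks
    WHERE LockedAt < DATEADD(minute, -20, GETDATE());";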

That would be nice, provided the whole solution worked ;-). But the users
reported it did not :-/.

This is why I am thinking about handling the locking not in the DB but in
the web application instead, using a global application cache and a lock
statement. I assume that each HTTP session has a separate thread, am I
right?

Additionally, this solution should be a lot faster than blocking the
database with highly isolated transactions...

DW
 
bruce barker (sqlwork.com)

you don't say which database you are using, but you are running into
concurrency issues. if two (or more) users run your sql at the same time,
they both get the same unvalidated record, and both see it's not in the
locks table, then both try to add it. the transaction logic only guarantees
that if the insert fails, the delete will be rolled back.

you should move the select and insert into one statement (though depending
on the database vendor this may not work). use a table lock or set the
transaction isolation level to serializable. see the lock options for your
vendor. what you need to do is take an exclusive lock at the start of your
transaction to prevent others from reading the table during your
transaction.
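
for example, on sql server the whole claim can be one insert..select with
lock hints (a sketch - table/column names assumed from your earlier post;
UPDLOCK keeps each candidate row locked until the transaction ends,
READPAST makes competing callers skip locked rows instead of waiting):

using System.Data;
using System.Data.SqlClient;

class Claimer
{
    // one statement = no window between the select and the insert.
    // serializable alone doesn't cure it: both readers take shared locks,
    // then deadlock on the insert - hence the UPDLOCK hint.
    const string claimSql = @"
        INSERT INTO tLocks (UserName, TableName, RecordId)
        SELECT TOP 1 @user, 'tData', d.ID
        FROM tData d WITH (UPDLOCK, READPAST)
        WHERE d.Validated = 0
          AND NOT EXISTS (SELECT * FROM tLocks l
                          WHERE l.TableName = 'tData' AND l.RecordId = d.ID);";

    public static bool ClaimNext(SqlConnection conn, string user)
    {
        using (SqlCommand cmd = new SqlCommand(claimSql, conn))
        {
            cmd.Parameters.Add("@user", SqlDbType.NVarChar, 50).Value = user;
            return cmd.ExecuteNonQuery() == 1;   // false: nothing left to claim
        }
    }
}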

-- bruce (sqlwork.com)
 
John Timney (MVP)

master said:
> Adding timeout checking to my solution seems quite easy - a datetime
> field should be added to the tLocks table, and each record would be
> inserted with the current time. Each transaction would then start by
> deleting records that have timed out, i.e. the records whose datetime
> field is older than the session timeout...

Potential progress then!
> That would be nice, provided the whole solution worked ;-). But the users
> reported it did not :-/.
> This is why I am thinking about handling the locking not in the DB but in
> the web application instead, using a global application cache and a lock
> statement. I assume that each HTTP session has a separate thread, am I
> right?

Each session has its own thread-safe object, but I'm not sure holding
thousands of records in application or session state is a good idea. That's
a lot of work for a collection when a database is probably better optimised
for this type of processing.
> Additionally, this solution should be a lot faster than blocking the
> database with highly isolated transactions...

Yes - perhaps. More than one query can return a select, so you can almost
guarantee that at some point on a heavy database you will get concurrency
issues, and you would essentially be recreating DB functionality in
ASP.NET. With the correct locking inside a transaction, SQL Server places
an exclusive lock on the page (or the row), so unique marking should be
possible - leaving you free to select exclusively and not worry about
update problems while work is in progress, and reducing your memory
overhead in ASP.NET.
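
For example (a sketch - XLOCK and ROWLOCK are the SQL Server hints for
exactly that kind of exclusive row lock, and the table layout is assumed
from earlier in the thread):

// inside BEGIN TRAN ... COMMIT: the XLOCK hint takes an exclusive lock on
// the selected row, so a competing transaction waits on it and then skips
// it once your tLocks insert has been committed.
const string exclusiveSelect = @"
    SELECT TOP 1 d.ID
    FROM tData d WITH (XLOCK, ROWLOCK)
    WHERE d.Validated = 0
      AND NOT EXISTS (SELECT * FROM tLocks l WHERE l.RecordId = d.ID);";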

If I were you, I'd pop into one of the SQL Server newsgroups and ask their
advice.
 
master

>> Each transaction would then start by deleting records that have timed
>> out...
> Potential progress then!

What do you mean? Could you comment more on this?

> Each session has its own thread-safe object, but I'm not sure holding
> thousands of records in application or session state is a good idea.
> That's a lot of work for a collection when a database is probably better
> optimised for this type of processing.

Actually, there will never be THAT many. I do not think more than 100 users
will use the application at the same time.

> If I were you, I'd pop into one of the SQL Server newsgroups and ask
> their advice.

I did ;-)

DW
 
master

> you don't say which database you are using, but you are running into
> concurrency issues.

MS SQL Server.

> if two (or more) users run your sql at the same time, they both get the
> same unvalidated record, and both see it's not in the locks table, then
> both try to add it. the transaction logic only guarantees that if the
> insert fails, the delete will be rolled back.

Yes, that was exactly the behaviour I observed while using the default
isolation level.
However, even when I changed it to 'serializable', some users reported that
they were still getting the same records when the system was under stress.
I do not understand why... maybe I just do not understand the concept of a
transaction well enough...

> what you need to do is take an exclusive lock at the start of your
> transaction to prevent others from reading the table during your
> transaction.

Frankly speaking, I do not know how to lock a table in MS SQL Server,
though I searched the Books Online for this...
I am much more an application programmer than a DB programmer...

DW
 
John Timney (MVP)

master said:
> What do you mean? Could you comment more on this?

I only mean that this at least appears to be something you have cracked and
therefore don't have to worry about!
> Actually, there will never be THAT many. I do not think more than 100
> users will use the application at the same time.

If that is the case, then for so few users I would certainly look at
updating the field as I suggested, and keeping your solution very simple
indeed. If you can't get it working with that simple approach, then the
application object and its lock method would help you maintain some
concurrency independence, I expect. Don't even bother with session objects.
> I did ;-)

Hope they offered some good advice, there's some very technical people in
there.


Regards

John Timney (MVP)
 
master

>> Each transaction would then start by deleting records that have timed
>> out...
> I only mean that this at least appears to be something you have cracked
> and therefore don't have to worry about!

Now I understand even less than before ;-)
I thought you had found some danger in the strategy I presented... one that
I am not aware of. If so, I would like to know about it.

> Hope they offered some good advice, there's some very technical people in
> there.

Well, they told me what to search for as far as the locking of data is
concerned... it does not seem to be well documented in the SQL Server Books
Online. In particular, there are no examples...

DW.
 
