Atomic read/writes in SQL Server

JohnB

I'm about to start working on an (Internet) web server whose processing is
pretty routine (note that all users will be anonymous). Web forms are
reading/writing to the (SQL Server) DB and everything is hooked up using
standard ASP.NET techniques (web controls wired to my business layer objects
calling into the DB using my data layer objects). Do I need to deal with
mutexes or critical sections for updating single records from a given table,
or even single records in different tables? I'll be applying transaction
processing for the latter case and optimistic concurrency in general, but
what about two users trying to read/write at the same time? Can they clobber
each other in any way I need to be aware of? What about cascading updates
and deletes? Thanks for your help.
 
sloan

Google:

ASP.NET Optimistic Locking (which will usually go with timestamp/rowversion
solutions)
ASP.NET Pessimistic Locking

There are a few ways. I still rely on T-SQL BEGIN TRAN / COMMIT TRAN /
ROLLBACK TRAN for my ASP.NET applications.
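
A minimal sketch of that pattern (SQL Server 2005 and later; the procedure,
table, and column names here are made up for illustration):

-- Hypothetical procedure: two updates that must succeed or fail together.
CREATE PROCEDURE dbo.TransferFunds
    @FromAccount int,
    @ToAccount   int,
    @Amount      money
AS
BEGIN TRY
    BEGIN TRAN

    UPDATE dbo.Accounts
    SET    Balance = Balance - @Amount
    WHERE  AccountID = @FromAccount

    UPDATE dbo.Accounts
    SET    Balance = Balance + @Amount
    WHERE  AccountID = @ToAccount

    COMMIT TRAN
END TRY
BEGIN CATCH
    -- Roll back both updates if either one fails.
    -- (A real procedure would also re-raise the error here.)
    IF @@TRANCOUNT > 0 ROLLBACK TRAN
END CATCH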

I would get the EnterpriseLibrary.Data (3.1 or 4.1) and call stored
procedures.
But that's me, there are different methods.

If you have SQL Server 2008, you can also look at the MERGE command (for UPSERT-style operations).
http://pratchev.blogspot.com/2008/03/upsert-and-more-with-merge.html
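
A hedged sketch of what such an upsert looks like (the table name is
invented, and @CustomerID/@Name would be procedure parameters):

-- Update the row if it exists, insert it otherwise, in one atomic statement.
MERGE dbo.Customers AS tgt
USING (SELECT @CustomerID AS CustomerID, @Name AS Name) AS src
   ON tgt.CustomerID = src.CustomerID
WHEN MATCHED THEN
    UPDATE SET Name = src.Name
WHEN NOT MATCHED THEN
    INSERT (CustomerID, Name) VALUES (src.CustomerID, src.Name);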
 
Erland Sommarskog

JohnB said:
I'm about to start working on an (Internet) web server whose processing
is pretty routine (note that all users will be anonymous). Web forms are
reading/writing to the (SQL Server) DB and everything is hooked up using
standard ASP.NET techniques (web controls wired to my business layer
objects calling into the DB using my data layer objects). Do I need to
deal with mutexes or critical sections for updating single records from
a given table, or even single records in different tables? I'll be
applying transaction processing for the latter case and optimistic
concurrency in general, but what about two users trying to read/write at
the same time? Can they clobber each other in any way I need to be aware
of? What about cascading updates and deletes? Thanks for your help.

SQL Server will take care of locking, so two users cannot update the same
row simultaneously, nor can a user read a partially updated row or anything
like that. If you perform updates to several tables that need to be atomic,
you need to define transactions for those.

Still, there are some tricky parts: what if user1 reads a row, user2 reads
the same row, user1 then updates it, and next user2 updates it, but from
stale data? This is usually dealt with using optimistic concurrency, for
instance with timestamp (a.k.a. rowversion) columns.
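
A sketch of that approach (the table and parameter names are made up): the
client reads the rowversion value together with the row, and the UPDATE
only succeeds if no one has changed the row since.

-- Assumes a table such as:
-- CREATE TABLE dbo.Orders (OrderID int PRIMARY KEY,
--                          Status  varchar(20) NOT NULL,
--                          RowVer  rowversion)
-- @OrderID, @NewStatus and @RowVer would be procedure parameters.

UPDATE dbo.Orders
SET    Status = @NewStatus
WHERE  OrderID = @OrderID
  AND  RowVer  = @RowVer          -- the value read together with the row

IF @@ROWCOUNT = 0
   RAISERROR('The row was changed by another user.', 16, 1)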

There are also more intricate scenarios where you need to make sure that
no one changes the data you have read, but they are less common.
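
For such cases, one option is a pessimistic approach with locking hints; a
sketch with made-up names (@ProductID would be a procedure parameter):

DECLARE @Quantity int

BEGIN TRAN

-- UPDLOCK + HOLDLOCK takes an update lock and holds it until commit,
-- so no one can change the row between our read and our write.
SELECT @Quantity = Quantity
FROM   dbo.Stock WITH (UPDLOCK, HOLDLOCK)
WHERE  ProductID = @ProductID

-- ... business logic based on @Quantity goes here ...

UPDATE dbo.Stock
SET    Quantity = @Quantity - 1
WHERE  ProductID = @ProductID

COMMIT TRAN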

Mutexes or critical sections are nothing you should use, not least
because they are client-side devices, so they will not protect you from
what other client processes do.

--
Erland Sommarskog, SQL Server MVP, (e-mail address removed)

Links for SQL Server Books Online:
SQL 2008: http://msdn.microsoft.com/en-us/sqlserver/cc514207.aspx
SQL 2005: http://msdn.microsoft.com/en-us/sqlserver/bb895970.aspx
SQL 2000: http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
 
JohnB

SQL Server will take care of locking, so two users cannot update the same
row simultaneously, nor can a user read a partially updated row or anything
like that. If you perform updates to several tables that need to be atomic,
you need to define transactions for those.

Thanks. That's what I assumed but needed confirmation.
Still, there are some tricky parts: what if user1 reads a row, user2 reads
the same row, user1 then updates it, and next user2 updates it, but from
stale data? This is usually dealt with using optimistic concurrency, for
instance with timestamp (a.k.a. rowversion) columns.

Yes, I'm aware of this and will look after it accordingly.
There are also more intricate scenarios where you need to make sure that
no one changes the data you have read, but they are less common.

This is where it starts to sound tricky. If user1 deletes a parent record
with cascading updates/deletes for instance (on its children), what happens
to those children if others are simultaneously trying to read/write to them
as well? Presumably you need to start a transaction for this even though
you're directly touching the parent record only (to roll back changes to the
parent *and* its children in case of failure - is this correct?), but what
about synchronization with other clients? Will a transaction take care of
any synchronization problems? I'm assuming so, or most programmers would
have a lot of problems on their hands (since it's difficult to get this
logic right yourself).
Mutexes or critical sections are nothing you should use, not least
because they are client-side devices, so they will not protect you from
what other client processes do.

I was actually referring to a common set of read/write routines on the
server through which all client DB activity would be routed. I didn't think
this was necessary, and it's likely not (based on what you've now said).

Thanks for your help (it's appreciated).
 
Erland Sommarskog

JohnB said:
This is where it starts to sound tricky. If user1 deletes a parent
record with cascading updates/deletes for instance (on its children),
what happens to those children if others are simultaneously trying to
read/write to them as well? Presumably you need to start a transaction
for this even though you're directly touching the parent record only

So the assumption here is that the FKs are set up with ON DELETE CASCADE?

If they are not, you will need multiple statements, and you should have
a user-defined transaction.

If there are cascading FKs and you only need a single DELETE statement,
SQL Server will define a system transaction. A statement in SQL Server
is always atomic. But it is never wrong to add your own transaction.
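
For reference, a sketch of what such a cascading FK looks like (the tables
are made up); a single DELETE of the parent then removes the children in
the same atomic statement:

CREATE TABLE dbo.Parent (
    ParentID int NOT NULL PRIMARY KEY
)

CREATE TABLE dbo.Child (
    ChildID  int NOT NULL PRIMARY KEY,
    ParentID int NOT NULL
        REFERENCES dbo.Parent (ParentID) ON DELETE CASCADE
)

-- One atomic statement: the child rows go with the parent.
-- (@ParentID would be a procedure parameter.)
DELETE dbo.Parent WHERE ParentID = @ParentID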

Now, as to what happens if there are simultaneous read operations, it gets
more complex, not least because there is something called transaction
isolation level that you can choose.

In the default mode, READ COMMITTED, I will have to admit that I don't
know exactly what will happen, because I have not studied cascading
constraints very carefully. If you were to run the deletes yourself,
a reader may be able to read the parent, although you have deleted some
of the children. With cascading deletes, it is possible that SQL Server
takes out an exclusive lock on the parent row first thing, so readers
trying to read the parent would be blocked until the delete has completed.
As for the children, I would expect SQL Server to delete these first,
and the parent last (but I have not verified this), which is why it would
not be possible to see any dangling children.

Now, there is a database setting that is fairly popular, known as READ
COMMITTED SNAPSHOT ISOLATION. In this mode, readers read data from the
version store if a row is blocked. This means that a reader that accesses
any of the parent or the children while the delete is going on will see
the entire tree as it was before the DELETE started.
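
The setting is enabled per database; a sketch (the database name is a
placeholder):

-- Requires that there are no other active connections in the database,
-- or the ALTER will block (WITH ROLLBACK IMMEDIATE can force it through).
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON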

Then you can read with NOLOCK, in which case you will see everything in
the flux of the moment. Don't go there.




--
Erland Sommarskog, SQL Server MVP, (e-mail address removed)

Links for SQL Server Books Online:
SQL 2008: http://msdn.microsoft.com/en-us/sqlserver/cc514207.aspx
SQL 2005: http://msdn.microsoft.com/en-us/sqlserver/bb895970.aspx
SQL 2000: http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
 
JohnB

So the assumption here is that the FKs are set up with ON DELETE CASCADE?

Yes
If they are not, you will need multiple statements, and you should have
a user-defined transaction.

If there are cascading FKs and you only need a single DELETE statement,
SQL Server will define a system transaction. A statement in SQL Server
is always atomic. But it is never wrong to add your own transaction.

Now, as to what happens if there are simultaneous read operations, it gets
more complex, not least because there is something called transaction
isolation level that you can choose.

In the default mode, READ COMMITTED, I will have to admit that I don't
know exactly what will happen, because I have not studied cascading
constraints very carefully. If you were to run the deletes yourself,
a reader may be able to read the parent, although you have deleted some
of the children. With cascading deletes, it is possible that SQL Server
takes out an exclusive lock on the parent row first thing, so readers
trying to read the parent would be blocked until the delete has completed.
As for the children, I would expect SQL Server to delete these first,
and the parent last (but I have not verified this), which is why it would
not be possible to see any dangling children.

Now, there is a database setting that is fairly popular, known as READ
COMMITTED SNAPSHOT ISOLATION. In this mode, readers read data from the
version store if a row is blocked. This means that a reader that accesses
any of the parent or the children while the delete is going on will see
the entire tree as it was before the DELETE started.

Then you can read with NOLOCK, in which case you will see everything in
the flux of the moment. Don't go there.

Thanks for clarifying things. From what you're telling me then, with
cascading deletes handled at the DB level itself (or cascading updates of
FKs to NULL or the default value), any single UPDATE or DELETE statement
is atomic, so there's no need to worry. That is, I simply need to apply
optimistic or pessimistic concurrency whenever my app requires it (usually
the former for most), as well as transactions when performing *multiple*
UPDATE or DELETE statements (not required for a single UPDATE or DELETE),
and everything should work. There will be no problems IOW if someone else is
simultaneously trying to UPDATE, DELETE or even read the parent record
and/or any of its children (i.e., no unexpected collisions, undefined timing
problems, orphaned children, etc.). This all assumes the cascades are being
handled by the DB itself of course (I'm not rolling my own, that is).

I'll have to do more research to get a better grip on things because how
this all works under the hood is still fuzzy. On the surface it seems there
are an enormous number of potential problems when many tables are linked via
FKs and various cascading relationships. People can be simultaneously
inserting, updating and reading records from these tables all over the
place, and it appears (at least superficially) extremely complex (i.e.,
demanding on the programmer to account for some of these scenarios
themselves). I'll have to give it some deeper thought, but you've been very
helpful. Thanks again.
 
