March 2, 2012 at 3:14 pm
I realized the hard way that just wrapping your statements in a transaction block doesn't always suffice. Below is what I have; I want to know whether the unintended behavior is because of the isolation level on my database.
Pseudo code that I have while adding a record to a table (rendered here as T-SQL; the table, criterion, columns, and values are placeholders):

BEGIN TRANSACTION;

    -- check whether a record matching the criterion already exists
    IF NOT EXISTS (SELECT 1 FROM <table> WHERE <criterion>)
    BEGIN
        -- no previous record found, so add the new one
        INSERT INTO <table> (<columns>) VALUES (<values>);
    END

COMMIT TRANSACTION;
In the database I ended up with the new record even though a previous record existed. The new and previous rows were added within a span of one second (let's say the user double-clicked). It is as if the check for the previous record's existence never ran.
Can anyone please confirm that this behavior is because my database is at 'read committed'?
March 3, 2012 at 4:58 am
Not just because of the isolation level. Read committed releases the shared locks on rows as soon as the read completes, but more importantly read locks are shared, so two sessions can check for the same row at the same time, both see that it's not there, and then both insert it. You could switch to repeatable read or serialisable, but then your code would be very susceptible to deadlocks.
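To see why serialisable invites deadlocks with this pattern, consider a minimal sketch (the dbo.Users table and Email column are hypothetical) where two sessions run the same code at the same time:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
    -- Each session takes a shared key-range lock covering this key,
    -- so both pass the existence check.
    IF NOT EXISTS (SELECT 1 FROM dbo.Users WHERE Email = 'a@example.com')
    BEGIN
        -- Each INSERT now needs an exclusive lock that the other
        -- session's shared range lock blocks: a deadlock, and one
        -- session gets rolled back as the victim.
        INSERT INTO dbo.Users (Email) VALUES ('a@example.com');
    END
COMMIT TRANSACTION;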
A better pattern (that doesn't even need a transaction) would be

INSERT INTO <table> (<columns>)
SELECT <values>
WHERE NOT EXISTS (<check for the row existing>)

and, to be paranoid, put an UPDLOCK hint in the EXISTS subquery.
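To make that concrete, here is a hypothetical version (the dbo.Users table and its columns are made up); note that HOLDLOCK is added alongside UPDLOCK here, going slightly beyond the hint suggested above, so that the key range is locked even when no matching row exists yet:

INSERT INTO dbo.Users (Email, DisplayName)
SELECT 'a@example.com', 'Some User'
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.Users WITH (UPDLOCK, HOLDLOCK)  -- lock the key range
    WHERE Email = 'a@example.com'
);

The UPDLOCK makes the read take an update lock, so two sessions can't both hold a shared lock on the same key, and HOLDLOCK keeps a key-range lock for the duration of the transaction (here, the single statement). A second session running the same statement waits, then finds the row already present and inserts nothing.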
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 3, 2012 at 7:00 am
Got it. Thank you very much.