May 8, 2009 at 9:58 am
I'll accept the devil's advocate position. But can you make it fail when the optimistic concurrency features are not in use?
At this point, we actually don't know what transaction isolation level is being used by the OP as I don't recall that being stated. I do recall SERIALIZABLE being mentioned, but not necessarily stated that it was used.
May 8, 2009 at 10:10 am
Paul, nice test. I was wondering if that was a hole, and you've shown it.
I still think the best thing is to run the insert, then check for a rowcount of 1 (or 2) and then commit or rollback (and retry).
That's likely the only way to ensure no duplicates occur.
But I'm concerned about a design that potentially allows duplicates. It seems that there might be a better way to design this if we understood how the IDs are being selected.
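A minimal sketch of that insert-then-check idea (the table, column, and variable names here are invented for illustration; the real schema hasn't been posted in the thread):

```sql
-- Hypothetical schema: dbo.Ids(Id INT NOT NULL); @NewId already set.
BEGIN TRANSACTION;

INSERT INTO dbo.Ids (Id) VALUES (@NewId);

-- If a concurrent session inserted the same Id, undo ours and let the
-- caller retry; otherwise keep it.
IF (SELECT COUNT(*) FROM dbo.Ids WHERE Id = @NewId) > 1
    ROLLBACK TRANSACTION;
ELSE
    COMMIT TRANSACTION;
```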
May 8, 2009 at 10:11 am
Lynn Pettis (5/8/2009)
I'll accept the devil's advocate position. But can you make it fail when the optimistic concurrency features are not in use? At this point, we actually don't know what transaction isolation level is being used by the OP as I don't recall that being stated. I do recall SERIALIZABLE being mentioned, but not necessarily stated that it was used.
Sure - though ALLOW_SNAPSHOT_ISOLATION is the optimistic concurrency isolation level. READ_COMMITTED_SNAPSHOT is just another isolation level with different characteristics. I know what you mean though: writers not blocking readers and vice-versa is sort-of-optimistic.
I know it was a long post, but I did give several examples where the construct 'fails' (in a totally expected way) without READ_COMMITTED_SNAPSHOT.
It is time-consuming to set up the demo script, and it is 4am here, so I will point the interested reader to the following links in lieu of that:
Craig Freedman:
http://blogs.msdn.com/craigfr/archive/2007/05/22/read-committed-and-updates.aspx
MSDN:
http://msdn.microsoft.com/en-us/library/ms190805.aspx - especially the section on missed and double reads.
Alex Kuznetsov:
Tony Rogerson:
http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/11/16/1345.aspx
Cheers,
Paul
May 8, 2009 at 10:20 am
Steve Jones - Editor (5/8/2009)
Paul, nice test. I was wondering if that was a hole, and you've shown it. I still think the best thing is to run the insert, then check for a rowcount of 1 (or 2) and then commit or rollback (and retry). That's likely the only way to ensure no duplicates occur.
But I'm concerned about the design that potentially allow duplicates.
Thank you!
I agree that the design is good inasmuch as it offers plenty of scope for improvement 😉
Running the insert and then immediately checking for COUNT(*) > 1 is *more* prone to error under read committed than the original approach, as it scans more rows. There is a bigger window of misopportunity.
Whichever way you slice it, there is nothing to prevent concurrent modifications occurring behind your COUNT(*) scan point, nothing to prevent a page-split moving rows from in front of the scan to behind it...and so on.
The *only* way (that springs to mind) to be absolutely sure is to scan at SERIALIZABLE. And that guarantee ends with the transaction.
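For concreteness, a sketch of such a serializable check (the table and variable names are invented for illustration, not taken from the thread):

```sql
-- Hypothetical schema: dbo.Ids(Id INT NOT NULL); @NewId already set.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;

-- The range locks taken by this scan prevent any concurrent insert of
-- @NewId until the transaction ends; the guarantee ends at COMMIT.
IF NOT EXISTS (SELECT 1 FROM dbo.Ids WHERE Id = @NewId)
    INSERT INTO dbo.Ids (Id) VALUES (@NewId);

COMMIT TRANSACTION;
```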
In the real world, of course, we tolerate 'inaccurate' (whatever that means in a massively-concurrent system) aggregate totals, and the odd missed row or double-counted value. I don't want everyone to think I am saying we must all use very high isolation levels in these situations - I'm just saying, and that's all 🙂
Paul
May 8, 2009 at 10:23 am
Apologies if I am spamming here :hehe:
Lynn, yes Jamie did mention SERIALIZABLE - but he said he had looked at it and didn't think it met his requirements (:confused:)
Paul
May 8, 2009 at 10:25 am
For clarity, note that Bob's example also fails if READ_COMMITTED_SNAPSHOT is on.
Are you real sure about that, Paul?
Did you try it?
Would somebody else try it? Because I just created a new DB, and added
ALTER DATABASE SandBox_BobH
SET READ_COMMITTED_SNAPSHOT ON;
It's behaving the same on my machine.
__________________________________________________
Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills
May 8, 2009 at 10:26 am
Paul White (5/8/2009)
Bob Hovious (5/7/2009)
Jamie, are you thinking that you have to somehow code the locks yourself?
Jamie,
At the same time as *not* necessarily recommending this as a solution in this case, there are times where application locks are appropriate.
Check out:
sp_getapplock
sp_releaseapplock
APPLOCK_MODE
APPLOCK_TEST
in Books Online.
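For concreteness, a sketch of the application-lock pattern those procedures support (the resource name, timeout, and surrounding logic are invented for illustration):

```sql
BEGIN TRANSACTION;

DECLARE @result INT;

-- 'IdGenerator' is an arbitrary, made-up resource name; every session
-- that wants to serialize on it must use the same string.
EXEC @result = sp_getapplock
    @Resource    = 'IdGenerator',
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',
    @LockTimeout = 5000;  -- milliseconds

IF @result >= 0  -- 0 = granted, 1 = granted after waiting
BEGIN
    -- ...perform the guarded check-and-insert here...
    COMMIT TRANSACTION;  -- a 'Transaction'-owned lock is released here
END
ELSE
    ROLLBACK TRANSACTION;  -- lock not acquired (timeout, deadlock, etc.)
```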
While not disagreeing with anything that Paul is saying here, there are at least two other workable approaches that I would strongly recommend before wading into the tar pit of explicit application-managed locking. (I believe that both of these have been mentioned, or at least hinted at, here already.)
First, "Optimistic Retry": use Florian's basic query, but wrap it in a TRY...CATCH and, in the CATCH, just try it again. If necessary, code it with some fixed number of retry attempts, possibly with a very small wait in between. I have used this many times with great success.
Second (the "pessimistic" approach), just use SERIALIZABLE isolation. But understand, there's a reason that hardly anyone ever uses it: it can be dog-slow.
My personal preference is almost always for Optimistic-concurrency approaches.
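A sketch of that optimistic-retry idea (names are invented; it assumes a UNIQUE constraint on Id so that a racing duplicate insert raises a trappable error):

```sql
-- Hypothetical schema: dbo.Ids(Id INT NOT NULL UNIQUE); @NewId already set.
DECLARE @attempts INT = 0;

WHILE @attempts < 3
BEGIN
    BEGIN TRY
        INSERT INTO dbo.Ids (Id)
        SELECT @NewId
        WHERE NOT EXISTS (SELECT 1 FROM dbo.Ids WHERE Id = @NewId);
        BREAK;  -- success: either we inserted, or the row already existed
    END TRY
    BEGIN CATCH
        -- A unique-key violation from a racing insert lands here.
        SET @attempts = @attempts + 1;
        WAITFOR DELAY '00:00:00.050';  -- tiny back-off before retrying
    END CATCH;
END;
```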
[font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
Proactive Performance Solutions, Inc. [/font][font="Verdana"] "Performance is our middle name."[/font]
May 8, 2009 at 10:28 am
Paul,
The problem we are having is that the OP is apparently dismissing everything without considering it in context. Yes, I have tested your devil's advocate position, and if you are using the optimistic concurrency options, our solution will fail. We aren't sure what the environment in the OP's case is: are they using snapshot isolation? If so, then yes, more needs to be done to ensure that duplicate data isn't inserted. It is, however, up to the OP to provide us with ALL the relevant information required to provide the best possible answer.
Based on the information given, and some of the testing done (and yes, it hasn't been extensive), we have seen that what was proposed will work under the assumptions we have made.
Lacking information means less than accurate advice.
May 8, 2009 at 10:35 am
Lynn,
Don't get me wrong - I am sure that the NOT EXISTS solution is fine for all general purposes, and it is very similar to the approach I would take.
Nevertheless, my posted comments stand. I gave an example using READ_COMMITTED_SNAPSHOT because that is easy to repro.
My comments below the === separator in the mega-post, and the links I posted show how rows can be missed or double-counted at READ COMMITTED or even REPEATABLE READ.
In fact, READ_COMMITTED_SNAPSHOT is the level of choice for computing aggregates (see Alex's blog link).
My point is that I think the OP has a valid technical concern (whether he realizes it exactly or not). I am also taking the opportunity to highlight aspects of the behaviour of the common isolation levels which are not as widely known as they might be.
I also appreciate that Jamie hasn't necessarily expressed himself as we might wish - but that doesn't mean that we should react in kind...?
I am finding it difficult to express my thoughts clearly, but hopefully you can read between the lines?
Paul
May 8, 2009 at 10:40 am
Bob Hovious (5/8/2009)
For clarity, note that Bob's example also fails if READ_COMMITTED_SNAPSHOT is on.
Are you real sure about that Paul?
Did you try it?
Would somebody else try it? Because I just created a new DB, and added
ALTER DATABASE SandBox_BobH
SET READ_COMMITTED_SNAPSHOT ON;
It's behaving the same on my machine.
I tried here at home and got the results Paul mentioned, duplicate entries. When I turned READ_COMMITTED_SNAPSHOT OFF and turned ALLOW_SNAPSHOT_ISOLATION ON, and ran the updates using READ_COMMITTED transaction isolation level, I got only one row inserted between the two sessions.
With ALLOW_SNAPSHOT_ISOLATION ON, if you instead use SET TRANSACTION ISOLATION LEVEL SNAPSHOT, it fails again: I did get two duplicate rows.
May 8, 2009 at 10:43 am
I must be omitting something then. Lynn could you please send me exactly what you ran?
__________________________________________________
Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills
May 8, 2009 at 10:49 am
Stepping back a moment to the flip side of dealing with difficult users (yes, this post deals with us, i.e. with me).
The best way to "defeat" us is with information. The more we know about the OP's environment, the better answers we can give without being "aggressive". The more we have to guess, the less accurate our answers and responses will be to concerns that aren't expressed well or with sufficient detail.
As my blog indicated, tell us everything we may need to know to best help you. If it is suggested to do some reading or other research, do it and then come back with more questions looking for clarification if that is what is needed. But when you do that, you still need to provide more information so we can provide answers. Don't just dismiss what has been presented without providing something in support.
This is one of the problems with forums: the one-on-one personal interaction isn't there. What we may think is a simple explanation, someone else may take as arrogance. Again, more information will help.
May 8, 2009 at 10:52 am
Bob Hovious (5/8/2009)
Are you real sure about that, Paul? Did you try it?
Would somebody else try it? Because I just created a new DB, and added
ALTER DATABASE SandBox_BobH
SET READ_COMMITTED_SNAPSHOT ON;
It's behaving the same on my machine.
Bob,
Yes I am *really* *really* sure.
Yes I did try it. I am hurt that you think I would post what I did without trying it 😛
Lynn has confirmed, I think.
Not sure what to suggest first, but maybe check:
SELECT is_read_committed_snapshot_on
FROM sys.databases
WHERE [name] = 'SandBox_BobH'
BTW there is a small bug-ette in your code which asks us to select the value from a local variable after it has gone out of scope, IIRC.
Cheers,
Paul
May 8, 2009 at 10:56 am
Lynn Pettis (5/8/2009)
When I turned READ_COMMITTED_SNAPSHOT OFF and turned ALLOW_SNAPSHOT_ISOLATION ON, and ran the updates using READ_COMMITTED...
You need to SET TRANSACTION ISOLATION LEVEL SNAPSHOT to use that level. READ COMMITTED behaves completely as normal when ALLOW_SNAPSHOT_ISOLATION is on.
Under SNAPSHOT (not READ_COMMITTED_SNAPSHOT!) I would expect you to get duplicate rows. I think.
I am going to test that assertion now 🙂
May 8, 2009 at 10:58 am
Lynn Pettis (5/8/2009)
Stepping back a moment, dealing with difficult users...
I agree entirely with your comments there.
Though it is said:
"Let (s)he who is without pork chop, throw the first pork chop"
...or something like that :laugh: