August 19, 2014 at 3:24 am
John Mitchell-245523 (8/19/2014)
TomThomson (8/18/2014)
There are cases where READ UNCOMMITTED isolation level is acceptable: the obvious case is where you know for certain, perhaps because a ban on certain actions at certain times is enforced, that nothing can insert, delete, or update rows in any table that the code running with that isolation level will touch.
But then there would be no locks held that would be incompatible with SELECT statements, and so READ UNCOMMITTED is unnecessary anyway. That's why I ask developers not to use it, full stop.
You forget the extra CPU and memory overhead of acquiring and releasing the locks. I wouldn't worry about reading uncommitted data if, for example, I wanted to count the number of active users on Facebook. NOLOCK would be perfectly fine. The only thing that would "worry" me is the "Error 601: Could not continue scan with NOLOCK due to data movement" exception.
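For that kind of low-stakes approximate count, something like the following sketch is what I have in mind (table and column names are made up purely for illustration):

SELECT COUNT(*) AS ActiveUsers
FROM dbo.Users WITH (NOLOCK)   -- dirty read accepted: an approximate count is fine here
WHERE LastActivityDate >= DATEADD(MINUTE, -15, GETUTCDATE());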
John Mitchell-245523 (8/19/2014)
Raghavendra Mudugal (8/19/2014)
Actually our app has been running for more than a decade now, and not a single user has complained about incorrect data.
And that is just as worrying as if dozens of users had complained. The only thing worse than incorrect data is incorrect data that you don't know about.
The only isolation levels that can guarantee "correct results" are SNAPSHOT and SERIALIZABLE. You don't use them all the time, do you? If you want to write a query to make sure that a double-entry bookkeeping system is in balance, you wouldn't use READ COMMITTED or REPEATABLE READ, since non-repeatable reads or phantom rows would give incorrect results. And at the other end of the scale: if you have a table that logs logon attempts, you wouldn't use SERIALIZABLE to get a count of logons for the current day, as that would block new rows from being inserted.
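A rough sketch of the bookkeeping check (the ledger table name is hypothetical, and it assumes ALLOW_SNAPSHOT_ISOLATION has already been enabled on the database):

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

-- Under SNAPSHOT the statement sees one point-in-time view of the ledger,
-- so concurrent inserts or updates cannot make the totals disagree mid-scan.
SELECT SUM(Debit) - SUM(Credit) AS Imbalance   -- should come back as zero if the books balance
FROM dbo.LedgerEntries;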
In other words, all isolation levels have their pros and cons. Which one you choose depends on the business requirements for each case.
August 19, 2014 at 12:33 pm
John Mitchell-245523 (8/19/2014)
TomThomson (8/18/2014)
There are cases where READ UNCOMMITTED isolation level is acceptable: the obvious case is where you know for certain, perhaps because a ban on certain actions at certain times is enforced, that nothing can insert, delete, or update rows in any table that the code running with that isolation level will touch.
But then there would be no locks held that would be incompatible with SELECT statements, and so READ UNCOMMITTED is unnecessary anyway.
Taking locks still costs some performance even if nothing clashes with them.
Personally, I mandate (where I can, and recommend where I can't mandate) that anyone concerned about the cost of taking locks that can't cause clashes just put TABLOCK hints on every table name in every query instead of using READ UNCOMMITTED, because taking one lock per table is no big deal. I admit, though, that using READ UNCOMMITTED isolation level in those specific circumstances is harmless, and some shops have an anti-hint policy.
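A minimal sketch of that hint-based alternative (the table names are invented for the example):

-- One shared table lock per table instead of many row/page locks;
-- still reads only committed data, unlike READ UNCOMMITTED / NOLOCK.
SELECT o.OrderID, o.OrderDate, c.CustomerName
FROM dbo.Orders AS o WITH (TABLOCK)
JOIN dbo.Customers AS c WITH (TABLOCK)
    ON c.CustomerID = o.CustomerID;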
Tom
August 19, 2014 at 1:22 pm
John Mitchell-245523 (8/19/2014)
And that is just as worrying as if dozens of users had complained. The only thing worse than incorrect data is incorrect data that you don't know about.
Hmm, say you have a form with 10 text boxes and you enter a sequence of numbers, one in each box. You know the start, you know the end, and you can also make your own total of those numbers to cross-check whether the form is calculating the total correctly. When a user logs back in and sees that what they entered is there and all in proper order... what is there that I don't know about?
If he enters "1" in one box... he still sees 1, not 1.01 or 0.99 (as in a dirty read of uncommitted data).
If you can expand on that, it would be great, so I will know what I am missing and can get hold of it from now on, or at least keep a close watch :-).
Thank you. John.
ww; Raghu
--
The first and the hardest SQL statement I ever wrote - "select * from customers" - and I was happy and felt smart.
August 19, 2014 at 1:32 pm
Nils Gustav Stråbø (8/19/2014)
In other words, all isolation levels have their pros and cons. Which one you choose depends on the business requirements for each case.
Thanks, Nils, this makes a lot of sense to me at the moment. (I don't have anything to say specifically about whether NOLOCKs are good or bad; I've just boarded the isolation train again, with new hope of exploration, so one day I will come to an acceptable conclusion.)
ww; Raghu
--
The first and the hardest SQL statement I ever wrote - "select * from customers" - and I was happy and felt smart.
August 19, 2014 at 2:18 pm
Hugo Kornelis (8/18/2014)
PHYData DBA (8/18/2014)
Or you could just do lots of testing to make sure using NOLOCK is not affecting your query results adversely, and move on. Why waste time, money, and effort on things that can be cheaply tested and reliably implemented?
So how would you plan to cover all possible race conditions?
Remember that things like this (both the effect of dirty reads and the chance of the error due to data movement) depend on timing. One millisecond's difference between the timing of two transactions can make a huge difference.
There's simply no way to test this extensively. NOLOCK in production code is a ticking timebomb. Not going to happen on my servers!
BTW: error 601 has nothing to do with NOLOCK. Go have a read of some real knowledge on this.
You have to determine the correct way to test for the likely scenarios in your own application and database code. Asking me how to test for all possible race conditions in all possible code ever is like asking me to tell you the genetic code for blank.
You should know that this is only a definable thing for a live database...
August 19, 2014 at 2:22 pm
TomThomson (8/18/2014)
PHYData DBA (8/18/2014)
1. I got this question "wrong" while selecting the "right" answers sent out in the email the next day.
How interesting. That must explain why 78% of the people answering got it "wrong".
As you said that, I looked to see what the daily email on the 15th said. It listed the things which, if you had checked the corresponding boxes, would have ensured that the system told you that you had got it right. Since I think it unlikely that SqlServerCentral sent you a different version of the daily email, I imagine that you must inadvertently have checked a box that you didn't intend to check.
2. The explanation uses the natural behavior of @@TranCount to "prove" a misprint in the BOL?!
The only behaviour of @@trancount that is interesting in this connection is that it doesn't access any data in the database; I could equally well have chosen select 'x' as a statement that doesn't manipulate any database data. The point is that such a statement at that point executes to completion, so the transaction has not been aborted before that point; the following select statement, which does attempt to manipulate (read) database data, does not execute to completion, so clearly the transaction is aborted when that statement begins, before it delivers any results.
Since the BOL page says that the transaction is aborted before it gets to the point where the extra select is inserted, it's clearly wrong. An extra set transaction isolation level statement after the one that BOL says aborts the transaction can make the transaction run successfully to completion - and there is no magic mechanism to unabort an aborted transaction, so that too makes it clear that the statement in BOL is wrong.
The BOL (and real life use) for @@TRANCOUNT gives several instances where it will not increment even when the actual live transactions change.
Is there any other example you can share that proves your point?
August 19, 2014 at 3:16 pm
PHYData DBA (8/19/2014)
TomThomson (8/18/2014)
PHYData DBA (8/18/2014)
1. I got this question "wrong" while selecting the "right" answers sent out in the email the next day.
How interesting. That must explain why 78% of the people answering got it "wrong".
As you said that, I looked to see what the daily email on the 15th said. It listed the things which, if you had checked the corresponding boxes, would have ensured that the system told you that you had got it right. Since I think it unlikely that SqlServerCentral sent you a different version of the daily email, I imagine that you must inadvertently have checked a box that you didn't intend to check.
2. The explanation uses the natural behavior of @@TranCount to "prove" a misprint in the BOL?!
The only behaviour of @@trancount that is interesting in this connection is that it doesn't access any data in the database; I could equally well have chosen select 'x' as a statement that doesn't manipulate any database data. The point is that such a statement at that point executes to completion, so the transaction has not been aborted before that point; the following select statement, which does attempt to manipulate (read) database data, does not execute to completion, so clearly the transaction is aborted when that statement begins, before it delivers any results.
Since the BOL page says that the transaction is aborted before it gets to the point where the extra select is inserted, it's clearly wrong. An extra set transaction isolation level statement after the one that BOL says aborts the transaction can make the transaction run successfully to completion - and there is no magic mechanism to unabort an aborted transaction, so that too makes it clear that the statement in BOL is wrong.
The BOL (and real life use) for @@TRANCOUNT gives several instances where it will not increment even when the actual live transactions change.
Is there any other example you can share that proves your point?
His example of SELECT 'x' seems like a good one. I tried SELECT 'x' before and SELECT 'y' after, and the first executes but the second does not.
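Something along these lines is what that test might look like (a reconstruction only: the table name is made up, and it assumes the QotD scenario was switching to SNAPSHOT isolation inside an already-open transaction, which the thread doesn't spell out explicitly):

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;  -- the statement BOL claims aborts the transaction
    SELECT 'x';                                -- still executes, so the transaction is not yet aborted
    SELECT TOP (1) * FROM dbo.SomeTable;       -- first statement that reads data: this is where it actually fails
    SELECT 'y';                                -- never reached: the error aborts the batch
ROLLBACK TRANSACTION;                          -- not reached either once the error has fired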
August 19, 2014 at 4:28 pm
PHYData DBA (8/19/2014)
TomThomson (8/18/2014)
PHYData DBA (8/18/2014)
1. I got this question "wrong" while selecting the "right" answers sent out in the email the next day.
How interesting. That must explain why 78% of the people answering got it "wrong".
As you said that, I looked to see what the daily email on the 15th said. It listed the things which, if you had checked the corresponding boxes, would have ensured that the system told you that you had got it right. Since I think it unlikely that SqlServerCentral sent you a different version of the daily email, I imagine that you must inadvertently have checked a box that you didn't intend to check.
2. The explanation uses the natural behavior of @@TranCount to "prove" a misprint in the BOL?!
The only behaviour of @@trancount that is interesting in this connection is that it doesn't access any data in the database; I could equally well have chosen select 'x' as a statement that doesn't manipulate any database data. The point is that such a statement at that point executes to completion, so the transaction has not been aborted before that point; the following select statement, which does attempt to manipulate (read) database data, does not execute to completion, so clearly the transaction is aborted when that statement begins, before it delivers any results.
Since the BOL page says that the transaction is aborted before it gets to the point where the extra select is inserted, it's clearly wrong. An extra set transaction isolation level statement after the one that BOL says aborts the transaction can make the transaction run successfully to completion - and there is no magic mechanism to unabort an aborted transaction, so that too makes it clear that the statement in BOL is wrong.
The BOL (and real life use) for @@TRANCOUNT gives several instances where it will not increment even when the actual live transactions change.
Is there any other example you can share that proves your point?
As I already pointed out, the only thing about select @@trancount that matters in this demonstration/proof is that it doesn't access any data in the database, and any other select statement with that property (I explicitly gave you the example "select 'x'") will do instead - so the BOL error is demonstrated regardless of your obsession with irrelevant (in the current context) properties of @@trancount. I don't know how to demonstrate it any more clearly, because to me this is just elementary logic and I don't understand why you don't follow it.
edit: Does it help if I say that it doesn't matter whether select @@trancount returns 1, or 37, or 0, or -2, or something else? In practice it will return 1, but that is not what the demonstration is based on; what it's based on is that the statement is executed (whatever it returns), so the previous line can't have raised a fatal transaction- and batch-aborting error. But the following line isn't executed whether the select @@trancount line is there or not, so it must be that following line that raises the error.
Tom
August 19, 2014 at 11:45 pm
PHYData DBA (8/19/2014)
BTW: error 601 has nothing to do with NOLOCK. Go have a read of some real knowledge on this.
I suggest you do the same. I know that the error can indicate corruption, but it can also occur during a scan under NOLOCK or READ UNCOMMITTED.
August 20, 2014 at 2:35 am
Raghavendra Mudugal (8/19/2014)
Hmm, say you have a form with 10 text boxes and you enter a sequence of numbers, one in each box. You know the start, you know the end, and you can also make your own total of those numbers to cross-check whether the form is calculating the total correctly. When a user logs back in and sees that what they entered is there and all in proper order... what is there that I don't know about?
If the users do make their own totals and cross-check against them, then yes, you can have a high degree of confidence that your results are correct. I think it's more likely, though, that they're just going to trust that they are - which they're perfectly entitled to do. Nils is right - you're going to have this problem to a greater or lesser extent unless you use SERIALIZABLE or SNAPSHOT.
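For completeness, a sketch of what switching to SNAPSHOT involves (the database name is a placeholder; it has to be enabled once per database before sessions can request it):

ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;   -- one-time setting, per database

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;   -- per session, before the transaction starts
-- queries in this session now read a consistent, point-in-time view of the data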
John