January 25, 2007 at 9:04 am
Oh well, let's just upset a few more people!! I find that extensive use of NOLOCK is mostly there to overcome poor programming by developers who don't understand transactions and isolation (and data integrity), combined with devs/DBAs who don't know how to define effective indexing.
Probably one of the major downsides of SQL Server is how easy it is to do and configure really stupid things as an aid to performance tuning. Sometimes I pine for the obscure Unix command-line syntax I used to tune databases with many years ago - but not for long < grin >
We all have to learn, but this was a dangerous article. NOLOCK may be presented as a hint, but it is really a change of isolation level.
Agreed - a read-only database or read-only filegroups are best if all you're doing is reads.
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
January 25, 2007 at 3:11 pm
Thanks for the reply, DCPeterson!!
We have a data warehousing/reporting project on the horizon. I may make that my first test run of this.
Right now, I don't think it would work for us because of the overhead. Our current systems are set up to simply use two local RAID drives, and until I can convince them to adopt my suggested model or a SAN, I don't think our current hardware could manage the tempdb load.
January 25, 2007 at 3:33 pm
Just a couple of points here:
In general I don't think that Read Committed Snapshot isolation would be all that beneficial in a DW/DSS system since you generally don't have concurrency problems there (many users reading and writing at the same time).
Everything (at least in terms of performance) is relative. So if two local drives are enough, they're enough... Don't let my caution about overhead dissuade you from trying it. RCS is VERY easy to test out, and easy to undo (see the sketch at the end of this post). Also, don't get trapped into thinking that SAN technology is the only way (or even a guaranteed way) of achieving high IO throughput. SANs are nothing magical; IO performance comes down to a few factors regardless of the buzzwords involved:
1. The speed of the disk(s)
2. The number of disks
3. RAID configuration
4. Connection latency and protocol overhead
5. Cache
That's not to say that file placement etc... isn't important for good system performance, but when looking at raw IO system capacity those are the big things regardless of whether you are looking at a single ATA drive or a multi-million dollar SAN.
I have personally seen a SAN configuration that had lower throughput than my laptop's little ATA drive. And no it wasn't due to heavy contention, it was just setup very poorly and it was using iSCSI connections and SATA drives instead of Fibre Channel.
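To back up the point about RCS being easy to try and undo, here's a minimal sketch (the database name is just a placeholder, not anything from the article):
-- Turn Read Committed Snapshot on for testing; the switch needs no other active connections in the database
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;
-- ...run the workload, watch tempdb usage and throughput...
-- Undo it just as easily if the version-store overhead isn't worth it
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT OFF;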
/*****************
If most people are not willing to see the difficulty, this is mainly because, consciously or unconsciously, they assume that it will be they who will settle these questions for the others, and because they are convinced of their own capacity to do this. -Friedrich August von Hayek
*****************/
January 26, 2007 at 7:14 am
My view is that if you are not familiar with a subject you should not write articles on it.
NOLOCK uses less IO because it reads a different data structure than a query not using NOLOCK. It can do this because you've effectively said "I don't care what I get back, just give me data."
I agree with Colin that NOLOCK is used more often than not to cover bad database design or, worse, application design.
In your example you filter the SELECT on a date, but have no index on that column.
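For illustration only - the table and column names here are mine, not from the article - the kind of fix I mean is simply a supporting index so the query can seek instead of scanning:
-- Hypothetical: index the date column so the report seeks rather than scanning the whole table
CREATE INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate);
SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '20070101' AND OrderDate < '20070201';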
If you want to increase concurrency, make sure your application reads as little data as possible. If every user is going to do table scans, locking will be the least of your concurrency worries. The amount of data being read will blow out your buffer cache, which means you're performing more physical IO and so increasing query times; it will max out the memory bus on your CPUs because of the amount of data each query has to process, and ultimately bring your concurrency to the floor.
NOLOCK should only be used in circumstances where you just don't give a damn about your data consistency and can't be arsed to figure out why you are considering using NOLOCK in the first place.
Simon Sabin
SQL Server MVP
http://sqlblogcasts.com/blogs/simons
January 26, 2007 at 8:25 am
Also, just because you use NOLOCK, doesn't mean that nothing is locked.
I ran into a nasty issue a few years back when a SELECT statement caused the Pages of an INDEX to be locked, and, since this index was the primary key, it blocked OLTP 😉
You can HACK this too though, with the use of sp_indexoption, but where there's smoke there's fire, so we redesigned the application that was doing the select.
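For the curious, the sp_indexoption hack I mean is along these lines (the table and index names are made up; disabling page locks is itself a trade-off, which is why we redesigned instead):
-- Illustrative only: stop the engine taking page locks on that index so readers fall back to row locks
EXEC sp_indexoption 'dbo.Orders.PK_Orders', 'DisallowPageLocks', TRUE;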
Don't get me wrong, hints have their place. In general, however, they are needed in only a small number of cases, and shouldn't be used to "fix up" inefficient queries.
I think Kalen Delaney says it best in her book "Inside SQL Server 2000" - "Query hints should be used for special cases—not as standard operating procedure"
January 26, 2007 at 11:12 am
Here is my sad story about NOLOCK. I am a developer who just hired on at a shop that (how do I say this...) has no DBA. I can't believe it either.
One of the biggest problems here is locking and blocking. In an effort to resolve one possible cause we put the statement below at the top of some of our reports that were running long.
--change isolation level so do not lock out other users
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED --lasts for this connection
We did this because putting NOLOCK hints on every table was too much work, not because we really knew what we were doing.
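For anyone following along, the table-by-table alternative we were avoiding would have looked something like this on every query (table names made up):
-- What we'd have had to repeat on every table reference in every report
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o WITH (NOLOCK)
JOIN dbo.Customers AS c WITH (NOLOCK) ON c.CustomerID = o.CustomerID;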
Is there the possibility of the same kind of problem noted by Itzik Ben-Gan regarding the duplicate records? We do realize that we may have introduced the possibility of inconsistency in the records in the reports.
I would love to have the time to redesign the application to avoid the lock-escalation that I believe is causing the problem.
I would also love to see an in-depth article on locking, blocking, lock-escalation, monitoring for these, magic for preventing them without major redesign, etc… as well as the time and skills to be able to use this information to help with tuning our production database. Or I would love to see a real DBA walk thru the door!
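For what it's worth, even a quick check like this would at least show who is blocking whom (a rough sketch; sys.dm_exec_requests is SQL Server 2005 only - on 2000 the BlkBy column of sp_who2 gives similar information):
-- Quick look at current blocking: which sessions are waiting, on what, and who is holding them up
SELECT session_id, blocking_session_id, wait_type, wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;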
January 26, 2007 at 12:20 pm
>> One of the biggest problems here is locking and blocking. In an effort
>> to resolve one possible cause we put the statement below at the top of
>> some of our reports that were running long.
>> We do realize that we may have introduced the possibility of inconsistency
>> in the records in the reports.
>> I would love to have the time to redesign the application to avoid the
>> lock-escalation that I believe is causing the problem.
I completely understand your dilemma. The Powers That Be need to understand that
you can quickly have (somewhat) good data integrity or quickly have (somewhat) good performance, but if you want both it takes TIME.
Given that most companies won't take the time to have performance and integrity, you are stuck with answering the question "just how much data integrity risk can I take on to get quick performance gains".
Whatever you do, clearly communicate the risks to all parties involved. If you are required to "make it fly" but not redesign, make sure those requiring it have been informed in writing of the dangers. And here's one of those dangers that's hard to quantify but is very real: you know those issues that pop up in production but you can never effectively track down? The READ UNCOMMITTED (NOLOCK) isolation level will only increase those incidents, but you'll rarely be able to pin it down as the culprit. That has a cost in client satisfaction, support time, and developer time.
January 26, 2007 at 12:44 pm
Simon, I don't entirely agree with your statement about only writing about things you know about. This isn't a refereed journal, and what I've always thought made the community model work is that you can read the article and then get a great conversation around it that really shows you all sides. In this case I think the author did a decent bit of research on something he found interesting and took the time to write about it, and if nothing else, sparked an interesting discussion. I'd rather encourage people to write about things they learn, did wrong, or are passionate about, and then go from there. Have we done a good job here of showing the author some concerns without fatally discouraging him from writing again?
January 26, 2007 at 12:51 pm
Wonderful perspective. Thanks for passing that on.
January 26, 2007 at 1:10 pm
As a sometime author for this site, I really agree with Andy's sentiment. I hope that Wayne will continue to share his experience with the rest of us. As far as it went, the article was fine; he did a good job of devising a test and sharing his results. However, when I write articles and respond to forums I do try to keep in mind that SSC has readers of all levels of experience, and appropriate caveats should be included when discussing anything that could lead a newbie headlong into trouble.
As someone who's seen plenty of "Stupid SQL Tricks" (and been guilty of perpetrating some as well), I find it pretty easy to get a bit frustrated and think to myself, "Great, all I need is for this to become even more widespread..."
So, Wayne, if you read this, hopefully you will feel disposed to continue to contribute, but maybe just be a bit more cautious and include ample warnings when discussing things that have the potential to be abused by those who don't know any better. It would keep us old curmudgeons off your case, at any rate.
/*****************
If most people are not willing to see the difficulty, this is mainly because, consciously or unconsciously, they assume that it will be they who will settle these questions for the others, and because they are convinced of their own capacity to do this. -Friedrich August von Hayek
*****************/
January 26, 2007 at 1:57 pm
I think y'all are reading into this a little too deeply. Microsoft gives you NOLOCK for a reason. Intuitively, it will give a performance gain in a neural network where multiple users and/or reads/writes are occurring simultaneously. Creating the locks and queuing for competing processes creates overhead any which way you look at it. The exact performance gain is probably debatable, and it depends on the database application. Of course you shouldn't abuse NOLOCK, and the article doesn't suggest anyone should use it indiscriminately. I for one suggest everyone use NOLOCK when possible... ONLY when possible. Using NOLOCK and ROWLOCK has reduced deadlocking in my environment 10-fold. As a DBA/developer, you have to know when locking is necessary.
January 26, 2007 at 2:57 pm
This is a very good point. I like both the article *and* the fact that there is a discussion forum for it. I also have a suggestion for alerting readers to these "hot button" discussions when needed.
I found the discussion forum only after being directed to it when I rated the article. I gave the article 5 stars and then saw the 5 pages of incisive and insightful comments. I wonder now whether I would give it 5 stars again, but since I am even more of a newbie than some of the respondents consider the author to be, I'm happy to see what others thought about whether to use NOLOCK. I probably would not have gone to the forum to see that information without being redirected to it, and I definitely wouldn't have known there is disagreement on the issue had some members not been motivated to send in their thoughts and corrections.
Speaking of ratings, I think that although this is not a refereed journal, people can express their assessments of articles using the rating system. I just wish that we could be made aware of which articles spark a lot of discussion.
Perhaps there could be a modification where an icon like a red flame appears at the start of articles that have sparked "hot topic" discussions of particular importance or passion. Ideally it would be based on an editorial decision, but if needed it could be based on the number of posts (say, more than 2 pages of posts). The icon could also double as a link to the discussion forum, in addition to the regular link already listed.
Thanks again for the fascinating exchange. In some ways it is nerve-racking, because it shows how intricate databases are and how much there is to know even about things that you think the database is handling on its own, but it is good to know that there are people out there who want to share their knowledge and experience (aka hard knocks) with the rest of us.
-------------------
A SQL query walks into a bar and sees two tables. He walks up to them and asks, "Can I join you?"
Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html
January 26, 2007 at 3:12 pm
I do appreciate that what I said was somewhat harsh, and that was for a purpose. I should have added that if the writer is unsure about a topic then they should try to make that clear.
I am all for community content; I just believe there are varying levels at which content is consumed: newsgroups, blogs, articles. I believe the nature of the latter leads readers to believe the content is factually correct and good practice. For this reason the author should be aware of their responsibility.
How many have gone away thinking that NOLOCK will increase performance?
I think my point is made by the many posts here that have commented on how good the article is. Given the thousands that have read the article, how many have gone away unaware of its issues? Only 111 of the 14,000+ who read the article have voted, so only a few will have visited these posts.
Simon Sabin
SQL Server MVP
http://sqlblogcasts.com/blogs/simons
January 26, 2007 at 3:59 pm
Now I'm going back to my old curmudgeon self. I would counter that you are thinking about this far too shallowly. The whole point of database management systems (not just SQL or relational database management systems) is to provide transactional consistency for shared data stores. One of the key elements of transactions is isolation, meaning that any given transaction should not be materially affected by another.
Yes MS "gave us" NOLOCK, but they also provided the default of READ COMMITTED for a reason! It's generally not a good idea to allow dirty reads, lost updates, and duplicate rows... This is true especially in a "neural network" where many users are reading and writing data at the same time. If a user (or process) is going to be updating data it stands to reason that he/she/it should KNOW what the data is both before and after the update, no?
If, as you say, using NOLOCK reduced deadlocking in your environment "10-fold", I suggest that you merely traded one problem (deadlocking, probably due to poor coding) for a bigger problem. That's not the kind of "solution" I would ever accept. You also said that using ROWLOCK was part of the solution, but NOLOCK and ROWLOCK are totally different. NOLOCK is a means to eliminate transactional consistency; ROWLOCK merely governs lock granularity while leaving consistency intact. Incidentally, row-level locking is the default behavior for both 2000 and 2005, so I have to wonder how much of an effect that actually had.
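To make the distinction concrete (the table name below is made up), these two hints do very different things:
-- NOLOCK: reads without taking shared locks, so dirty reads, missed rows, and duplicates are possible
SELECT OrderID, Amount FROM dbo.Orders WITH (NOLOCK);
-- ROWLOCK: still a fully consistent read; it only asks the engine to lock at row granularity
SELECT OrderID, Amount FROM dbo.Orders WITH (ROWLOCK);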
I couldn't agree more with your last statement though, but the answer is "Almost always."
/*****************
If most people are not willing to see the difficulty, this is mainly because, consciously or unconsciously, they assume that it will be they who will settle these questions for the others, and because they are convinced of their own capacity to do this. -Friedrich August von Hayek
*****************/
January 26, 2007 at 4:16 pm
My point is that I made sure everyone who polls data is reading with NOLOCK and all my batch transactions are protected with ROWLOCK. We have a pretty big MSO and there's a lot of shared data. Little tweaks like this go a long, long way.
We should agree that a disclaimer should be stated in the article and that's that.
Thank you for reading and responding to my post.