January 26, 2007 at 4:32 pm
Kevin,
In the following situations, where would you use NOLOCK?
Simon Sabin
SQL Server MVP
http://sqlblogcasts.com/blogs/simons
January 26, 2007 at 5:46 pm
I agree that making readers aware that a topic has a lot of discussion would be interesting, and I emailed Steve already suggesting that if a new article sparks a good debate he could rerun it in the next daily newsletter, flagged somehow. Wouldn't hurt if a few more people emailed him!
Simon, you're right that not everyone reads the comments, and that's both a shame and a loss. The hard part with the current model is that the discussion probably doesn't get going until well into the day, perhaps long after the novice reader has read the article. The suggestion above might help some; the other option would be to get the resident curmudgeons (Don, and ...?) to read new stuff a day in advance and provide some early comments if it needs a big red warning label.
On with the discussion!
January 27, 2007 at 7:36 am
The other aspect is that you are paying for articles, which implies some element of professionalism. I think the one-day review would be good, just as a yes/no option. Doing a whole technical review and edit isn't what you are about, and I don't think it should be. The style can be rough in places, but the content should always be accurate and reflect best practice.
I'll ping Steve to support this, and I would be happy to review content before it goes live.
Simon Sabin
SQL Server MVP
http://sqlblogcasts.com/blogs/simons
January 28, 2007 at 8:08 am
This is a great article. I think the author ran a reasonable test and drew reasonable conclusions. It is not a "dangerous" article by any means. Any IT professional knows to take information found on the net with a grain of salt; even MS posts material that may not be correct. You should use this article, and the great number of comments it drew, as a starting point for further exploration and discovery about the issue. To say that this article is going to cause everyone to go out and add NOLOCKs to their queries is a bit over the top.
In the interest of full disclosure, I run several very busy and large databases that use NOLOCK exclusively. It is in our company coding standards to place NOLOCK on all SELECT queries. Now, I agree that a properly designed database will most likely not need NOLOCK added to the queries that run in it; however, how many of us get to work with a properly designed database? Probably not that many. So NOLOCK may be a good tool to use in certain situations. If your database is largely read-centric, then NOLOCK may not cause much harm. It certainly has not been much of an issue in our environment, and we have been using it in 90% of our sprocs for six years.
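For anyone unfamiliar with what such a coding standard looks like in practice, it just means decorating every read with the table hint. A minimal sketch, with invented table and column names:

-- Hypothetical read-centric query; WITH (NOLOCK) is applied per table
-- and tells SQL Server to read without taking shared locks,
-- which also means it can return uncommitted rows.
SELECT o.OrderID, o.OrderDate, c.CustomerName
FROM dbo.Orders AS o WITH (NOLOCK)
JOIN dbo.Customers AS c WITH (NOLOCK)
    ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20070101';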
January 29, 2007 at 7:52 am
Is this a test?
1. NOLOCK - I know there may be debate about this, but most of the time populating forms or reports is not that sensitive to transactions that are still completing. This is an executive decision and a case-by-case situation.
2. I am unsure of the effect of specifying locking levels on the actual UPDATE statement, but when there is business logic involved and/or triggered processes (i.e., updates that use other tables), you want to use ROWLOCK as much as possible. I tend to use NOLOCK on large semi-static tables, or if I am simply searching an index for existing records I didn't previously insert in that transaction.
3. Almost always NOLOCK. Actually, this reminds me of something I caught a colleague on. He was polling our main table for items to send to a clearing house, every half second. We were deadlocking all over the place. Granted, some of this had to do with the fact that he was searching an unindexed status field, but this was a perfect application for NOLOCK (a sketch follows below).
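To make 2 and 3 concrete, here is roughly what that looks like. A hedged sketch; the table, column, and status values are invented:

-- Answer 3: the clearing-house poll, run every half second.
-- NOLOCK keeps the poll from blocking against (or deadlocking with)
-- writers; an index on Status would help far more, but the hint
-- stops the bleeding immediately.
SELECT TOP (100) ItemID
FROM dbo.MainQueue WITH (NOLOCK)
WHERE Status = 'PENDING';

-- Answer 2: asking the engine to favour row locks on the write itself.
-- ROWLOCK is a request, not a guarantee; SQL Server may still escalate.
UPDATE dbo.MainQueue WITH (ROWLOCK)
SET Status = 'SENT'
WHERE ItemID = 12345;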
January 29, 2007 at 4:47 pm
If you use NOLOCK in the first one, the form is filled with uncommitted data; if that data is then edited and sent back to the database, you are committing data that might have been rolled back. In addition, you might have overwritten data that had previously been written. A similar mechanism applies to 2: how do you know what has changed? If you reread the data, you should make sure the data is consistent, and not use NOLOCK.
With point 3, using NOLOCK on batch reads means you can easily read the same data twice and thus send emails etc. twice. How does that look to your customer? If this is an order confirmation, does the user then think the order has been placed twice? More importantly, you are reading dirty data. What happens if, in the order example, an order has been partly saved and then failed? If your process uses NOLOCK, you could end up sending an email even though the order failed.
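A dirty read is easy to demonstrate with two query windows; a minimal sketch, assuming a made-up Orders table:

-- Session 1: change a row inside a transaction, but don't commit yet.
BEGIN TRAN;
UPDATE dbo.Orders SET Total = 999 WHERE OrderID = 1;

-- Session 2: NOLOCK happily returns the uncommitted 999.
SELECT Total FROM dbo.Orders WITH (NOLOCK) WHERE OrderID = 1;

-- Session 1: roll back. The 999 never existed as far as committed
-- data is concerned, but session 2 has already acted on it.
ROLLBACK TRAN;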
Too many people use NOLOCK without understanding what the impact can be on the business. It's not just financial systems that need to consider transactionality.
You can use NOLOCK, but you are bypassing all the built-in transactional guarantees of SQL Server, so to maintain consistency you will have to do something yourself, which makes everything more complex.
Simon Sabin
SQL Server MVP
http://sqlblogcasts.com/blogs/simons
January 29, 2007 at 5:45 pm
As the former DBA (admin/architect) for a fairly high-traffic website, I can say that the NOLOCK / READUNCOMMITTED hints proved invaluable. By high-traffic, I mean more than 1 million pageviews/day, with many users hitting tables as large as 150,000,000 rows.
Even with indexes created and tuned to maximize response times, NOLOCK still made a huge impact. I was always a bit frustrated that I needed to add these hints to so many statements, and I remember going on many a witch-hunt looking for the source of blocking. To the guy who says "BLOCKING is GOOD": no, blocking is a necessary side effect of performance limitations and data integrity requirements.
Granted, we had to handle financial reporting differently, but for generic content that was stored in SQL tables, the benefits outweighed the problems.
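For anyone who hasn't met the second spelling: NOLOCK and READUNCOMMITTED are synonyms as table hints, and the same behavior can be set once per session rather than decorating every statement. A sketch with an invented table name:

-- Per-table hints: these two statements behave identically.
SELECT * FROM dbo.Content WITH (NOLOCK);
SELECT * FROM dbo.Content WITH (READUNCOMMITTED);

-- Or flip the whole session, so every read on the connection is dirty.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.Content;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- back to the default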
January 30, 2007 at 3:23 am
Chris, I'm not having a go here, but I sometimes tire a little of posts that attempt to "throw" large numbers about. Assuming each of your million page hits required a transaction, that's an average of about 11 transactions/sec, which is chicken feed to a SQL Server database; at that level there shouldn't be issues with locking due to transactions (really).
Problems with 150 million rows? Then partition. Sorry, but design and architecture are a critical part of operations. My view, in a general sort of sense, is that the underlying design/architecture is probably wrong, which is why there is a need for NOLOCK. I will test more some time, but I have not been able to see any time difference in running a query (as a solitary operation) with or without a NOLOCK hint.
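For the "then partition" remark: in SQL Server 2005 that means a partition function and scheme (Enterprise edition only; partitioned views are the Standard edition route). A minimal sketch with invented names and boundaries:

-- Hypothetical range partitioning for a very large table.
CREATE PARTITION FUNCTION pfItems (int)
    AS RANGE RIGHT FOR VALUES (50000000, 100000000);

CREATE PARTITION SCHEME psItems
    AS PARTITION pfItems ALL TO ([PRIMARY]);

CREATE TABLE dbo.Items
(
    ItemID  int          NOT NULL,
    Payload varchar(100) NOT NULL
) ON psItems (ItemID);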
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
January 30, 2007 at 8:12 am
Of course I don't bind myself to 1, 2, 3. If I did, I'd be out of a job. If you're really concerned about data integrity, use server-side cursor recordsets bound to the table. Between the time a user does a select, edits the data, and then sends the update, a batch process or another user may have edited the data. Like I implied before, think it through before you design a process. What are the trade-offs? What are the benefits?
January 30, 2007 at 8:30 am
This would be such a great discussion to have in person ;-)
I think locking should be considered on a statement-by-statement (or at least per-proc) basis, with a good understanding of the risks and benefits. Even with the BEST table design, shouldn't it be part of our mandate to use the fewest resources that will get the job done to the standard required? If locking is good, shouldn't we all be using SERIALIZABLE so that absolutely nothing can go wrong (see the sketch below)? READ COMMITTED really is a pretty good default, but I wonder if it doesn't make us lazy: we don't push the developer into having the conversation about the appropriate locking level, or into having the conversation with the real stakeholders about the advantages/disadvantages of not locking.
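For anyone who wants to see what the far end of that dial feels like, a quick sketch (the table is invented):

-- SERIALIZABLE holds range locks until commit, so nothing can slip
-- in underneath you, and plenty of things will queue up behind you.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
SELECT COUNT(*) FROM dbo.Orders WHERE CustomerID = 42;
-- Any other session inserting into that CustomerID range now waits.
COMMIT TRAN;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- the default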
The second part: let's say Chris really does have a bad table design that hurts his scalability. When you look at the options (NOLOCK, table redesign, more hardware) and combine them with a risk analysis, I think NOLOCK seems pretty pragmatic. Should you fix the table design? That's entirely a business decision, not a "how can I make my tables better" question.
January 30, 2007 at 8:42 am
WINNER! WINNER! CHICKEN DINNER!
January 30, 2007 at 8:46 am
I have to say I've greatly enjoyed a number of the discussions of late... the subject matter may be diverse, but it's certainly been exercising the old brain! This is one of the best points of SQL Central, in my opinion: the discussions are of such high quality. It's not something I've particularly found elsewhere.
It's my best learning platform. Face to face would be good too, but the written form gives you the chance to take it in, and you have to think (hopefully) about what you write before you post it. <grin>
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
January 30, 2007 at 8:52 am
I could not agree with you more, Colin. Forums (like this one) and newsgroups have been of great benefit to me over the years, and I have a few more years than most. Not only is it good for learning, but for some of us old geezers it also gives a chance to give back to the industry.
ATB, Charles Kincaid
January 30, 2007 at 8:58 am
Agreed, this is the kind of conversation that is really great to have at the PASS Summit...
I'll also agree with the basic premise of your post: we should get the job done with the fewest resources required to do it properly. "As simple as possible, but not simpler." However, based on some of the responses here, and my personal experience, it is apparent that many, if not most, people who discover the wonders of NOLOCK don't really have a clue about its ramifications beyond "Hey! This is great! It makes my queries run faster!"
I have used NOLOCK before. I'm not saying, and I don't believe anyone else is saying, that NOLOCK is evil. It is a tool in my toolbox, but like any other tool, it can be misused. Herein lies the danger that I see. A little knowledge is a dangerous thing. Why? Because if you give someone the NOLOCK hammer without proper knowledge of both the benefits and the costs, pretty soon all the world looks like a nail... I've seen it many times (not just with NOLOCK), and I've been guilty of it too.
/*****************
If most people are not willing to see the difficulty, this is mainly because, consciously or unconsciously, they assume that it will be they who will settle these questions for the others, and because they are convinced of their own capacity to do this. -Friedrich August von Hayek
*****************/
January 30, 2007 at 10:04 am
No problem, Colin; I didn't give a whole lot of detail in that post. If each webpage had had just a single database query behind it, that would have been nice; I believe the average was closer to 6 per page, and our users certainly weren't kind enough to spread their activity evenly across a 24-hour day.
The application involving 150 million rows actually performed quite well, all things considered: two primary databases totaling about 50 GB in size, sharing an instance of SQL Server Standard edition (probably the most significant limitation, IMO) on a dual 2.4 GHz Xeon server. Having the extra memory support of Enterprise edition would have been ideal, but it was just too expensive. This was one reason I was anxious to give the 64-bit flavor of SQL 2005 Standard a shot... I was just happy I was able to spread the data, logs, indexes, and tempdb across separate physical volumes.
Partitioning the data wasn't really a good option in this case... it's not like I had "old" records that I could segment off into their own tables/filegroups. Generally, a given user had between one and several dozen groups of items, each group typically consisting of 200 to 5,000 records. This large table was also joined against a smaller 4-million-record table of static data (well, it only changed once a day). The records in this smaller table were common to many users, so there was quite a bit of overlap in the selects. This is one case where NOLOCK makes quite a bit of sense, and there were many other similar scenarios on the website where a given table was 99.9% read-only. Of course, where we could, we cached data directly in the web application itself.
Personally, I never observed a significant difference in performance from NOLOCK on a single query; the gains showed up only with many concurrent ones.