June 14, 2011 at 5:52 am
I recently read about some bad results that can be caused by using nolock. I have quite often used nolock in stored procedures for a couple of scenarios.
The first scenario is reading a field or two from a master table in the application, such as a customer table. I read analysis information such as business type, region or sales rep. These fields are fairly static (they might change for 1 or 2 customers in a year), however there are fields on the table that get updated regularly throughout the day (up to 1000 times), so I don't want to block any transactions. If a user tries to change an analysis field it is unlikely to get rolled back, but if it did error they would get the message and try again. Dirty reads therefore shouldn't be anything to worry about; apart from that, is there anything that can be an issue with using nolock in this scenario?
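For reference, this is the kind of read I mean (table and column names here are made up for illustration):

SELECT business_type, region, sales_rep   -- fairly static analysis fields
FROM dbo.Customer WITH (NOLOCK)           -- don't take shared locks, don't block writers
WHERE customer_id = @customer_id;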
The second scenario is reading sales history details. This table holds approx 10,000,000 records. The records can be updated or inserted into. The table has a unique index of invoice and line number. Different invoice types have different sequence numbers, so an insert may be in the middle of an index page, not just at the end. Since nolock can have an issue with page splits and dirty reads giving spurious results, what is the best way of not locking the data but ensuring the result is accurate - ignoring non-committed data is preferred. Is read committed snapshot isolation the answer? If so, is there any overhead issue with using it compared to nolock? There could be anywhere up to 100 users running the query at a time, and it sounds to me like this creates a snapshot of the data in tempdb, which could be a lot of gigs of data.
June 14, 2011 at 5:54 am
Yes, enable snapshot isolation.
The only way to find out the cost is to test. Assuming the server is not near its max output you "should" be fine. Can't know until you turn it on and start using it in some reports.
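For the read committed snapshot behaviour you described, it's a database-level setting, along these lines (the database name is a placeholder; the READ_COMMITTED_SNAPSHOT change needs yours to be the only active connection in the database):

-- Readers see the last committed version of a row instead of blocking on writers
ALTER DATABASE [YourDatabase] SET READ_COMMITTED_SNAPSHOT ON;

-- Optionally also allow full snapshot isolation for sessions that explicitly request it
ALTER DATABASE [YourDatabase] SET ALLOW_SNAPSHOT_ISOLATION ON;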
June 14, 2011 at 6:09 am
The load on tempdb absolutely increases. You have to be sure that you have that configured correctly and that you can support more space in tempdb. Other than that, it works extremely well and avoids all the issues that nolock creates.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 14, 2011 at 6:14 am
Grant Fritchey (6/14/2011)
The load on tempdb absolutely increases. You have to be sure that you have that configured correctly and that you can support more space in tempdb. Other than that, it works extremely well and avoids all the issues that nolock creates.
Got any links on those settings? I'd love to read that one.
June 14, 2011 at 6:22 am
Multiple files, separate physical drives, large enough to handle max load (*1.5 if you're paranoid)
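Something like this, as a rough sketch (logical names, paths and sizes are placeholders to be adjusted to the server):

-- Grow the existing tempdb data file to its expected working size
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 10GB);

-- Add another data file, ideally on a separate physical drive
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'E:\tempdb\tempdev2.ndf', SIZE = 10GB, FILEGROWTH = 512MB);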
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 14, 2011 at 6:24 am
And you get the max load info from?
June 14, 2011 at 6:32 am
Watching it. 😛
In an ideal world, from a load test on a Test Server, but we all know how likely that is.
To expand, monitor TempDB and see how large it grows, then change the default file sizes appropriately. Repeat for as long as you like, or until it doesn't grow further in normal operation. If TempDB reaches 50 GB within a day, it's not a good idea to leave the data files at their default starting size of 5 MB. Resizing so that the DB is 60 GB is far more sensible.
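A quick way to check the current file sizes (size is stored in 8 KB pages, so dividing by 128 gives MB):

SELECT name, size / 128 AS current_size_mb
FROM tempdb.sys.database_files;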
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 14, 2011 at 6:35 am
Thanks for the input
The server is getting close to the max. It has 24 cores which never drop below 60% use, and 32 gig of RAM with an 80 gig DB, so not bad, but not great. The main issue is the hard disks: it only has 6 disks in two arrays. The usage is 80% read and 20% write, so the disk writes for tempdb could be an issue.
The customer wouldn't let us test on the production server, and the test server is an even lower spec.
Would multiple files for tempdb help at all if they're on the same physical disks? I've seen so many conflicting arguments for multiple files for tempdb that I'm completely confused as to the best approach.
June 14, 2011 at 6:37 am
So since you're on the careful side you'd use 75 GB?
Now how hard was it to provide that info upfront :hehe:??????
June 14, 2011 at 6:49 am
The big question on tempdb usage in snapshot isolation isn't the total size of the database, it's the frequency and magnitude of page writing.
Rows have to be versioned in tempdb for snapshot isolation to work. If you have a LOT of updates going on, and they are on large amounts of data spread over lots of pages, you'll hammer tempdb pretty hard. If you have frequent small updates, usually on single rows, then tempdb won't be hit as hard. And so on.
The big question is how many update operations you have at any one time. Include inserts and deletes in that calculation. Then the next question is are those operations usually on single rows, or are they on ranges of rows (where you might have multiple rows on the same page being updated at the same time), or are they on dispersed patterns of rows spread out over many pages?
That's the key question on snapshot isolation.
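Once it's enabled, you can put a number on that: the version store shows up in tempdb's space usage DMV (page counts are 8 KB pages, so this comes back in MB):

SELECT SUM(version_store_reserved_page_count) * 8 / 1024.0 AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;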
(Edit to fix a typo.)
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
June 14, 2011 at 6:51 am
Steve007 (6/14/2011)
Would multiple files for tempdb help at all if they're on the same physical discs? I've seen so many conflicting arguments for multiple files for tempdb so am completely confused as to the best approach for that.
Depends. If you're hitting contention on the allocation pages, yes; if you're hitting IO contention, no.
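One way to tell which of the two you're hitting while the system is busy: PAGELATCH waits on tempdb's allocation pages (2:1:1, 2:1:2, 2:1:3) point at allocation contention, while PAGEIOLATCH waits point at IO contention.

SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE (wait_type LIKE 'PAGELATCH%'        -- in-memory page contention
    OR wait_type LIKE 'PAGEIOLATCH%')     -- waits on physical IO instead
  AND resource_description LIKE '2:%';    -- dbid 2 = tempdb (format dbid:fileid:pageid)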
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 14, 2011 at 6:55 am
Grant
The load on tempdb absolutely increases. You have to be sure that you have that configured correctly and that you can support more space in tempdb. Other than that, it works extremely well and avoids all the issues that nolock creates.
Additionally, because of the writing to tempdb, I have found that one can see a significant slowdown on writes to the database.
The only way to tell if you are going to have this issue is to test completely what you are looking to do and see what the impact on the users will be.
Steve Jimmo
Sr DBA
“If we ever forget that we are One Nation Under God, then we will be a Nation gone under." - Ronald Reagan
June 14, 2011 at 7:04 am
Gail & Gus have largely addressed everything. The only thing I'll add is, if you want details on configuring tempdb, go to Paul Randal's blog at sqlskills.com. He has several excellent articles on the details of how to squeeze the most from tempdb, and that is the key for snapshot isolation.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning