May 2, 2010 at 6:48 am
Hi Experts,
I am working on performance tuning of a database for a retailer. While analyzing the stored procedures used to generate reports (using SSRS), I found that adding "NOLOCK" can significantly improve the execution times of the SPs.
Now my question is:
What can be the disadvantages of using NOLOCK in scenario described above?
I could gather the point below after going through various blogs:
1) Uncommitted data can be retrieved -- this should not be a problem here, as this isn't a financial/accounting firm.
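For reference, this is the kind of usage I am testing (table and column names changed for illustration):

-- Illustrative reporting query only; table/column names are not the real ones.
-- WITH (NOLOCK) reads uncommitted data and takes no shared locks.
SELECT s.StoreID, SUM(s.SaleAmount) AS TotalSales
FROM dbo.Sales AS s WITH (NOLOCK)
WHERE s.SaleDate >= '20100401'
GROUP BY s.StoreID;

-- Session-level equivalent:
-- SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;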
Cheers,
Ankur (a petty DBA :-))
Cheers
May 2, 2010 at 7:17 am
Well, here's another point of view from someone who's not in accounting or finance.
I need to display to customers the currently available inventory and the price for their account.
Under no circumstances can I ever display possibly wrong data. At rush time we get as many as 4-8 orders per minute (which means 60 to 120 product searches). That doesn't seem like much, but when you have 2000+ clients that can be after the same product at the same time, you need to display the correct numbers, period.
Now reporting is another beast. The real question (that your client needs to answer for himself) is: am I OK with having reports accurate to 99.8% rather than 100% all the time... and probably a day late on top of it? Once he does that exercise for all his reports, then you'll know what to do.
May 2, 2010 at 9:28 am
http://sqlblog.com/blogs/andrew_kelly/archive/2009/04/10/how-dirty-are-your-reads.aspx
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 3, 2010 at 7:14 am
Since this is asking for opinion...
I think NOLOCK is probably the single most dangerous hint available. It's seductive and attractive and doesn't appear to cause a single problem. But the fact is, not only can you get dirty reads, but you can see missing rows or multiple copies of rows. It's just flat out dangerous. There are situations where it can be used safely, but those situations are fewer than the number of times you see it used.
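If the real goal is just reports that don't block (or get blocked by) writers, snapshot isolation is usually the safer thing to test first. A rough sketch only, with made-up database and table names, and bear in mind it adds tempdb version-store overhead:

-- One-time database setting (hypothetical database name).
ALTER DATABASE RetailDB SET ALLOW_SNAPSHOT_ISOLATION ON;

-- In the reporting session or procedure:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

-- The report now sees a consistent, committed view of the data
-- without taking shared locks (illustrative query only).
SELECT s.StoreID, SUM(s.SaleAmount) AS TotalSales
FROM dbo.Sales AS s
WHERE s.SaleDate >= '20100401'
GROUP BY s.StoreID;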
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 3, 2010 at 7:27 am
I just tested some SPs and found that after using NOLOCK the execution time of the queries reduced from 35 to 31 secs approx.... Now this is very tempting... :w00t:
But I think the cons outweigh the pros... no client will accept even marginal inaccuracy.
Thanks for the opinions, folks!!
Cheers
May 3, 2010 at 7:28 am
Here's one other side effect
http://sqlblogcasts.com/blogs/sqlandthelike/archive/2010/04/23/increasing-deadlocks-with-nolock.aspx
May 3, 2010 at 7:30 am
ankur003 (5/3/2010)
I just tested some SPs and found that after using NOLOCK the execution time of the queries reduced from 35 to 31 secs approx.... Now this is very tempting... :w00t:
Just out of interest , can you post the queries ?
I'm assuming that they can be tuned to give better performance anyway.
May 3, 2010 at 7:34 am
ankur003 (5/3/2010)
I just tested some SPs and found that after using NOLOCK the execution time of the queries reduced from 35 to 31 secs approx.... Now this is very tempting... :w00t:
But I think the cons outweigh the pros... no client will accept even marginal inaccuracy.
Thanks for the opinions, folks!!
Your queries dropped by about 12%. If you were talking about queries running in under 100ms, that might be a big number. I suspect you have a large number of tuning opportunities before you ever need to consider removing locking to get that last 12% speed improvement. You want to get to 3 times, 10 times, 100 times faster, not 12%.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 3, 2010 at 7:34 am
ankur003 (5/3/2010)
I just tested some SPs and found that after using NOLOCK the execution time of the queries reduced from 35 to 31 secs approx....
Means there's likely some blocking going on. There's a fair chance that tuning indexes and/or code will both remove the blocking and speed up the query. 31 seconds is a long time, unless it's a massively complex query on a huge amount of data.
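A quick way to confirm the blocking while the slow query runs (standard DMVs, SQL 2005 and later):

-- Sessions currently blocked, who is blocking them, and what they are waiting on.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;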
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 3, 2010 at 8:02 am
Find the SP attached. The tables this SP queries are on the order of 6-7 GB each and have around 500,000 (5 lakh) rows.
PS: I was customizing the reports and found scope for performance improvement. Any pointers/suggestions are welcome.
Cheers
May 3, 2010 at 8:06 am
All syntax of this sort:
JDAReceipts.PartnerInvoiceNo = ISNULL(@PartnerInvoiceNo, JDAReceipts.PartnerInvoiceNo)
will pretty much guarantee table scans.
Try Gail's page here http://sqlinthewild.co.za/index.php/2009/03/19/catch-all-queries/ for more info.
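Roughly the shape of the fix from that article (a sketch only -- the parameter name comes from your snippet, but its data type and the other optional filters are guesses on my part): build the WHERE clause from the parameters that were actually supplied and run it through sp_executesql so it stays parameterised.

-- @PartnerInvoiceNo is assumed to be a parameter of the procedure,
-- e.g. @PartnerInvoiceNo varchar(50) = NULL
DECLARE @sql nvarchar(max);

SET @sql = N'SELECT r.* FROM dbo.JDAReceipts AS r WHERE 1 = 1';

IF @PartnerInvoiceNo IS NOT NULL
    SET @sql = @sql + N' AND r.PartnerInvoiceNo = @PartnerInvoiceNo';

-- ...repeat the IF block for each optional parameter...

EXEC sys.sp_executesql
     @sql,
     N'@PartnerInvoiceNo varchar(50)',
     @PartnerInvoiceNo = @PartnerInvoiceNo;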
Edit --
Also, you have a few scalar UDFs; remove those and replace them with inline table-valued functions. See here for why:
http://sqlblogcasts.com/blogs/sqlandthelike/archive/2009/10/15/udf-overhead-a-simple-example.aspx
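The rewrite generally looks something like this (the function here is made up purely for illustration; the real UDFs in your proc will differ):

-- Scalar UDF: invoked row by row and largely invisible to the optimiser.
CREATE FUNCTION dbo.fn_LineTotal (@Qty int, @Price money)
RETURNS money
AS
BEGIN
    RETURN @Qty * @Price;
END
GO

-- Inline table-valued equivalent: expanded into the calling query's plan.
CREATE FUNCTION dbo.itvf_LineTotal (@Qty int, @Price money)
RETURNS TABLE
AS
RETURN (SELECT @Qty * @Price AS LineTotal);
GO

-- The call site changes from a column expression to CROSS APPLY:
-- SELECT lt.LineTotal
-- FROM dbo.OrderLines AS ol
-- CROSS APPLY dbo.itvf_LineTotal(ol.Qty, ol.Price) AS lt;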
May 3, 2010 at 8:19 am
Please post table definitions, index definitions and execution plan, as per http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
Why do you have DropCleanBuffers at the beginning? Is that just there for performance testing?
You may also be running into this: http://sqlinthewild.co.za/index.php/2009/09/15/multiple-execution-paths/
Edit: And this, from changing parameters after the beginning of the procedure http://sqlinthewild.co.za/index.php/2008/05/22/parameter-sniffing-pt-3/
If you test each query individually, which ones are the slow ones? Or are they all equally slow?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 3, 2010 at 8:24 am
Also, having four completely different queries inside the IF statement pretty much guarantees recompiles. If you need to set up this kind of IF clause, create four separate stored procedures, each with the appropriate query, and call those procedures from this wrapper proc. You'll get far fewer recompiles that way.
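Something along these lines (procedure and parameter names are hypothetical):

CREATE PROCEDURE dbo.ReceiptsReport
    @ReportType int,
    @PartnerInvoiceNo varchar(50) = NULL
AS
BEGIN
    IF @ReportType = 1
        EXEC dbo.ReceiptsReport_Detail @PartnerInvoiceNo;
    ELSE IF @ReportType = 2
        EXEC dbo.ReceiptsReport_Summary @PartnerInvoiceNo;
    -- ...one sub-procedure per branch, so each gets its own cached plan
    -- compiled for the parameters it actually uses.
END
GO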
You're pretty much guaranteed a scan if you put any function on a column in a WHERE clause or JOIN.
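For example (the date column here is made up for illustration):

-- Non-SARGable: the function wrapped around the column forces a scan.
SELECT r.PartnerInvoiceNo
FROM dbo.JDAReceipts AS r
WHERE CONVERT(varchar(10), r.ReceiptDate, 120) = '2010-05-01';

-- SARGable rewrite: compare the bare column to a range so a seek is possible.
SELECT r.PartnerInvoiceNo
FROM dbo.JDAReceipts AS r
WHERE r.ReceiptDate >= '20100501'
  AND r.ReceiptDate <  '20100502';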
Other than that, as Gail says, please post the details for more detailed help.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 3, 2010 at 8:27 am
Grant Fritchey (5/3/2010)
Also, having four completely different queries inside the IF statement pretty much guarantees recompiles.
Why? SQL won't recompile when a different branch is taken. It'll still use the cached (and probably bad) plan that it generated on the first execution.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 3, 2010 at 8:36 am
GilaMonster (5/3/2010)
Grant Fritchey (5/3/2010)
Also, having four completely different queries inside the IF statement pretty much guarantees recompiles.
Why? SQL won't recompile when a different branch is taken. It'll still use the cached (and probably bad) plan that it generated on the first execution.
If different objects are referenced, I'm pretty sure it recompiles, but I could be wrong (wouldn't be the first time... today).
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning