October 11, 2011 at 12:19 pm
Hi,
Can someone give examples where using the NOLOCK or READUNCOMMITTED hints helped performance when used with SELECTs?
Thanks
October 11, 2011 at 12:26 pm
Dirty reads are not the best approach to improving performance. http://sqlblog.com/blogs/andrew_kelly/archive/2009/04/10/how-dirty-are-your-reads.aspx
Typically the desire to use dirty reads in the guise of performance enhancement is trying to deal with the symptom and not the cause. Identifying why you have performance problems due to table locking should come long before using dirty reads.
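As a minimal sketch of what "identifying why" can look like, the query below finds currently blocked requests and who is blocking them, using the standard SQL Server DMVs:

```sql
-- Find blocked requests and the statement being run.
-- Run this while the slowdown is happening.
SELECT r.session_id,
       r.blocking_session_id,       -- the session holding the lock
       r.wait_type,
       r.wait_time,                 -- milliseconds waited so far
       t.text AS query_text
FROM   sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  r.blocking_session_id <> 0;  -- only requests that are blocked
```

If this comes back empty during the slow periods, blocking isn't the problem and NOLOCK won't help you anyway.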
_______________________________________________________________
Need help? Help us help you.
Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.
Need to split a string? Try Jeff Modens splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.
Cross Tabs and Pivots, Part 1 - Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/
October 11, 2011 at 12:27 pm
There's only one scenario I can think of in which using NOLOCK is 'safe': a warehouse that is updated once a day, where nothing else runs during the update window, and there are concerns about switching the database in and out of read-only mode.
Start at this blog for some understanding of the dangers:
http://blogs.msdn.com/b/davidlean/archive/2009/04/06/sql-server-nolock-hint-other-poor-ideas.aspx
Then follow the links he presents.
NOLOCK is more than just a dirty read. It lets data move around underneath the query: pages get split and re-read, among a thousand other things.
Now, safety aside, there are reasons to use READ UNCOMMITTED, but you need to understand exactly how many things that can go wrong you're choosing not to care about. Even 1 in 100,000 reads returning inaccurate data can be unacceptable depending on the circumstances. NOLOCK's real rate could be much higher, but that's the number I use when explaining it: "1 in every 100,000 orders is going to be WRONG. Is this acceptable across the system?"
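For reference, a minimal sketch of the two equivalent ways to get a dirty read (the table name is hypothetical):

```sql
-- Table-level hint: dirty read against this one table only.
SELECT OrderID, OrderTotal
FROM   dbo.Orders WITH (NOLOCK);

-- Session-level equivalent: every table read is dirty until the
-- isolation level is set back.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderID, OrderTotal
FROM   dbo.Orders;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```

Both forms carry exactly the same risks described above; the hint is just scoped to one table reference instead of the whole session.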
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
October 11, 2011 at 12:30 pm
It depends... lol! If you can afford to read data that may not be committed yet, in exchange for not locking the table and thus improving performance, then using it will not hurt you. I've used this a lot on joins with tables that are fairly static (a lookup table, for example).
However, I've seen some systems where they cannot afford to read data that may not be committed. In that scenario, your best bet is to write smaller, faster queries using more targeted WHERE clauses.
October 11, 2011 at 12:35 pm
SQL Stud (10/11/2011)
It depends... lol! If you can afford to read data that may not be committed yet, in exchange for not locking the table and thus improving performance, then using it will not hurt you. I've used this a lot on joins with tables that are fairly static (a lookup table, for example).
However, I've seen some systems where they cannot afford to read data that may not be committed. In that scenario, your best bet is to write smaller, faster queries using more targeted WHERE clauses.
Lookup tables are actually your least concern when it comes to NOLOCK. You could just as easily TABLOCK a large lookup table and avoid the massive lock grab. Nearly static lookup tables have a thousand shared locks but no blocking locks, so the only performance gain there is skipping lock establishment. Just avoid TABLOCKX (exclusive) and you're good to go.
READPAST is another alternative that helps avoid blocking scenarios without fighting over transient data. Nothing is as effective as SERIALIZABLE, but that's overkill in most scenarios.
A third alternative is Snapshot isolation, if you can deal with the tempdb requirements.
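A minimal sketch of the two hint alternatives above (table names hypothetical):

```sql
-- TABLOCK: one shared table lock instead of thousands of row locks,
-- fine for a large, nearly static lookup table.
SELECT RateCode, RateValue
FROM   dbo.RateLookup WITH (TABLOCK);

-- READPAST: skip rows that are currently locked instead of waiting on
-- them. The result set silently omits in-flight rows, so only use it
-- where that's acceptable (e.g. queue-style processing).
SELECT TOP (1) QueueID, Payload
FROM   dbo.WorkQueue WITH (READPAST, UPDLOCK, ROWLOCK)
ORDER BY QueueID;
```

Unlike NOLOCK, neither of these reads uncommitted data; TABLOCK trades lock granularity for lock count, and READPAST trades completeness for non-blocking reads.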
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
October 11, 2011 at 1:55 pm
Evil Kraig F (10/11/2011)
SQL Stud (10/11/2011)
It depends... lol! If you can afford to read data that may not be committed yet, in exchange for not locking the table and thus improving performance, then using it will not hurt you. I've used this a lot on joins with tables that are fairly static (a lookup table, for example).
However, I've seen some systems where they cannot afford to read data that may not be committed. In that scenario, your best bet is to write smaller, faster queries using more targeted WHERE clauses.
Lookup tables are actually your least concern when it comes to NOLOCK. You could just as easily TABLOCK a large lookup table and avoid the massive lock grab. Nearly static lookup tables have a thousand shared locks but no blocking locks, so the only performance gain there is skipping lock establishment. Just avoid TABLOCKX (exclusive) and you're good to go.
READPAST is another alternative that helps avoid blocking scenarios without fighting over transient data. Nothing is as effective as SERIALIZABLE, but that's overkill in most scenarios.
A third alternative is Snapshot isolation, if you can deal with the tempdb requirements.
Evil Kraig - so do you get any performance gains from NOLOCK while using Snapshot isolation? Can you override Snapshot with ROWLOCK or PAGLOCK?
October 11, 2011 at 2:38 pm
Lexa (10/11/2011)
Evil Kraig - so do you get any performance gains from NOLOCK while using Snapshot isolation?
Unlikely, and you get all the dirty data problems same as if you weren't using snapshot.
Can you override Snapshot with ROWLOCK or PAGLOCK?
You can, but why would you want to? If you want a query not to run under snapshot, you'd more likely just change the isolation level for that query or use one of the isolation-level hints.
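As a sketch of those two options (table name hypothetical), a single query can opt out of snapshot either with a per-table hint or by changing the session isolation level:

```sql
-- Per-table hint: force locking read-committed semantics for this
-- table even while the session runs under snapshot isolation.
SELECT OrderID, OrderStatus
FROM   dbo.Orders WITH (READCOMMITTEDLOCK);

-- Session-level: switch isolation around the query, then switch back.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT OrderID, OrderStatus FROM dbo.Orders;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```

The hint form is usually preferred when only one table in a larger query needs current (locked) data rather than a versioned view of it.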
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 11, 2011 at 2:52 pm
Can you override Snapshot with ROWLOCK or PAGLOCK?
You can, but why would you want to? If you want a query not to run under snapshot, you'd more likely just change the isolation level for that query or use one of the isolation-level hints.
I've seen it done but wasn't sure myself.
October 11, 2011 at 5:26 pm
Lookup tables are actually your least concern when it comes to NOLOCK. You could just as easily TABLOCK a large lookup table and avoid the massive lock grab. Nearly static lookup tables have a thousand shared locks but no blocking locks, so the only performance gain there is skipping lock establishment. Just avoid TABLOCKX (exclusive) and you're good to go.
READPAST is another alternative that helps avoid blocking scenarios without fighting over transient data. Nothing is as effective as SERIALIZABLE, but that's overkill in most scenarios.
A third alternative is Snapshot isolation, if you can deal with the tempdb requirements.
Your statement is true for nearly static lookup tables, but if you have rate tables which change only monthly, quarterly, or yearly, yet are hit several thousand times a minute, a NOLOCK can save a production outage when someone tries to sneak in a rate change during the day. I would be careful about making blanket statements such as these.
I would say that NOLOCK does create a performance improvement in most instances because you're telling the query engine not to take shared locks. That being said, there are times when SQL will ignore your hint: NOLOCK on the target table of an UPDATE or DELETE is ignored. The statement below is one of those examples:
update p set productCost = productCost * 1.10
from tblProduct p (NOLOCK) -- ignored: tblProduct is the update target
inner join tblVendor v on p.productVendor = v.VendorID
where v.vendorState = 'IN'
Again, the best thing you can do is test it to see what your results will be.
October 12, 2011 at 12:49 am
SQL Stud (10/11/2011)
if you have rate tables which change only monthly, quarterly, or yearly, yet are hit several thousand times a minute, a NOLOCK can save a production outage when someone tries to sneak in a rate change during the day. I would be careful about making blanket statements such as these.
If an update to a rate table can cause an outage, I'd seriously be looking at the performance of the queries, because that is a strong indication that something is seriously, seriously wrong.
If there will be updates and you want reads not to be blocked, then use one of the snapshot isolation levels.
I would be very careful about recommending nolock without also explaining what it means to SQL Server. Too many people put it on as a performance enhancement without being told what it does to data integrity.
I would say that NOLOCK does create a performance improvement in most instances because you're telling the query engine not to do a shared lock.
Nolock is NOT a go-faster switch. It's an option that tells SQL to forgo data integrity in exchange for not being blocked by concurrent modifications.
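As a sketch of the snapshot options mentioned above (the database name is hypothetical), both are enabled at the database level:

```sql
-- Read committed snapshot (RCSI): readers see the last committed row
-- version instead of blocking on writers; no query changes needed.
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;

-- Full snapshot isolation: allowed at the database level, but each
-- session must also request it explicitly.
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```

Both options store row versions in tempdb, which is the "tempdb requirement" caveat raised earlier in the thread.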
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 12, 2011 at 8:52 am
Gila
I don't disagree with you completely. I still maintain there are instances where NOLOCK helps performance. Here is an article I've referenced before:
http://www.sqlservercentral.com/articles/Performance+Tuning/2764/
I look forward to your spotlight session today at PASS.
October 12, 2011 at 9:04 am
I've yet to see a single client trade data quality for a little extra performance (and with correct tuning on the system, there was never a lot to gain).
Once you tell them it's certain that at some point they'll get incorrect results (even if only by a little), they always say no to NOLOCK.
Of course I'm dealing with ERPs, not Google-like search results.
October 12, 2011 at 9:57 am
Ninja's_RGR'us (10/12/2011)
I've yet to see a single client trade data quality for a little extra performance (and with correct tuning on the system, there was never a lot to gain). Once you tell them it's certain that at some point they'll get incorrect results (even if only by a little), they always say no to NOLOCK.
Of course I'm dealing with ERPs, not Google-like search results.
You don't work here :-)
It's acceptable in the web-based company where I work, down to the number of concurrent connections and the reassurance NOLOCK gives that we don't incur online outages due to speedily tested code. We work in an extremely dynamic environment which requires it. It's not implemented for performance's sake.
Don't get me wrong: when reading uncommitted data is not acceptable we don't use NOLOCK or READ UNCOMMITTED, but as a procedure this is used for virtually all queries and stored procedures.
October 12, 2011 at 9:58 am
As a generic comment on WITH (NOLOCK) and WITH (READUNCOMMITTED): they return dirty (uncommitted) data, so whether to use them is a matter of business acceptance plus the severity of the transactions involved.
So they could be used in the following cases (if the business accepts it):
1. Pure SELECT statements
2. The second table of a join inside a DML statement (applying the hint to the target table of an UPDATE or DELETE is deprecated and will be removed in a future SQL Server version)
3. SELECT statements against read-only filegroups, where the tables can only be read anyway
4. Long-running data warehousing/archiving solutions, particularly those built on the 2008 MERGE with bulk loads, where data validation can be done easily and efficiently afterwards
5. Anywhere the goal is simply to avoid impacting end users
Beyond that, WITH (NOLOCK) and WITH (READUNCOMMITTED) can contribute to performance because they save the small cost of acquiring locks each time, which can add up in the following cases:
- Queries containing many subqueries
- Heavily overloaded systems
- Systems hosted on poor hardware (CPU, I/O, or network I/O subsystem) where performance was not accommodated well in the original architecture
However, be very cautious with the other table hints such as WITH (PAGLOCK), WITH (ROWLOCK), WITH (TABLOCK), WITH (TABLOCKX), WITH (UPDLOCK), and WITH (INDEX(indexname)). They are not generally recommended, since they force the execution plan into a fixed level of lock escalation regardless of the data involved, which can lead to significant performance degradation.
The best solution, if you need to avoid lock waits and deadlocks as far as possible without code changes or lock hints, is the read committed snapshot isolation level, which uses row versioning. But it applies only to read committed transactions (not repeatable read or serializable at all), and you must keep a close eye on tempdb performance and size.
[font="Arial Black"]Performance Guard[/font]
[font="Courier New"]Just ask me to get 0 sec for all queries[/font]
October 12, 2011 at 10:04 am
MysteryJimbo (10/12/2011)
Ninja's_RGR'us (10/12/2011)
I've yet to see a single client trade data quality for a little extra performance (and with correct tuning on the system, there was never a lot to gain). Once you tell them it's certain that at some point they'll get incorrect results (even if only by a little), they always say no to NOLOCK.
Of course I'm dealing with ERPs, not Google-like search results.
You don't work here :-)
It's acceptable in the web-based company where I work, down to the number of concurrent connections and the reassurance NOLOCK gives that we don't incur online outages due to speedily tested code. We work in an extremely dynamic environment which requires it. It's not implemented for performance's sake.
Don't get me wrong: when reading uncommitted data is not acceptable we don't use NOLOCK or READ UNCOMMITTED, but as a procedure this is used for virtually all queries and stored procedures.
The same is true of my company. They have several applications caching data for telephony, and reports that look at millions of rows, where a couple hundred dirty reads is not a problem. However, locks are. Again, if the business case cannot tolerate dirty reads, it will be changed. However, 99% of the queries that the applications use actually prefer dirty reads over the possibility of missing a transaction that has not yet been committed.
Takeaway, as usual... It Depends :-)
Thanks,
Jared
Jared
CE - Microsoft