June 14, 2010 at 9:10 am
the time taken for 10 connections to acquire a lock and update 10 (different) records in parallel (for example using with (rowlock))
can be *longer* than the time taken when serializing the 10 updates (for example using with (tablockx))
I think this is because the time taken to acquire the locks is *greater* than the time taken to update the record
in other words: serialized access to a table *can* result in better throughput *and* latency
because the time it takes to acquire TABLOCKX is virtually zero
if applications are designed to keep lock durations as short as possible - this approach seems to pay dividends
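to illustrate what I mean - just a sketch, the table, columns and key value are made up:
[code]
-- assumed table for illustration only
DECLARE @OrderID int;
SET @OrderID = 1;   -- a different key on each connection

-- parallel version: each of the 10 connections updates its own row under a row lock
BEGIN TRAN;
UPDATE dbo.Orders WITH (ROWLOCK)
SET    Status = 'Processed'
WHERE  OrderID = @OrderID;
COMMIT;

-- serialized version: each connection takes an exclusive table lock first,
-- so the 10 updates queue up and run one after the other
BEGIN TRAN;
UPDATE dbo.Orders WITH (TABLOCKX)
SET    Status = 'Processed'
WHERE  OrderID = @OrderID;
COMMIT;
[/code]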
what do others think?
June 14, 2010 at 9:33 am
I think that you ought to try letting SQL Server manage the locks for you rather than trying to control it using lock hints.
Without more details on what was being tested, it's hard to say what's going on precisely, but I will say that Oracle and SQL Server primarily use row locking for a reason. I find it hard to believe that you'll get better performance overall (exceptions aside) by forcing a table lock.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 14, 2010 at 9:47 am
Forcing exclusive table locks may (and I say may) result in lower latency, but it's going to really suck for concurrency. There's a reason that, in general, you want the most granular locks for the shortest time possible.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 14, 2010 at 1:01 pm
I believe the locking system in SQL Server
was designed for old fashioned applications
that opened and held long lived connections and locks
they *had* to use high resolution locks or they wouldn't work
but modern applications "dip" into the database as fast as possible
and use application level / disconnected locking
this means the time taken to acquire a lock becomes a significant percentage of the total time taken
in my (no doubt limited) experience tablock/tablockx gives a large overall capacity boost
on systems with 10-200 requests/sec
BUT the applications have to be designed to keep lock durations to the minimum
I guess the point I am trying to make is:
don't assume that high resolution locks will result in better overall concurrency
test the assumption and you may be surprised
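by application level / disconnected locking I mean something like optimistic concurrency with a rowversion column - rough sketch, the table and column names are invented:
[code]
-- read the row and its rowversion, then work on the data with no locks held
DECLARE @rv binary(8), @qty int;

SELECT @rv = RowVer, @qty = Quantity
FROM   dbo.Stock
WHERE  StockID = 42;

-- ... application does its thinking outside the database ...

-- write back only if nobody changed the row in the meantime
UPDATE dbo.Stock
SET    Quantity = @qty - 1
WHERE  StockID = 42
  AND  RowVer  = @rv;

IF @@ROWCOUNT = 0
    RAISERROR('row changed by another session - retry', 16, 1);
[/code]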
June 14, 2010 at 1:23 pm
Depending on the situation, I believe that table locks could perform better; whether or not you should use them is a separate discussion.
But in general, your beliefs about how SQL Server was designed are not accurate. It was designed explicitly around the concept of short-duration transactions. That's the reason long-held transactions have been such a number one caution for so long.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 14, 2010 at 1:48 pm
doobya (6/14/2010)
I believe the locking system in SQL Server was designed for old fashioned applications
that opened and held long lived connections and locks
they *had* to use high resolution locks or they wouldn't work
You may be interested to learn that later versions of SQL support more granular locks than older versions did. That's one reason why 'hotspots' were such a problem on SQL 6.5 and far less of a problem on SQL 7 and later.
but modern applications "dip" into the database as fast as possible
and use application level / disconnected locking
Maybe in a very small subset of applications, but that's not necessarily the norm. Most apps that I work with rely mostly if not solely on database locking with perhaps some optimistic concurrency checks in the app, and very few run such short queries that the time to get a lock is noticeable.
In fact, many apps that I work with have blocking problems, queries waiting to be able to acquire locks. Much is probably due to badly written queries and inadequate indexing, but even with all of that fixed, there's no way the queries would run fast enough that forcing a table lock wouldn't cause severe blocking.
don't assume that high resolution locks will result in better overall concurrency
test the assumption and you may be surprised
Nothing wrong with testing, but beware of drawing broad conclusions based on limited tests.
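If you do test it, watch the blocking you create while the workload runs, not just the end-to-end timings. A rough sketch of the sort of thing to look at (not production code):
[code]
-- sessions currently waiting on locks, and who is blocking them
SELECT session_id,
       blocking_session_id,
       wait_type,
       wait_time,
       command
FROM   sys.dm_exec_requests
WHERE  blocking_session_id <> 0;

-- locks currently held or requested in this database
SELECT request_session_id,
       resource_type,
       request_mode,
       request_status
FROM   sys.dm_tran_locks
WHERE  resource_database_id = DB_ID();
[/code]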
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 16, 2010 at 7:53 am
First of all, there is no such thing as NOLOCK in combination with UPDATE, INSERT or DELETE.
Using the NOLOCK or READUNCOMMITTED hint on the target table of an UPDATE, INSERT or DELETE statement will result in the error
[font="Courier New"]The NOLOCK and READUNCOMMITTED lock hints are not allowed for target tables of INSERT, UPDATE or DELETE statements.[/font]
My opinion on this matter is that very few applications will benefit from using only table locks. I agree that explicitly requesting a table lock will require less locking overhead, but concurrency goes down the drain. If you know that all INSERT, UPDATE, DELETE and SELECT requests to the database are serialized, meaning that there is only one transaction at any given time, then I can agree (to some extent) that you *might* see a *slight* performance gain. Or if your application is a SELECT-only application, I can agree that using one shared table lock instead of ten shared row locks and intent locks on page and table objects would require fewer resources.
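Something like the following shows the difference in lock counts - dbo.SomeTable and SomeID are placeholders, and the extra hints are only there to hold the shared locks long enough for us to look at them:
[code]
-- default granularity: expect shared KEY locks plus intent locks on the page and table
BEGIN TRAN;
    SELECT *
    FROM   dbo.SomeTable WITH (REPEATABLEREAD)   -- keeps the shared row locks until commit
    WHERE  SomeID BETWEEN 1 AND 10;

    SELECT resource_type, request_mode, COUNT(*) AS lock_count
    FROM   sys.dm_tran_locks
    WHERE  request_session_id = @@SPID
    GROUP  BY resource_type, request_mode;
ROLLBACK;

-- table granularity: expect a single shared lock on the table instead
BEGIN TRAN;
    SELECT *
    FROM   dbo.SomeTable WITH (TABLOCK, REPEATABLEREAD)
    WHERE  SomeID BETWEEN 1 AND 10;

    SELECT resource_type, request_mode, COUNT(*) AS lock_count
    FROM   sys.dm_tran_locks
    WHERE  request_session_id = @@SPID
    GROUP  BY resource_type, request_mode;
ROLLBACK;
[/code]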
I don't rule out your theory; I'm just thinking that for each application that would benefit from table locks only, there are 1,000 applications that would not.