October 19, 2007 at 8:30 am
One of our developers has asked me how to go about disabling page locks. He is being chased by a client who is complaining about the amount of blocking that goes on in our SQL 2000-based application. I've advised him that he can either use hints or sp_indexoption, but that on the whole it's better to let SQL Server work out the level of locking to apply. In particular, using sp_indexoption to prevent page locking could simply end up with more table locks being applied.
Am I correct in believing that, on the whole, it is better to let SQL Server work out how to apply locks?
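For reference, the two options I mentioned look something like this in SQL 2000 (the table and index names here are made up purely for illustration):

```sql
-- sp_indexoption: disallow page locks on a specific index.
-- Without page locks available, SQL Server may jump straight
-- from row locks to a table lock, which is the concern.
EXEC sp_indexoption 'dbo.Orders.IX_Orders_CustomerID', 'DisAllowPageLocks', TRUE;

-- Hint: ask for row locks on a single statement instead.
SELECT OrderID, OrderDate
FROM dbo.Orders WITH (ROWLOCK)
WHERE CustomerID = 42;
```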
October 19, 2007 at 9:05 am
Ian Harris (10/19/2007)
Am I correct in believing that, on the whole, it is better to let SQL Server work out how to apply locks?
In my opinion, 100% correct. My point of view: if you have too much blocking, check your queries and indexes, and don't tell the server how to do its job.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 19, 2007 at 9:50 am
Thanks Gail, good to have confirmation
October 19, 2007 at 9:55 am
Page locks are OK, row locks are better, table locks are worse.
Blocking means that he's got transactions taking too long. You want to lower the amount of time locks are being held, not remove them.
October 20, 2007 at 11:19 pm
You should probably investigate whether the queries can be optimised to reduce page locking, and also look at whether adding or modifying indexes could help. In this area, I think SQL Server has the best answer in almost all cases.
October 21, 2007 at 11:26 am
Just to support what's already been stated... I agree... "too much" blocking is normally caused by long-winded transactions or other code that simply takes too long. Indexes may help but, as some of the others have stated, the real problem will likely be in the design of the code. And even if you tell SQL Server to favor row locks instead of page locks, it will, many times, override those suggestions, especially on UPDATEs.
Be especially aware of transactions that have the following general form...
BEGIN TRANSACTION
    UPDATE sometable ...      -- exclusive locks taken here...
    SELECT fromsametable ...  -- ...and still held while this runs
    -- ... maybe do something else (every extra moment extends the blocking) ...
COMMIT
--Jeff Moden
Change is inevitable... Change for the better is not.
October 21, 2007 at 9:48 pm
I've got to chime in here: the locks question is entirely dependent on what type of activity is causing the locking. The poster didn't tell us why locks are such a problem in his environment, but there are environments where locking is not only unnecessary but a bad thing (e.g. a reporting-only server). In those cases, using (NOLOCK) or other hints can be a good idea; if you don't expect the data to change, why lock at all? Ideally, a reporting database could/should be in read-only mode, preventing locking altogether.
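To illustrate (the object names are only examples):

```sql
-- Reporting query that reads without taking shared locks;
-- it risks dirty reads if the data can change underneath it.
SELECT CustomerID, SUM(OrderTotal) AS Sales
FROM dbo.Orders WITH (NOLOCK)
GROUP BY CustomerID;

-- Or make the whole reporting database read-only,
-- so no locking is needed at all:
ALTER DATABASE ReportsDB SET READ_ONLY;
```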
October 22, 2007 at 2:48 am
Joe Clifford (10/21/2007)
I've got to chime in here: the locks question is entirely dependent on what type of activity is causing the locking. The poster didn't tell us why locks are such a problem in his environment, but there are environments where locking is not only unnecessary but a bad thing (e.g. a reporting-only server). In those cases, using (NOLOCK) or other hints can be a good idea; if you don't expect the data to change, why lock at all? Ideally, a reporting database could/should be in read-only mode, preventing locking altogether.
SELECT statements apply shared locks.
You may have hundreds of queries returning values from the same row/page/table, and none of them will block another.
(NOLOCK) will speed up queries a little, but it would not address the problem.
The problem appears when you've got INSERTs and UPDATEs.
Especially UPDATEs, because an UPDATE statement will lock all the resources it affects, and it's going to be an exclusive lock.
This behaviour makes perfect sense to me, and I would not suggest using (NOLOCK) unless your customer has given you a written agreement to receive inconsistent results from time to time.
I believe the OP should advise that developer not to use statements like
[font="Courier New"]UPDATE SomeTable
SET Status = @NewStatus[/font]
too much, and to normalise his database a little bit.
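Something like this would lock far less (the column and variable names are made up for the sake of the example):

```sql
-- Only touch the rows that actually need to change, in a limited batch,
-- so far fewer rows/pages end up under exclusive locks.
UPDATE dbo.SomeTable
SET    Status = @NewStatus
WHERE  Status <> @NewStatus
  AND  CustomerID = @CustomerID;
```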
_____________
Code for TallyGenerator
December 17, 2007 at 9:58 am
Hi, I am having this problem, and I see you are advising against queries like
insert newtable
select * from oldtable where date > 3months
(it's about a year's worth of data; I thought it would be quickest to take out the last 3 months and rename the tables)
My client wants me to archive off a load of records, about 2 million. I have told him it will take hours and severely hit performance; it's a 24/7 server, so we cannot do that.
My suggestion was to do a little bit every night, but they want it all now. Can anyone think of a solution to my problem?
December 17, 2007 at 1:53 pm
DTS: fast load, and limit the number of rows in each batch.
If possible, switch to bulk-logged recovery mode before and switch back to full (and take a full DB backup) once it's finished. Just be aware that you won't be able to do point-in-time restores while in bulk-logged mode.
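The recovery model switch looks like this (the database name and backup path are placeholders):

```sql
-- Minimise logging for the duration of the bulk load...
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

-- ... run the DTS fast load here, in limited batches ...

-- ...then switch back and take a full backup so you have
-- a clean starting point for restores again.
ALTER DATABASE MyDB SET RECOVERY FULL;
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak';
```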
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 18, 2007 at 8:04 am
In addition to refactoring your code for tighter transactions and more efficient queries and also improving indexing, make sure you are regularly updating statistics. The locking types are chosen (at least initially) based on the optimizer's estimates of the number of rows affected - which comes from stats.
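For example (the table name is just illustrative):

```sql
-- Refresh statistics on one table with a full scan for accurate estimates...
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- ...or refresh all statistics in the database (sampled).
EXEC sp_updatestats;
```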
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service