April 6, 2012 at 3:44 am
Hey,
We are planning to update a table with 150 million rows in our production environment. The update will affect 25 million rows. The plan is to collect the primary keys of the affected rows into another physical table, and then perform a batch-wise update on the live table. This will be done during non-peak hours.
We are a little hesitant about deciding on the batch size, as we fear lock escalation to the table level. I am aware that lock escalation is the process of converting many fine-grained locks, such as row locks, into fewer coarse-grained locks, such as a table lock.
So the question is: can we relate this mechanism to row counts and use it to decide on a batch size?
Any help is much appreciated.
PS: Using SQL Server 2008 R2 on a 128 GB RAM server with 32 Processors.
Regards - Yasub
April 6, 2012 at 12:29 pm
The threshold for lock escalation is 5,000 locks acquired by a single statement on the same referenced table, table partition, or index. Each time the SQL Server engine finds that 5,000 locks have been acquired, it attempts to escalate them to a lock at the upper level (the table or partition).
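As a rough sketch of keeping each statement under that threshold, something along these lines might work. The table and column names (dbo.LiveTable, dbo.KeysToUpdate, Id, SomeCol) are only placeholders, not your actual schema; key locks on any affected indexes also count toward the threshold, so test the batch size on a non-production copy first:

-- Minimal sketch only; schema names below are placeholders.
SET NOCOUNT ON;

DECLARE @BatchSize int = 4000;          -- comfortably below the ~5,000-lock threshold
DECLARE @Batch TABLE (Id int PRIMARY KEY);

WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION;

    -- Pull the next batch of keys out of the staging table,
    -- capturing them so the matching live rows can be updated.
    DELETE TOP (@BatchSize) FROM dbo.KeysToUpdate
    OUTPUT deleted.Id INTO @Batch;

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION;
        BREAK;
    END;

    -- Each UPDATE touches at most @BatchSize rows, so the statement
    -- should stay below the lock count that triggers escalation.
    UPDATE t
    SET    t.SomeCol = 'new value'
    FROM   dbo.LiveTable AS t
    JOIN   @Batch AS b
      ON   b.Id = t.Id;

    COMMIT TRANSACTION;

    DELETE FROM @Batch;                 -- reset for the next iteration
END;

Because the staging-table DELETE and the live-table UPDATE sit in the same transaction, a failure mid-batch leaves the remaining keys in place to be retried.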
_____________________________________
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a test non-production environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.