February 17, 2012 at 11:51 am
Is it possible to insert a large number of records into a fairly big table, and not have SQL Server lock the entire table against updates or other smaller inserts?
This is a table with about 2 million records, and the large insert can add 100 or 200 thousand in a single shot. Meanwhile, any queries against the table, and also single-record inserts or updates are blocked.
Is there a good strategy either to (a) reduce or eliminate the blocking of other processes, or (b) shorten the time it takes to insert the records, which could minimize the blocking issue?
thanks!
February 17, 2012 at 12:00 pm
dbapt605 (2/17/2012)
Is it possible to insert a large number of records into a fairly big table, and not have SQL Server lock the entire table against updates or other smaller inserts? This is a table with about 2 million records, and the large insert can add 100 or 200 thousand in a single shot. Meanwhile, any queries against the table, and also single-record inserts or updates are blocked.
Is there a good strategy either to (a) reduce or eliminate the blocking of other processes, or (b) shorten the time it takes to insert the records, which could minimize the blocking issue?
thanks!
Both are possible, but some blocking is normal: if one process holds a lock, any other process that needs an incompatible lock has to wait, and that waiting is what we call blocking.
February 20, 2012 at 3:49 am
It partly depends on your table structure. What's the cluster key? If you're looking at a heap, or if the cluster key distributes the data across the whole table, then the more and longer the locks, the more pain you're looking at. If, on the other hand, you have a clustered key that moves the inserts out to new pages all the time, like an identity column, you should see little contention between the inserts and the reads.
If you really are seeing that much contention, then you need to reduce the length of the insert transaction. Best thing to do would be to break it up into smaller chunks. It'll take more overall time, but shorter transaction times means less contention.
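One way to break the insert into chunks is a loop that copies a fixed number of rows per transaction. A minimal sketch (table names, column names, and the key used to skip already-copied rows are all hypothetical):

```sql
-- Sketch only: dbo.Source, dbo.Target, and Col1 as the identifying
-- key are placeholders. Each iteration is its own short transaction,
-- so locks are held briefly and other sessions get in between batches.
DECLARE @BatchSize int = 10000;

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.Target (Col1, Col2, Col3)
    SELECT TOP (@BatchSize) s.Col1, s.Col2, s.Col3
    FROM dbo.Source AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Target AS t
                      WHERE t.Col1 = s.Col1);   -- skip rows already copied

    IF @@ROWCOUNT < @BatchSize BREAK;  -- last, partial batch was copied
END;
```

The NOT EXISTS guard assumes Col1 uniquely identifies a row; if the source has a contiguous key, walking it in ranges (WHERE s.Col1 BETWEEN @lo AND @hi) is usually cheaper than re-probing the target each pass.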
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
February 20, 2012 at 3:52 am
Have you considered one of the snapshot isolation levels?
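For reference, turning on row versioning is a database-level setting (the database name below is a placeholder; note this only stops readers blocking on the big insert, it does not help writer-versus-writer contention, and it adds version-store load in tempdb):

```sql
-- Make the default READ COMMITTED level use row versioning,
-- so readers see the last committed version instead of waiting:
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- Or leave the default alone and let sessions opt in explicitly:
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
-- then, in the reading session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```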
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 21, 2012 at 5:59 am
Snapshot isolation, as Gail mentioned, may be one option. I believe there is a trace flag to prevent lock escalation, but that has some NASTY side-effects. Best would be to reduce the batch sizes of the inserts to keep table locking from happening, and/or schedule the inserts in low-use times if such exist.
Oh, is the table over-indexed? Are there unindexed foreign keys that are having to work too hard to get validated? Are there (inefficient) triggers on the table receiving the inserts?
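A rough way to spot foreign key columns with no supporting index is to check the catalog views for FK columns that are not the leading column of any index (sketch only; it checks each FK column independently, so multi-column keys may need a closer look):

```sql
-- Foreign key columns on which no index has that column as its
-- leading key column; each FK lookup for validation scans instead.
SELECT fk.name AS foreign_key,
       OBJECT_NAME(fkc.parent_object_id) AS table_name,
       COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS column_name
FROM sys.foreign_key_columns AS fkc
JOIN sys.foreign_keys AS fk
  ON fk.object_id = fkc.constraint_object_id
WHERE NOT EXISTS (
    SELECT 1
    FROM sys.index_columns AS ic
    WHERE ic.object_id  = fkc.parent_object_id
      AND ic.column_id  = fkc.parent_column_id
      AND ic.key_ordinal = 1      -- leading column of some index
);
```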
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 22, 2012 at 9:01 am
dbapt605 (2/17/2012)
Is it possible to insert a large number of records into a fairly big table, and not have SQL Server lock the entire table against updates or other smaller inserts? This is a table with about 2 million records, and the large insert can add 100 or 200 thousand in a single shot. Meanwhile, any queries against the table, and also single-record inserts or updates are blocked.
Is there a good strategy either to (a) reduce or eliminate the blocking of other processes, or (b) shorten the time it takes to insert the records, which could minimize the blocking issue?
thanks!
If your table has a clustered index with an incrementing key, then the new rows will generally be inserted in new pages. What method are you using to perform the inserts, are you bulk inserting?
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
February 24, 2012 at 5:19 am
The clustered index is an identity key, so in theory it should be adding pages at the end. However, it still seems to cause locking issues.
The insert is a basic INSERT (col, col, col...) SELECT col, col, col... FROM a join of a few other tables.
I figured the most straightforward solution was to break up the insert into smaller chunks, although that will take a little thought on how to accomplish that.
thanks!