April 21, 2010 at 4:19 am
I am doing inserts with a stored procedure. All the tables used inside the procedure are heavily loaded.
The procedure performs the following operations:
Table A contains 5m rows -- insert
Table Aa contains 14m rows -- child of Table A -- insert
Table B contains 6m rows -- update
Table C contains 13m rows -- update
Tables A, Aa, and B have foreign key relationships.
Six users access the procedure concurrently through the application
to insert data.
Sometimes the server shows a table lock on Table A during insertion. While that lock is held, no other user can insert, and I am forced to run KILL (SPID) to release the table lock.
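For reference, a blocked session like this can usually be identified before resorting to KILL. A minimal sketch, assuming SQL Server 2005 or later (where the sys.dm_exec_requests DMV is available):

```sql
-- List sessions that are currently blocked and who is blocking them.
-- blocking_session_id = 0 means the request is not blocked.
SELECT session_id,
       blocking_session_id,
       wait_type,
       wait_time,
       command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```

The blocking_session_id column identifies the SPID holding the lock, which tells you which session is the actual head of the blocking chain before anything is killed.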
At the beginning of the procedure, I call another simple procedure to get the primary key value.
Is this a concurrent user request problem?
Is there any solution for avoiding the issue? Please help me.
Thanks & Regards
Binto Thomas
April 21, 2010 at 8:31 am
To get a more useful answer, you really need to answer the questions stewartc-708166 submitted. However, I will post some general commentary.
It sounds like you are indeed suffering from blocking. It also sounds like you are dealing with writer/writer blocking, so changing the isolation level will not help. There are three main ways to deal with writer/writer blocking.
The first is to tune your T-SQL so that the window of blocking is narrowed.
The second is to avoid table-level locking.
The third is to offload the DML to a staging area for later processing, or to run the batch DML during off-hours.
Of these options, it doesn't sound like number one will work, because your batch process deals with millions of records and that will likely take a while no matter how well it's tuned. If the third option is available, go with it, because it is easy. If not, you are left with the second option, which is difficult to accomplish.
For the second option there are several "hack" ways to disallow table-level locking, but the best thing to do is let the optimizer do what it wants and limit batch sizes to under 5,000 rows. Once a single statement acquires roughly 5,000 locks, SQL Server will attempt lock escalation, replacing key/row/page locks with a table lock. For my larger batches (1 million+ rows) I find 5,000-row batches impractical. An alternative is to use larger batch sizes, tuned so that application inserts only have to wait a few seconds for the table lock of the current batch to be released.
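The batched approach described above can be sketched as follows. This is a minimal illustration, not the poster's actual code; the table names dbo.Staging and dbo.TargetTable and the columns ID and Payload are assumptions, and the DECLARE-with-initializer syntax assumes SQL Server 2008 or later:

```sql
-- Drain a staging table into the target in batches small enough
-- (under ~5000 rows) that lock escalation to a table lock is avoided.
DECLARE @BatchSize int = 4000;   -- assumed batch size, below the escalation threshold

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.Staging                           -- hypothetical staging table
    OUTPUT deleted.ID, deleted.Payload
    INTO dbo.TargetTable (ID, Payload);        -- hypothetical target table

    IF @@ROWCOUNT = 0 BREAK;   -- staging table is empty, we are done
END
```

Between iterations the table lock (if any) is released, so concurrent application inserts only ever wait for one batch rather than for the whole load.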
April 22, 2010 at 3:37 am
Primary key - int
Clustered index - smallint
Insert - 1 row to the master table,
fewer than 50 rows to the child table,
affecting less than 5% of the data.
What about updating statistics or reindexing?
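If stale statistics are a concern, they can be refreshed without a full rebuild. A minimal sketch; the object names dbo.TableA and dbo.TableAa stand in for the real tables in the thread, and the FULLSCAN option is an assumption:

```sql
-- Refresh statistics on the heavily modified tables.
UPDATE STATISTICS dbo.TableA WITH FULLSCAN;
UPDATE STATISTICS dbo.TableAa WITH FULLSCAN;

-- Reorganize rather than rebuild: REORGANIZE is always an online
-- operation and does not hold long-lived table locks.
ALTER INDEX ALL ON dbo.TableA REORGANIZE;
```

Since each insert here touches well under 5% of the data, statistics maintenance alone is unlikely to be the cause of the blocking, but keeping it scheduled off-hours avoids adding contention.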
April 22, 2010 at 7:54 am
I am doing inserts with a stored procedure. All the tables used inside the procedure are heavily loaded.
The procedure performs the following operations:
Table A contains 5m rows -- insert
Table Aa contains 14m rows -- child of Table A -- insert
Table B contains 6m rows -- update
Table C contains 13m rows -- update
Primary key - int
Clustered index - smallint
Insert - 1 row to the master table,
fewer than 50 rows to the child table,
affecting less than 5% of the data.
Does your answer apply to this procedure?