February 13, 2020 at 3:56 pm
Hi,
I am facing a deadlock while trying to insert data into a table. At the same time, app users are working in the application, which also inserts data into that table.
So, to avoid the deadlock, can I use a table lock? If so, which table lock should I use, and how?
I have an INSERT statement inside a transaction, something like the code below. Can I use the HOLDLOCK hint? Will that work? I am not familiar with this lock. Please advise.
BEGIN TRY
    BEGIN TRANSACTION t_Transaction;

    INSERT INTO LargeTable
    SELECT *
    FROM viewLargeView WITH (HOLDLOCK);

    COMMIT TRANSACTION t_Transaction;
END TRY
BEGIN CATCH
    -- Only roll back if a transaction is actually open,
    -- otherwise the ROLLBACK itself raises an error.
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION t_Transaction;
END CATCH;
February 13, 2020 at 4:04 pm
A table lock is different from a deadlock. Which one do you mean?
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
February 13, 2020 at 4:26 pm
So you are truncating a large table and then inserting a large amount of data into it, and all the time other users are trying to use the table? Do the users who are inserting data into the table know that you are clearing that data out?
Without understanding the architecture of your application, my best suggestions are:
John
February 14, 2020 at 8:54 am
My 2 cents on this: why are you truncating a table and replacing it with the view's data? Why not just reference the view directly?
MVDBA
February 14, 2020 at 2:10 pm
If you lock it, no one can access it. That will stop the deadlocks. But then, the phone calls start "WHY CAN'T I GET MY DATA".
I'm with Mike. What is this process satisfying?
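For reference, the "lock it so no one can access it" approach would look something like this sketch, reusing the table and view names from the original post (TABLOCKX takes an exclusive table lock, which prevents deadlocks on this table by blocking everyone else instead):

```sql
BEGIN TRANSACTION;

-- TABLOCKX holds an exclusive lock on LargeTable until COMMIT;
-- all other readers and writers on the table wait (the phone calls).
INSERT INTO LargeTable WITH (TABLOCKX)
SELECT *
FROM viewLargeView;

COMMIT TRANSACTION;
```

This trades deadlocks for plain blocking, which is exactly the complaint described above.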
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
February 14, 2020 at 2:32 pm
An alternative (if you can't use the view because it points to a linked server) is to use transactional replication and update the table in small chunks, rather than taking one big one-time hit.
But Grant hit the nail on the head... why are you doing this?
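Updating in small chunks might look something like the sketch below. The batch size and the assumption that "Id" is a key column shared by the table and the view are both illustrative, not from the original post:

```sql
DECLARE @BatchSize int = 5000,
        @Rows      int = 1;

WHILE @Rows > 0
BEGIN
    -- Insert only rows not yet present, one small batch per
    -- transaction, so locks are held briefly each iteration.
    INSERT INTO LargeTable
    SELECT TOP (@BatchSize) v.*
    FROM viewLargeView AS v
    WHERE NOT EXISTS (SELECT 1
                      FROM LargeTable AS t
                      WHERE t.Id = v.Id);  -- Id: assumed key column

    SET @Rows = @@ROWCOUNT;
END;
```

Short transactions like this give other sessions a chance to get in between batches instead of queueing behind one long insert.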
MVDBA
February 14, 2020 at 3:49 pm
You can do this with no chance of deadlocks nor even blocking AND the total "offline" time will be measured in milliseconds.
You're replacing the entire table each time. So... create two tables. For example, if your table was originally created as "SomeTable", drop that and create two tables... one called "SomeTableA" and another called "SomeTableB". Populate SomeTableA as you normally would and then create a SYNONYM called "SomeTable" (which was the name of the original table) and point it at the now-populated "SomeTableA".
When you need to update the table with new data, populate "SomeTableB". You don't need a transaction for this. If you do it right, it can even be a minimally logged evolution. Once you've verified that "SomeTableB" has been correctly populated, drop the "SomeTable" synonym and immediately rebuild it pointing at the most recent table, which is "SomeTableB". You can then truncate the now-unused "SomeTableA". Total downtime will be measured in milliseconds.
When it comes time for the next update, simply reverse the process. Populate the currently empty "SomeTableA", repoint the synonym as previously described, then truncate "SomeTableB".
Wash, rinse, repeat as time wears on.
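The steps above could be sketched as follows. The column list is illustrative only, and the source view name is borrowed from the original post:

```sql
-- One-time setup: two identical tables plus a synonym that all
-- callers use in place of the original table name.
CREATE TABLE dbo.SomeTableA (Id int NOT NULL, Payload varchar(100));  -- columns are assumptions
CREATE TABLE dbo.SomeTableB (Id int NOT NULL, Payload varchar(100));
CREATE SYNONYM dbo.SomeTable FOR dbo.SomeTableA;

-- Refresh cycle: load the idle table while users keep reading
-- dbo.SomeTable (currently pointing at SomeTableA).
TRUNCATE TABLE dbo.SomeTableB;
INSERT INTO dbo.SomeTableB WITH (TABLOCK)  -- TABLOCK helps enable minimal logging
SELECT Id, Payload
FROM dbo.viewLargeView;

-- The swap is the only "offline" moment, measured in milliseconds.
BEGIN TRANSACTION;
DROP SYNONYM dbo.SomeTable;
CREATE SYNONYM dbo.SomeTable FOR dbo.SomeTableB;
COMMIT TRANSACTION;

-- Next cycle: load SomeTableA, repoint the synonym back, truncate SomeTableB.
```

Because readers and writers only ever touch the synonym, they never block against the load, and the repoint itself is a near-instant metadata change.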
--Jeff Moden
Change is inevitable... Change for the better is not.