December 10, 2007 at 4:01 pm
Hi,
I wrote a procedure that takes a few parameter values and inserts them into a table (table1). Before inserting,
I check (with a SELECT statement) whether the given parameter value already exists in table1, and if not, the procedure inserts it.
When the procedure is called a second time with the same values, the check finds the row and no insert
happens. This works fine for a single instance, but it fails when multiple instances call the
procedure at the same time, each trying to insert the same value.
The errors I get are primary key violations and sometimes deadlocks.
To work around this, I used sp_getapplock.
My question is: is there a better approach to this problem?
Thanks in advance.
Sri.
December 10, 2007 at 4:10 pm
IF NOT EXISTS
(
    SELECT geolevel
    FROM table_geography
    WHERE geolevel = @geolevel
)
BEGIN
    INSERT INTO table_geography VALUES (@geolevel)
END
Thanks,
Sri
December 11, 2007 at 5:50 am
If you're getting primary key violations, something else is up. You may also need to post the structure of the table you're inserting into.
Is that the complete procedure? Are there any other procedures that update, insert, or delete data in the same table?
When you look at the deadlock errors, assuming you've got trace flag 1204 or 1222, which object specifically is causing the deadlock?
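If the trace flags aren't already on, something along these lines (run with sysadmin rights) will write the deadlock details to the error log; the names and numbers here are the standard ones, the -1 just applies the flag globally:
DBCC TRACEON (1222, -1);   -- detailed, XML-style deadlock report in the error log
-- or the older node/resource format:
-- DBCC TRACEON (1204, -1);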
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
December 11, 2007 at 7:01 am
Try:
INSERT INTO table_geography
SELECT @geolevel
WHERE NOT EXISTS
(
    SELECT *
    FROM table_geography G WITH (UPDLOCK)
    WHERE G.geolevel = @geolevel
)
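For context, a rough sketch of how that statement could sit inside the procedure; the procedure name, the column list, and the int data type are just assumptions based on the column mentioned earlier in the thread:
CREATE PROCEDURE dbo.usp_AddGeoLevel
    @geolevel int          -- assumed type; use whatever the real column uses
AS
BEGIN
    SET NOCOUNT ON;

    -- UPDLOCK asks for update locks during the existence check,
    -- which helps keep two concurrent callers from both passing the
    -- check and then attempting the same insert
    INSERT INTO table_geography (geolevel)
    SELECT @geolevel
    WHERE NOT EXISTS
    (
        SELECT *
        FROM table_geography G WITH (UPDLOCK)
        WHERE G.geolevel = @geolevel
    );
END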
December 11, 2007 at 7:53 am
I'm sorry for giving you very little information.
The actual process: I check whether @geolevel exists; if it doesn't, I call another procedure (proc1, which takes a couple of parameters along with @geolevel) that creates a table, creates a clustered index, and then inserts data.
In proc1, I query a few base tables to get information corresponding to the given @geolevel and then
build a dynamic SQL statement (to create the table), and so on.
Proc1 contains at least 100 lines of code.
December 11, 2007 at 9:15 am
SriSun (12/11/2007)
I'm sorry for giving you very little information. The actual process: I check whether @geolevel exists; if it doesn't, I call another procedure (proc1, which takes a couple of parameters along with @geolevel) that creates a table, creates a clustered index, and then inserts data.
In proc1, I query a few base tables to get information corresponding to the given @geolevel and then
build a dynamic SQL statement (to create the table), and so on.
Proc1 contains at least 100 lines of code.
It sounds like you're going to have to walk through your code and verify that everything is being accessed in the same order, meaning that reads, inserts, and updates occur in the same sequence from proc to proc, rather than one process reading then inserting while another inserts then reads, things like that. That will at least help mitigate the deadlocks. Then the bigger issue is going to be preventing two processes from escalating their locks against the same resource, even in the same order. You might want to do the read with an UPDLOCK hint.
I'm still concerned about the primary key violations. They don't have anything directly to do with deadlocks, but they're an indication that your process is flawed and attempting to create the same primary key value twice. Those flaws could also be leading to the deadlocks. I don't know enough to tell you exactly where to look, so you'll have to identify where and why you're getting the primary key violations.
It's hard to be any more precise than that. Sorry. I doubt this is very helpful.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
December 11, 2007 at 10:11 am
Thanks Grant Fritchey, that's helpful.
I don't get a primary key violation or a deadlock every time, but I'm sure to hit one of them when I run 10 instances at a time.
What I'd like your advice on is whether it's good practice to use sp_getapplock and release the lock when the work is done.
Right now I've implemented this approach (sp_getapplock) and my code works well, with no primary key violations and no deadlocks.
I'm handling the error codes using a TRY...CATCH block.
The reason I'm asking, even though my code works well with sp_getapplock, is that I couldn't find much help on sp_getapplock when searching the web.
Any suggestions would be greatly appreciated.
Sri.
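For reference, a rough sketch of the pattern I'm describing; the resource name and timeout are just examples:
DECLARE @result int;

-- serialize concurrent callers on an arbitrary resource name
EXEC @result = sp_getapplock
        @Resource    = 'table_geography_insert',   -- example name
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Session',
        @LockTimeout = 10000;                      -- milliseconds

IF @result >= 0   -- 0 or 1 means the lock was granted
BEGIN
    -- existence check and insert go here

    EXEC sp_releaseapplock
            @Resource  = 'table_geography_insert',
            @LockOwner = 'Session';
END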
December 11, 2007 at 10:55 am
What you've done with sp_getapplock is completely lock down all the resources used by all the transactions within a given connection. It's a very harsh method for eliminating deadlocks. I suspect you'll see scalability issues over time.
It might be working, but it's still pointing out something not quite correct within the rest of your code.
I'd actually never even heard of that procedure before you mentioned it. I just read up on it in the Books Online.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
December 11, 2007 at 11:03 am
sp_GetAppLock basically allows you to use SQL Server's locking space to serialize access to database objects. I think it has its uses, but understanding SQL Server's default locking architecture when writing code takes care of most concurrency problems. Grant is correct: totally serializing your work is a bit harsh, and at this point it's nothing more than a band-aid. Even though it has helped you here, I would not get into the habit of serializing your work, because it will slow down your system and ultimately the users' experience.
December 27, 2007 at 9:13 am
Hi,
I used UPDLOCK and it helped get rid of the deadlocks. Can anybody tell me whether this is a good way to use UPDLOCK (since it effectively acts as a serialization queue for future processes)?
Thanks,
Sri.
December 27, 2007 at 10:55 am
You haven't mentioned whether or not you're using an explicit transaction around all of those inserts and selects... that would be a real big cause of the deadlocks.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 27, 2007 at 12:32 pm
I am not using any explicit transaction, so I'm bound to the default READ COMMITTED isolation level.
Thanks,
Sri.
December 27, 2007 at 2:55 pm
SriSun (12/11/2007)
Thanks Grant Fritchey, that's helpful. I don't get a primary key violation or a deadlock every time, but I'm sure to hit one of them when I run 10 instances at a time.
What I'd like your advice on is whether it's good practice to use sp_getapplock and release the lock when the work is done.
Right now I've implemented this approach (sp_getapplock) and my code works well, with no primary key violations and no deadlocks.
I'm handling the error codes using a TRY...CATCH block.
The reason I'm asking, even though my code works well with sp_getapplock, is that I couldn't find much help on sp_getapplock when searching the web.
Any suggestions would be greatly appreciated.
Sri.
Just use what was posted already by Ken McKelvey. It works!
* Noel
December 28, 2007 at 7:45 am
1) See this blog (http://blogs.msdn.com/bartd/archive/2006/09/09/Deadlock-Troubleshooting_2C00_-Part-1.aspx) and its follow-ons for an example of how a SELECT and indexing can lead to deadlocks you wouldn't expect, plus very good advice on how to troubleshoot deadlocks.
2) One possibility is to trap the PK violation and skip the insert when it happens (a rough sketch follows below). Paul Nielsen recently used that as the most performant way to do seriously high-volume (35K tps) UPSERTs.
3) Ken's WHERE NOT EXISTS should work to avoid the PK violations, but I am not certain it will avoid the deadlocks in all cases without appropriate indexing.
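A minimal sketch of option 2, assuming SQL Server 2005's TRY...CATCH and the table/column names used earlier in the thread:
BEGIN TRY
    INSERT INTO table_geography (geolevel)
    VALUES (@geolevel);
END TRY
BEGIN CATCH
    -- 2627 = violation of a PRIMARY KEY or UNIQUE constraint,
    -- i.e. another session inserted the value first, so ignore it
    IF ERROR_NUMBER() <> 2627
    BEGIN
        DECLARE @msg nvarchar(2048), @sev int;
        SELECT @msg = ERROR_MESSAGE(), @sev = ERROR_SEVERITY();
        RAISERROR (@msg, @sev, 1);   -- re-raise anything unexpected
    END
END CATCH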
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service