January 29, 2009 at 6:55 am
Inherited some code ... and an issue....
table: SessionData (sessionid varchar(50), somedata text)
unique index on sessionid
proc: SessionUpdate

CREATE PROCEDURE dbo.SessionUpdate
    @sessionid varchar(50),
    @somedata text
AS
BEGIN
    DECLARE @sessionexists bit
    SET @sessionexists = 0

    -- Try to update an existing row; @sessionexists is only set if a row matched
    UPDATE SessionData
    SET somedata = @somedata,
        @sessionexists = 1
    WHERE sessionid = @sessionid

    -- No row was updated, so add a new one
    INSERT INTO SessionData (sessionid, somedata)
    SELECT @sessionid, @somedata
    WHERE @sessionexists = 0
END
So essentially the proc updates an existing session data record OR adds a new one, using the local variable @sessionexists to control the logic - an UPSERT procedure, as it's known.
All was going well in testing, and then it went live...
Error: 2601, Severity: 14, State: 1
Cannot insert duplicate key row in object 'dbo.SessionData' with unique index 'idx_sessiondata_sessionid'.
The statement has been terminated.
Using Profiler, it became clear that the calling application was calling the procedure in very quick succession, i.e.
RPC Starting : exec dbo.SessionUpdate 1, 'ahugeamountofdata'
RPC Starting : exec dbo.SessionUpdate 1, 'ahugeamountofdata'
RPC Completed : exec dbo.SessionUpdate 1, 'ahugeamountofdata'
Exception : Error: 2601, Severity: 14, State: 1
User Error Message : Cannot insert duplicate key row in object 'dbo.SessionData' with unique index 'idx_sessiondata_sessionid'.
User Error Message : The statement has been terminated
RPC Completed : exec dbo.SessionUpdate 1, 'ahugeamountofdata'
where 'ahugeamountofdata' was actually a large chunk of data, and 'very quick succession' means no difference in the time stamps in Profiler.
So it seemed obvious: the multi-threaded app was calling the proc to update the session data, but before the initial insert could finish, a second call had started to update the session data. Talking to the devs revealed that the app was merely reading some data out of memory between the two calls, so it's no surprise that this was much faster than writing a huge amount of BLOB data to the database over the network!!
Under normal conditions and the READ COMMITTED isolation level, the second call has no record to update and hence follows the logic to perform the insert - BANG!! the unique index is violated.
The solution proposed was to wrap the code in a transaction and use locking hints to pump the isolation level to SERIALIZABLE, i.e.
proc: SessionUpdate (with SERIALIZABLE hints)

CREATE PROCEDURE dbo.SessionUpdate
    @sessionid varchar(50),
    @somedata text
AS
BEGIN
    DECLARE @sessionexists bit
    SET @sessionexists = 0

    BEGIN TRAN

    -- The SERIALIZABLE hint holds a key-range lock until COMMIT,
    -- blocking a competing insert for the same key between the two statements
    UPDATE SessionData WITH (SERIALIZABLE)
    SET somedata = @somedata,
        @sessionexists = 1
    WHERE sessionid = @sessionid

    INSERT INTO SessionData WITH (SERIALIZABLE) (sessionid, somedata)
    SELECT @sessionid, @somedata
    WHERE @sessionexists = 0

    COMMIT
END
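For reference, this pattern is commonly written with UPDLOCK alongside SERIALIZABLE, so that two concurrent calls for the same missing sessionid don't both take shared key-range locks on the update and then deadlock when they each try to insert. A minimal sketch of that variant, assuming the same table and parameters as above:

-- Sketch only: UPDLOCK means only one caller at a time can hold the key-range lock,
-- and SERIALIZABLE holds it until COMMIT so no competing insert can slip in between
BEGIN TRAN

UPDATE SessionData WITH (UPDLOCK, SERIALIZABLE)
SET somedata = @somedata,
    @sessionexists = 1
WHERE sessionid = @sessionid

INSERT INTO SessionData (sessionid, somedata)
SELECT @sessionid, @somedata
WHERE @sessionexists = 0

COMMIT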
My only concern now is the effect that this has on the concurrency and ultimately the performance of the app.
Any suggestions/comments?
Kev
January 30, 2009 at 4:46 am
Simply reverse the order: first insert, then update:
declare @inserted bit

begin try
    insert into SessionData (sessionid, somedata)
    values (@sessionid, @somedata);
    set @inserted = 1;
end try
begin catch
    -- the insert failed (most likely a duplicate key), so fall back to the update
    set @inserted = 0;
end catch

if @inserted = 0
    update SessionData
    set somedata = @somedata
    where sessionid = @sessionid;
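If you only want duplicate-key errors to fall through to the update, the CATCH block could check ERROR_NUMBER() and re-raise anything else - a sketch of that refinement (2601 is a duplicate key in a unique index, 2627 in a unique constraint):

begin catch
    if ERROR_NUMBER() in (2601, 2627)
        -- duplicate key: the row already exists, so fall through to the update
        set @inserted = 0;
    else
    begin
        -- any other failure: re-raise it instead of silently updating
        declare @errmsg nvarchar(2048);
        set @errmsg = ERROR_MESSAGE();
        raiserror(@errmsg, 16, 1);
        return;
    end
end catch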
January 30, 2009 at 5:02 am
As serializing will block inserts for other sessionIds, I would be inclined to use an application lock.
(If the majority of your upserts are updates, I suspect relying on a CATCH block could be slow. You may want to test this.)
I would suggest you try something like:
DECLARE @Resource nvarchar(255)
SET @Resource = N'LockSessionId' + @sessionId
BEGIN TRANSACTION
    EXEC sp_getapplock @Resource, N'Exclusive'
    UPDATE SessionData
    SET someData = @someData
    WHERE sessionId = @sessionId
    
    IF @@ROWCOUNT = 0
        INSERT INTO SessionData(sessionId, someData)
        VALUES(@sessionId, @someData)
COMMIT
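One extra thought on the app lock approach (just a sketch, using the same procedure body as above): sp_getapplock returns a status code - 0 or 1 means the lock was granted, negative values mean a timeout, deadlock, etc. - so it may be worth checking before doing the write:

DECLARE @Resource nvarchar(255), @lockResult int
SET @Resource = N'LockSessionId' + @sessionId

BEGIN TRANSACTION
    EXEC @lockResult = sp_getapplock @Resource, N'Exclusive'

    -- 0 = granted, 1 = granted after waiting, < 0 = not granted
    IF @lockResult < 0
    BEGIN
        ROLLBACK
        RAISERROR('Could not acquire the application lock for this sessionId', 16, 1)
        RETURN
    END

    UPDATE SessionData
    SET someData = @someData
    WHERE sessionId = @sessionId

    IF @@ROWCOUNT = 0
        INSERT INTO SessionData(sessionId, someData)
        VALUES(@sessionId, @someData)
COMMIT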
January 30, 2009 at 5:57 am
Thanks for the responses.
Robert - I'm not sure how this guarantees avoiding the violation.
Ken - I hadn't thought about application locks - sounds good! I just need to do a little research - are there any pros/cons with app locks?
I'll give both of these a try in testing and report back!
Kev
January 30, 2009 at 6:36 am
It IMHO does.
First you try the insert, and if you get a violation, you catch the exception and try the update.
January 30, 2009 at 9:23 am
Interesting.....
Robert - it did seem to work OK, and I wasn't getting the error in SSMS in my test environment. Strangely though, the error was still raised when the code was traced using Profiler! I guess that is by default - the error still happens, it's just that the code handles it.
I'm not sure how comfortable I am raising hundreds of 'hidden' errors - apart from that it's a good solution!
I need to keep testing with Ken's ideas - I'll have to compare the performance.
Kev
January 30, 2009 at 10:41 am
Ken's solution should work just fine, probably even better, as it's unlikely to raise an exception.
However, sometimes it's not desirable for a stored procedure to handle the transaction.
I'm interested in a performance comparison under a heavy multi-threaded test.
February 4, 2009 at 4:54 am
Robert - App Locks can be made to apply to the current session and not just the current transaction. It is then up to you to explicitly release the lock with sp_releaseapplock.
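A minimal sketch of that, with an illustrative resource name - a Session-owned lock survives the end of the transaction and has to be released explicitly:

-- Take the lock with a Session owner rather than the default Transaction owner
EXEC sp_getapplock @Resource = N'LockSessionId1', @LockMode = N'Exclusive', @LockOwner = N'Session'

-- ... work spanning one or more transactions ...

-- The lock is held until it is released explicitly (or the connection closes)
EXEC sp_releaseapplock @Resource = N'LockSessionId1', @LockOwner = N'Session'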
Kev - I would also be interested in the performance of a heavy multi-threaded test. Have you had a chance to do this yet?
February 4, 2009 at 5:33 am
Ken McKelvey (2/4/2009)
Robert - App Locks can be made to apply to the current session and not just the current transaction. It is then up to you to explicitly release the lock with sp_releaseapplock.
Kev - I would also be interested in the performance of a heavy multi-threaded test. Have you had a chance to do this yet?
Why do you need to explicitly release the locks?
No sorry - haven't got round to testing this yet - snowed under with a multitude of other things! As usual.
Kev
February 4, 2009 at 5:38 am
You only need to explicitly release the lock if you set it as a SESSION lock, as otherwise it will last for the rest of the duration of the connection.
Thanks for the feedback.