February 11, 2008 at 6:56 am
Do I need to use SET TRANSACTION ISOLATION LEVEL SERIALIZABLE in stored procedures where queries typically depend on existing data? I always assumed the query plan would be smart enough to lock the rows it needs, or thinks it might need, to ensure the integrity of the transaction. e.g.:
SELECT something FROM MyTable
... bit of TSQL logic
SELECT somethingelse FROM MyTable
... bit of TSQL logic
INSERT INTO MyTable ( bla ) VALUES ( something, somethingelse, somelogic, somemorelogic)
Here I am inserting into MyTable the results of previous queries. Presumably if I don't set the isolation level correctly, another user could modify the data in MyTable representing "something" and "somethingelse", resulting in a potential loss of integrity in my database (or at least an exception back to the user when the insert goes ahead).
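For reference, a minimal sketch of what that would look like with an explicit isolation level, using the same placeholder names as above (this is pseudocode, not a runnable procedure):

```sql
-- Sketch only: placeholder names from the example above.
-- SERIALIZABLE takes range locks on the rows read and holds them to
-- commit, so another session cannot change or insert matching rows
-- between the SELECTs and the INSERT.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

SELECT something FROM MyTable;
-- ... bit of TSQL logic
SELECT somethingelse FROM MyTable;
-- ... bit of TSQL logic
INSERT INTO MyTable ( bla )
VALUES ( something, somethingelse, somelogic, somemorelogic );

COMMIT TRANSACTION;
```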
Any general rules on using it then?
Thanks.
February 12, 2008 at 11:32 am
By using "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE" in your stored procedure, you just about guarantee the death of performance. I recently came across this bottleneck in a database that will need to process an insane number of transactions per second in the very near future. Under heavy load, excessive lock waits dragged performance to a screeching halt, and deadlocks became a serious issue to resolve.
The solution I came up with after a lot of experimentation was to implement narrow unique key constraints on the tables and to use TRY/CATCH blocks to handle any violations of those constraints.
The [MYTABLE] table has an IDENTITY column of datatype BIGINT, used in the foreign key relationship to the child table, [OTHERTABLE]. [MYTABLE] has a unique key defined on a narrow set of columns: ([Value1] ASC, [Value2] ASC, [Value3] ASC). [OTHERTABLE] has a unique key defined on ([ParentID] ASC, [SomeValue] ASC). I do not wrap this in a transaction; I let the unique key throw an exception in the event that a duplicate row would be inserted.
-- [cut]
DECLARE @ID BIGINT;

BEGIN TRY
    INSERT INTO [dbo].[MYTABLE] (
        [Value1],
        [Value2],
        [Value3] )
    SELECT
        @Value1,
        @Value2,
        @Value3;
    SELECT @ID = SCOPE_IDENTITY();
END TRY
BEGIN CATCH
    -- Duplicate key: look up the existing row's ID instead.
    SELECT @ID = [ID] FROM [dbo].[MYTABLE] WITH (NOLOCK)
    WHERE [Value1] = @Value1
      AND [Value2] = @Value2
      AND [Value3] = @Value3;
END CATCH

IF @ID IS NOT NULL
BEGIN
    BEGIN TRY
        INSERT INTO [dbo].[OTHERTABLE] (
            [PARENTID],
            [SomeValue],
            [SomeMessage] )
        SELECT
            @ID,
            @SomeValue,
            @SomeMessage;
    END TRY
    BEGIN CATCH
        -- Ignore duplicate, or insert some logic here.
    END CATCH
END
RETURN
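For completeness, the unique keys described above could be declared something like this. The column names are the ones given in the post; the constraint names are made up for illustration:

```sql
-- Sketch: constraint names (UQ_MyTable_Values, UQ_OtherTable_ParentValue)
-- are illustrative, not from the original post.
ALTER TABLE [dbo].[MYTABLE]
    ADD CONSTRAINT UQ_MyTable_Values
    UNIQUE ([Value1] ASC, [Value2] ASC, [Value3] ASC);

ALTER TABLE [dbo].[OTHERTABLE]
    ADD CONSTRAINT UQ_OtherTable_ParentValue
    UNIQUE ([ParentID] ASC, [SomeValue] ASC);
```

A duplicate insert then raises error 2627 (or 2601 for a unique index), which is what the CATCH blocks above rely on.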
After this, always check your Query Execution Plan to see if your indexes are giving you the maximum benefit.
---------
Hope this helps,
-Mike Gercevich
February 12, 2008 at 12:17 pm
Why not do it in one statement, so that it will execute in a single transaction?
insert into MyTable
( something, somethingelse, ...)
select
a.something,
b.somethingelse,
...
from
MyTable a,
MyTable b,
...
where
a.criteria = ... and
b.criteria = ... and so on