When Would You Use the SERIALIZABLE Transaction Isolation Level

  • I've just spent the last few weeks repeatedly smacking my head against the wall, caught between my developers (who are absolutely certain that they need WITH(NOLOCK) in all of their queries) and my fellow DBAs arguing over the merits and demerits of READ_COMMITTED_SNAPSHOT isolation.

    Finally, somebody said, "I'm sick of Transactions. I just want to write the data."

    It occurred to me that you could do exactly that (forget about transactions) by setting the transaction isolation level to SERIALIZABLE. (Sometimes I just amaze myself with my own brilliance! :hehe:)

    Which brings me to my question: when would you use SERIALIZABLE transaction isolation? Has anyone ever tried it in production? Enquiring minds want to know! 🙂

  • When I need to read from one table and write to another, but can't do it in a single query.

    I've had to do that in a couple of complex queries involving SSIS and cross-server (and cross-location, for that matter) transactions.

    One outputs multiple files from different tables using SSIS, and the data needs to remain constant across the multiple exports so that referential integrity is maintained across the files.

    A simpler solution for that would be to output a list of PK values for all the related tables, and use that in separate queries. I don't remember why I didn't do it that way, but it didn't handle quite everything I needed.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • David Moutray (12/29/2011)


    It occurred to me that you could do exactly that (forget about transactions) by setting the transaction isolation level to SERIALIZABLE. (Sometimes I just amaze myself with my own brilliance! :hehe:)

    Errr, not really.

    Serialisable is for when there absolutely must not be any effects of one transaction seen by another for any reason, in any way. So no dirty reads, no non-repeatable reads, no phantom rows. What one transaction does must be completely unreadable (ie locked) until that transaction commits or rolls back.

    It's for when you don't mind long-lasting wide locks held by any operation until the end of the transaction, and when you have sufficient error handling and retry logic for the deadlocks that will very likely occur.

    It's the most restrictive isolation level with the highest impact on concurrency.
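
    To illustrate (the table and column names here are invented), reading a range under serializable takes key-range locks that block other sessions from inserting rows into that range:

    -- Session 1: read a range under serializable
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    BEGIN TRANSACTION

    SELECT * FROM dbo.Orders WHERE CustomerID = 42
    -- a second session's INSERT of a CustomerID = 42 row now waits
    -- until this transaction commits or rolls back (no phantoms)

    COMMIT

    And because those broad locks collide so readily, the retry logic for deadlock victims (error 1205) might look something like this minimal sketch:

    DECLARE @retries INT
    SET @retries = 3

    WHILE @retries > 0
    BEGIN
        BEGIN TRY
            SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
            BEGIN TRANSACTION
            -- ... the actual work goes here ...
            COMMIT
            BREAK -- success, stop retrying
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0 ROLLBACK

            IF ERROR_NUMBER() = 1205 AND @retries > 1
                SET @retries = @retries - 1 -- deadlock victim: try again
            ELSE
            BEGIN
                -- not a deadlock, or out of retries: report and give up
                RAISERROR('Transaction failed; see the original error.', 16, 1)
                BREAK
            END
        END CATCH
    END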

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (12/29/2011)


    David Moutray (12/29/2011)


    It occurred to me that you could do exactly that (forget about transactions) by setting the transaction isolation level to SERIALIZABLE. (Sometimes I just amaze myself with my own brilliance! :hehe:)

    Errr, not really.

    Serialisable is for when there absolutely must not be any effects of one transaction seen by another for any reason, in any way. So no dirty reads, no non-repeatable reads, no phantom rows. What one transaction does must be completely unreadable (ie locked) until that transaction commits or rolls back.

    It's for when you don't mind long-lasting wide locks held by any operation until the end of the transaction, and when you have sufficient error handling and retry logic for the deadlocks that will very likely occur.

    It's the most restrictive isolation level with the highest impact on concurrency.

    On his other thread, I just finished suggesting (not seriously) that he require all the devs to replace NoLock with TablockX in all their queries. Imagine all the fun!

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • GilaMonster (12/29/2011)


    It's for when you don't mind long-lasting wide locks held by any operation until the end of the transaction, and when you have sufficient error handling and retry logic for the deadlocks that will very likely occur.

    It's the most restrictive isolation level with the highest impact on concurrency.

    Well, it seems to me (in my role as Devil's Advocate) that it would require you to fix transaction locking issues arising from crappy code.

    (Am I allowed to use the word "crappy" on this site? :ermm:)

    Seriously, if I have the opportunity to try it on a new system, I might put it in and forget to tell the developers (aka "the enemy") about it. :crazy:

  • David Moutray (12/29/2011)


    Well, it seems to me (in my role as Devil's Advocate) that it would require you to fix transaction locking issues arising from crappy code.

    Probably more than that. Even in a well-written system I'd only use serialisable in specific places where needed, not everywhere.

    Seriously, if I have the opportunity to try it on a new system, I might put it in and forget to tell the developers (aka "the enemy") about it. :crazy:

    The only problem with that is that you can't set it as a default; it has to be explicitly specified in every session:

    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
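
    If you want to verify what level a session actually ended up with, one quick check (SQL Server 2005 and later) is the sessions DMV:

    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

    SELECT transaction_isolation_level -- 4 = serializable
    FROM sys.dm_exec_sessions
    WHERE session_id = @@SPID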

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GSquared (12/29/2011)


    On his other thread, I just finished suggesting (not seriously) that he require all the devs to replace NoLock with TablockX in all their queries. Imagine all the fun!

    Well, it would increase the duration of tea breaks and the calls to the devs about bad code. Just for that alone it could be massive fun.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • David Moutray (12/29/2011)


    Seriously, if I have the opportunity to try it on a new system, I might put it in and forget to tell the developers (aka "the enemy") about it. :crazy:

    Well... even if you manage to set it that way, the whole thing will explode in your hands; as the DBA, you are responsible for performance, remember? 🙂

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
  • I have used the serializable isolation level. I think of it as locking a key range.

    There is no MERGE in SS 2005, which I was constrained to until recently, so I used it as part of "upsert" statements. The standard upsert is to update then check the row count and insert if zero. But another developer had written hundreds of upsert procedures by getting a count, inserting if zero and updating if not. Those procedures didn't use explicit transactions. To fix them, I didn't want to spend the time to turn the logic around--I just added transactions and isolation.
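
    (For reference, the update-first version of the pattern is roughly the sketch below; note that without a range lock it has the mirror-image race, where two sessions can both update zero rows and then both try to insert.)

    -- update-first upsert: UPDATE, then INSERT only if nothing matched
    UPDATE targetTable SET someCol = @otherParams WHERE keyCol = @param

    IF @@ROWCOUNT = 0
        INSERT targetTable VALUES (@param, @otherParams)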

    Generalized example:

    DECLARE @rowcnt INT

    -- added these two lines; XACT_ABORT causes rollback on any error
    SET XACT_ABORT ON
    BEGIN TRANSACTION

    -- added lock hint: locks the key (range) to be inserted or updated
    SELECT @rowcnt = COUNT(*)
    FROM targetTable WITH (SERIALIZABLE)
    WHERE keyCol = @param

    IF @rowcnt = 0
        INSERT targetTable VALUES (@param, @otherParams)
    ELSE
        UPDATE targetTable SET someCol = @otherParams
        WHERE keyCol = @param

    -- added commit
    COMMIT

    Without the serializable lock, it's possible for another transaction to insert a row with the target key between the time that the count returns zero and before the row is actually inserted. That would cause this procedure to fail with a primary key violation, which is incorrect, because if the row existed, it should have been updated.

    Serializable is the only isolation level that can lock non-existent rows by locking a key range, AFAIK. There are other, more complicated scenarios in which I use it to protect a key range that I may be inserting into after I've completed some other processing.
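
    On SQL Server 2008 and later, MERGE can express the same upsert in a single statement, though as I understand it, it still needs the same serializable range lock (HOLDLOCK) on the target to be truly race-free. A sketch, reusing the invented names from the example above:

    MERGE targetTable WITH (HOLDLOCK) AS t
    USING (SELECT @param AS keyCol, @otherParams AS someCol) AS s
        ON t.keyCol = s.keyCol
    WHEN MATCHED THEN
        UPDATE SET someCol = s.someCol
    WHEN NOT MATCHED THEN
        INSERT (keyCol, someCol) VALUES (s.keyCol, s.someCol);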

  • Ah, yes, I am responsible for performance, and that is just my frustration. These so-called developers are allowed to implement code which "works" on their test system with 16 records and two or three users. They get their bonus checks and go home.

    The code is deployed into production, and it "works" until it starts seeing some serious use. Then I, the database administrator, get called and asked what is wrong with "my" system! :crazy:

    I'd just like to push the problem back a bit. I'd like to force the developers to really write workable code. 🙂

    Genius, no?

  • Stephanie Giovannini (12/29/2011)


    I have used serializable isolation level. I think of it as locking a key range.

    Now that is a very interesting (serious) use case. I will have to keep that in mind.

  • David Moutray (12/29/2011)


    Ah, yes, I am responsible for performance, and that is just my frustration. These so-called developers are allowed to implement code which "works" on their test system with 16 records and two or three users. They get their bonus checks and go home.

    The code is deployed into production, and it "works" until it starts seeing some serious use. Then I, the database administrator, get called and asked what is wrong with "my" system! :crazy:

    I'd just like to push the problem back a bit. I'd like to force the developers to really write workable code. 🙂

    Genius, no?

    All you have to do is deploy (and refresh as needed) a production-size QA (Quality Assurance) clone of your production database, and get the head of IT to sign off on "Production Deployment Guidelines" that include testing and business approval on that QA database.

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
  • If I may suggest....

    This is not a technical problem and should not be approached as one, and approaching it as a battle is going to make matters worse, not better.

    Perhaps a discussion with the devs' manager: address your concern about the vast difference between the test environment (those 16 rows) and production, and how that might be making the devs' work harder (take the position that you're attempting to help them at your expense). See if you can get approval to create representative data volumes (may I suggest Red Gate's SQL Data Generator if necessary).

    If that doesn't work, have a discussion with your manager. Get some stats on how often new code is a performance problem and push for load testing prior to implementation, to improve the quality of the code and reduce the problems that the users have (not to reduce your work). Also try to push for involving the devs in performance problems, on the grounds that it's their code, so they know it far better than you do.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Well, I'd also have to simulate a production load on the QA and DEV systems. I'd have to get the head of IT to do ... well, several things. Sigh. Why does everything require cooperation?! :crazy:

    All I want to do is impose my will on AppDev. Is that so much to ask?

  • GilaMonster (12/29/2011)


    If I may suggest....

    This is not a technical problem and should not be approached as one, and approaching it as a battle is going to make matters worse, not better.

    Perhaps a discussion with the devs' manager: address your concern about the vast difference between the test environment (those 16 rows) and production, and how that might be making the devs' work harder (take the position that you're attempting to help them at your expense). See if you can get approval to create representative data volumes (may I suggest Red Gate's SQL Data Generator if necessary).

    If that doesn't work, have a discussion with your manager. Get some stats on how often new code is a performance problem and push for load testing prior to implementation, to improve the quality of the code and reduce the problems that the users have (not to reduce your work). Also try to push for involving the devs in performance problems, on the grounds that it's their code, so they know it far better than you do.

    As always, you are ever the voice of reason, Gail.
