Blocking on tempdb system tables

  • ken.yannitell (9/14/2010)


    Use this statement at the top of your query:

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

    This will make every table lookup behave the same as putting the WITH (NOLOCK) hint on each join.

    The only drawback is that it allows dirty reads on the tables you are referencing, so if your data is highly time-sensitive, this may or may not work for you.

    This is all true, but unfortunately doesn't solve the issue of a temporary table being created in a long-running explicit transaction.
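
    For illustration, a minimal sketch of the two equivalent forms described in the quote above (table and column names are made up):

    -- Option 1: one statement at the top covers every read in the batch
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT o.OrderID, c.CustomerName
    FROM dbo.Orders AS o
    JOIN dbo.Customers AS c
      ON c.CustomerID = o.CustomerID;

    -- Option 2: hint each table individually, same effect per table
    SELECT o.OrderID, c.CustomerName
    FROM dbo.Orders AS o WITH (NOLOCK)
    JOIN dbo.Customers AS c WITH (NOLOCK)
      ON c.CustomerID = o.CustomerID;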

  • Yes, this would remove the blocking. I tried querying sys.partitions WITH (NOLOCK) myself and it works, but both the blocked and blocking queries come from the SCOM suite, so I can't edit them. I've opened a Microsoft case for SCOM.

    Are the locks on system tables by design?

  • It's the combination of that and creating it inside a transaction that's causing the issue here. SQL Server needs to hold the locks on those system tables in case you decide to roll back the transaction, so no other session can read or write them while the transaction is open. If the code ran outside a transaction, there would be no issue. If the blocked procedure ran under READ UNCOMMITTED, there would be no issue (besides dirty data). But put all of these together, and you get an awesome blockfest.
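
    A minimal two-session sketch of that blockfest (the temp table name is made up; run each block in a separate connection):

    -- Session 1: create a temp table inside an explicit transaction
    -- and leave the transaction open
    BEGIN TRANSACTION;
    CREATE TABLE #Work (ID int NOT NULL);
    -- no COMMIT/ROLLBACK yet

    -- Session 2: under the default READ COMMITTED level, this is
    -- blocked by session 1's locks on the tempdb catalog rows
    SELECT COUNT(*) FROM tempdb.sys.partitions;

    -- Session 2 with READ UNCOMMITTED: returns immediately,
    -- at the cost of a dirty read
    SELECT COUNT(*) FROM tempdb.sys.partitions WITH (NOLOCK);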

  • ken.yannitell (9/14/2010)


    Use this statement at the top of your query:

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

    This will make every table lookup behave the same as putting the WITH (NOLOCK) hint on each join.

    The only drawback is that it allows dirty reads on the tables you are referencing, so if your data is highly time-sensitive, this may or may not work for you.

    No... Dirty Reads are NOT the only drawback. You can also get duplicate reads from this, because you can read the data before a state change and then again after it's been updated. At best, messing with the transaction isolation level is a patch, not a fix. The best thing to do is to fix the problem correctly, and that means fixing the code.
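
    A sketch of how that double read can occur (table and key values are made up, and the outcome depends on timing and physical layout, so treat this as illustrative rather than a deterministic repro):

    -- Session 1: move a row within the clustered index by updating
    -- its key while a scan is in flight
    UPDATE dbo.Accounts
    SET AccountID = AccountID + 1000000
    WHERE AccountID = 1;

    -- Session 2: a concurrent NOLOCK scan can meet the same row at
    -- its old position and again at its new one (or miss it entirely)
    SELECT COUNT(*)
    FROM dbo.Accounts WITH (NOLOCK);
    -- the count can come back higher or lower than the true row count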

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Derrick Smith (9/14/2010)


    It's the combination of that and creating it inside a transaction that's causing the issue here. SQL Server needs to hold the locks on those system tables in case you decide to roll back the transaction, so no other session can read or write them while the transaction is open. If the code ran outside a transaction, there would be no issue. If the blocked procedure ran under READ UNCOMMITTED, there would be no issue (besides dirty data). But put all of these together, and you get an awesome blockfest.

    No... Dirty Reads are NOT the only issue with read uncommitted. Please read my previous post.

    --Jeff Moden



  • Jeff Moden (9/14/2010)


    Derrick Smith (9/14/2010)


    It's the combination of that and creating it inside a transaction that's causing the issue here. SQL Server needs to hold the locks on those system tables in case you decide to roll back the transaction, so no other session can read or write them while the transaction is open. If the code ran outside a transaction, there would be no issue. If the blocked procedure ran under READ UNCOMMITTED, there would be no issue (besides dirty data). But put all of these together, and you get an awesome blockfest.

    No... Dirty Reads are NOT the only issue with read uncommitted. Please read my previous post.

    Maybe my terminology is wrong, but I have always considered "dirty data" to be anything that is not 100% accurate, complete, and 1:1 with expected results. That's how the Oracle guys I trained with explained it to me way back in the day, anyway.

  • Derrick Smith (9/15/2010)


    Jeff Moden (9/14/2010)


    Derrick Smith (9/14/2010)


    It's the combination of that and creating it inside a transaction that's causing the issue here. SQL Server needs to hold the locks on those system tables in case you decide to roll back the transaction, so no other session can read or write them while the transaction is open. If the code ran outside a transaction, there would be no issue. If the blocked procedure ran under READ UNCOMMITTED, there would be no issue (besides dirty data). But put all of these together, and you get an awesome blockfest.

    No... Dirty Reads are NOT the only issue with read uncommitted. Please read my previous post.

    Maybe my terminology is wrong, but I have always considered "dirty data" to be anything that is not 100% accurate, complete, and 1:1 with expected results. That's how the Oracle guys I trained with explained it to me way back in the day, anyway.

    Understood. Your Oracle guys are pretty much spot on...

    The problem in SQL Server is that it IS possible to read such "dirty data" twice... once before it's updated and once after it's updated. That's why I said that "Dirty Reads are not the only issue with read uncommitted". The other issue is the possibility (a high probability in a very active system) that duplicates will be read.

    --Jeff Moden



  • Jeff Moden wrote:

    Derrick Smith (9/15/2010)


    Jeff Moden (9/14/2010)


    Derrick Smith (9/14/2010)


    It's the combination of that and creating it inside a transaction that's causing the issue here. SQL Server needs to hold the locks on those system tables in case you decide to roll back the transaction, so no other session can read or write them while the transaction is open. If the code ran outside a transaction, there would be no issue. If the blocked procedure ran under READ UNCOMMITTED, there would be no issue (besides dirty data). But put all of these together, and you get an awesome blockfest.

    No... Dirty Reads are NOT the only issue with read uncommitted. Please read my previous post.

    Maybe my terminology is wrong, but I have always considered "dirty data" to be anything that is not 100% accurate, complete, and 1:1 with expected results. That's how the Oracle guys I trained with explained it to me way back in the day, anyway.

    Understood. Your Oracle guys are pretty much spot on... The problem in SQL Server is that it IS possible to read such "dirty data" twice... once before it's updated and once after it's updated. That's why I said that "Dirty Reads are not the only issue with read uncommitted". The other issue is the possibility (a high probability in a very active system) that duplicates will be read.

    The notion that the dup-read issue is unique to NOLOCK is a misunderstanding. Per Paul White (emphasis added):

    https://sqlperformance.com/2015/04/t-sql-queries/the-read-uncommitted-isolation-level

    "So that we start with the correct context, here is a list of the main concurrency effects that can be experienced under the SQL Server default locking read committed isolation level:

    • Missing committed rows
    • Rows encountered multiple times
    • Different versions of the same row encountered in a single statement/query plan
    • Committed column data from different points in time in the same row (example)

    These concurrency effects are all due to the locking implementation of read committed only taking very short-term shared locks when reading data. The read uncommitted isolation level goes one step further, by not taking shared locks at all, resulting in the additional possibility of 'dirty reads.'"

    The part unique to NOLOCK is:

    "Missing committed rows or encountering them multiple times due to an allocation-ordered scan over changing data is specific to using read uncommitted isolation."

    You can prevent any allocation-ordered scans by setting the server's 'cursor threshold' configuration option. If you (almost) never use cursors in production code, you might consider using that to reduce the possible problems with NOLOCK.
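
    For reference, a sketch of that server-level change ('cursor threshold' is an advanced option, so it has to be exposed first; test the impact on any cursor workloads before changing it in production):

    -- expose advanced options
    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- -1 (the default) generates all cursor keysets synchronously;
    -- a positive value allows asynchronous population once the
    -- estimated row count exceeds that threshold
    EXEC sys.sp_configure 'cursor threshold', -1;
    RECONFIGURE;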

    SQL DBA, SQL Server MVP (07, 08, 09)
    "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers".
    Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies."
