Small Transactional Table with Frequent Updates

  • Hi All

    We have a 12-row table on our platform that gets updated for every transaction and stores the Identity values used by each transaction. Every time a new transaction is received, the appropriate transaction row is incremented by 1. The trouble we're seeing more and more lately is contention on that table during busy periods, which slows down all transactions, since they all come through this table.

    Table definition is as follows:

    CREATE TABLE [dbo].[Table](
        [Column1] [varchar](12) NOT NULL,
        [Column2] [varchar](4) NOT NULL,
        [Identity] [bigint] NOT NULL,
        CONSTRAINT [PK_Table1] PRIMARY KEY CLUSTERED
        (
            [Column1] ASC,
            [Column2] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO

    The rows are updated based on column1 and column2 being specified in the where clause of the update.
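
    For illustration, each update is roughly along these lines (the parameter names here are just placeholders):

    UPDATE [dbo].[Table]
        SET [Identity] = [Identity] + 1   -- bump the counter for the incoming transaction
        WHERE [Column1] = @Column1
          AND [Column2] = @Column2;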

    I've been trying to think of ways to speed up transactions through this table that would involve minimal development. So far my options are:

    • Making this an in memory table
    • Moving the table to a less contended disk
    • Key adjustments, although I don't know how considering both columns used to update are in the clustered key

    The table is updated by the millisecond, and 2 of the 12 rows are the ones most frequently updated.

    Any advice would be welcome.

  • Have you looked at blocking to see what types of locks are blocking?

    Are the sort values of [column1], [column2] always sequential? If they are not, you will be inserting into the middle of the index and causing page splits and blocking.

    What purpose is the identity column serving? Is it just there to be there? If so it is just adding overhead needlessly.

    Ideally, move the clustered index to the identity column and then, if you need something to enforce uniqueness on column1 and column2, put a unique non-clustered index on them.
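
    Sketched out, that change would look something like this (the index names are just illustrative):

    ALTER TABLE [dbo].[Table] DROP CONSTRAINT [PK_Table1];

    CREATE CLUSTERED INDEX [CIX_Table1] ON [dbo].[Table] ([Identity]);

    CREATE UNIQUE NONCLUSTERED INDEX [UQ_Table1_Column1_Column2]
        ON [dbo].[Table] ([Column1], [Column2]);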

    If you can trust the application won't generate duplicates, you can try dropping the primary key constraint, which adds overhead. This is a pretty big decision that you would have to carefully evaluate.

    If you can't change the table, you could try rebuilding the index with a low fill factor (leaving plenty of free space on each page), but if the live table has many reads that do scans, this could increase blocking. You probably should not be hitting the live table that hard with reads if it is already experiencing problems. Replicate or log ship it out if you need to.
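
    For reference, that kind of rebuild would look roughly like this (the fill factor of 50 is just an example value):

    -- leave roughly 50% free space on each leaf page
    ALTER INDEX [PK_Table1] ON [dbo].[Table]
        REBUILD WITH (FILLFACTOR = 50);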

  • CreateIndexNonclustered wrote:

    What purpose is the identity column serving? Is it just there to be there? If so it is just adding overhead needlessly.

    Ideally, move the clustered index to the identity column and then if you need something to enforce uniqueness on column1, 2, put a unique non-clustered index on it.

    I'm not the OP, but this answer made me reread the post. The column named [Identity] is not actually an IDENTITY column, but I'm not sure what this means: "stores Identity values used by each transaction".

    @woundedparrot Is the value generated by the transaction and updated using a parameter?

    SET [Identity] = @param WHERE ..

    or is the value literally incremented by 1?

    SET [Identity] += 1 WHERE ...

    What is the purpose of recording the latest transaction? Do you ever read from the table? I assume you're not using it to generate the "identity" values the transactions use.

    Have you confirmed that the bottleneck is the write speed of this table? It's such a small table and the updates are so simple it's hard to imagine the table update being slower than the transactions themselves. I don't think clustered index adjustments would make any difference as the table is so small it probably fits on a single page.

    Is the update part of the transaction, or is it executed after the transaction has completed? If it's part of the transaction, is the table locked by every transaction until it completes?

     

  • I don't think you can do much for that table itself.

    You need to look at the code that is UPDATEing that table and make sure that code is efficient.  Also, make sure any transactions used to load other tables and UPDATE this table are as short as possible, i.e. each transaction does the minimum data mods necessary and commits as soon as it can.

     

    SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".

    Oh, good catch. The forum's keyword formatting of 'identity' confused me.

    I took OP's original statement to mean that the process worked with 12 rows at a time and 12 new rows were inserted when each transaction completed.

    If it is updating an existing 12 records and keying by column1 and column2, I don't see why it would have a problem at all.

    This appears to be a classic "Next_ID" table and I find it difficult to understand why anyone would be using such a thing in this day and age, especially with the advent of SEQUENCE in SQL Server.  My first recommendation would be to dump the table and "do it right" with SEQUENCEs.

    That apparently notwithstanding, there is an easy method to make this bullet-proof and virtually contention free.  We had such a table at a place I worked at nearly 2 decades ago that was causing between 400 and 6000 deadlocks in an 8 hour day.  After the fix I made with the suggestion from the DBA at the time, that dropped to a permanent ZERO deadlocks per day.

    To science out a solution for you, I need to see the current "UPDATE" statement(s) you're using and I need the contents of the table in a "Readily Consumable" format that will insert the data into the test table that I'll make to demonstrate the fix.  See the article at the first link in my signature line for one of many ways to provide such "Readily Consumable" data.

    p.s. The key to this bad boy is to make a transactionally consistent UPDATE that does the +1 update and returns the new value without an explicit transaction that contains both an UPDATE and a SELECT (sure-fire deadlocks there), and that's easy to do in SQL Server.
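
    Purely as an illustration of that kind of single-statement pattern (not necessarily the exact fix, which depends on the OP's actual UPDATE), something along these lines:

    DECLARE @NewValue bigint;

    -- increment and capture the new value in one atomic statement
    UPDATE [dbo].[Table]
        SET @NewValue = [Identity] = [Identity] + 1
        WHERE [Column1] = @Column1
          AND [Column2] = @Column2;

    SELECT @NewValue;   -- reads the variable, not the table, so no UPDATE-then-SELECT transaction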

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Not sure what the data size of the table is?


  • Jeff Moden wrote:

    This appears to be a classic "Next_ID" table and I find it difficult to understand why anyone would be using such a thing in this day and age, especially with the advent of SEQUENCE in SQL Server.  My first recommendation would be to dump the table and "do it right" with SEQUENCEs.

    That apparently notwithstanding, there is an easy method to make this bullet-proof and virtually contention free.  We had such a table at a place I worked at nearly 2 decades ago that was causing between 400 and 6000 deadlocks in an 8 hour day.  After the fix I made with the suggestion from the DBA at the time, that dropped to a permanent ZERO deadlocks per day.

    To science out a solution for you, I need to see the current "UPDATE" statement(s) you're using and I need the contents of the table in a "Readily Consumable" format that will insert the data into the test table that I'll make to demonstrate the fix.  See the article at the first link in my signature line for one of many ways to provide such "Readily Consumable" data.

    p.s. The key to this bad boy is to make a transactionally consistent UPDATE that does the +1 update and returns the new value without an explicit transaction that contains both an UPDATE and a SELECT (sure-fire deadlocks there), and that's easy to do in SQL Server.

    Yes, sequences are designed for this:

    https://learn.microsoft.com/en-us/sql/t-sql/statements/create-sequence-transact-sql?view=sql-server-ver16
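
    A minimal sketch, assuming one sequence per counter (the sequence name here is made up):

    CREATE SEQUENCE [dbo].[TransactionCounter]
        AS bigint
        START WITH 1
        INCREMENT BY 1
        CACHE 50;
    GO

    -- callers take the next value without ever touching the hot table
    DECLARE @NextValue bigint;
    SET @NextValue = NEXT VALUE FOR [dbo].[TransactionCounter];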

  • Ladies and gentlemen, the OP has left the building! 😀

    --Jeff Moden



  • Jeff Moden wrote:

    Ladies and gentlemen, the OP has left the building! 😀

    Interesting .... you must have pushed too hard to educate.

    Such a shame ... especially when someone is willing to fix your problem pattern without the need for loads of $$.

    Johan

    Learn to play, play to learn !

    Dont drive faster than your guardian angel can fly ...
    but keeping both feet on the ground wont get you anywhere :w00t:

    - How to post Performance Problems
    - How to post data/code to get the best help

    - How to prevent a sore throat after hours of presenting ppt

    press F1 for solution, press shift+F1 for urgent solution 😀

    Need a bit of Powershell? How about this

    Who am I ? Sometimes this is me but most of the time this is me

  • Johan Bijnens wrote:

    Jeff Moden wrote:

    Ladies and gentlemen, the OP has left the building! 😀

    Interesting .... you must have pushed too hard to educate.

    Such a shame ... especially when someone is willing to fix your problem pattern without the need for loads of $$.

    It is a shame, indeed.  Two lines of code.  I just needed to see an example of the update they're currently using to make sure I get the gazintas right.  Of course, the recommendation to use a SEQUENCE is the right way to do it.  I just hate it when the OP never comes back to say whether they're OK and what they've done.  All that stuff the OP talked about, like making it an In-Memory table, just isn't necessary even if they decide not to use a SEQUENCE.

    --Jeff Moden


