Trigger on a large 5-billion-row table

  • Hello,

    So we have a trigger that inserts rows from one table into another whenever there is an update or insert, and we've noticed that the GETDATE() timestamps it writes have drifted from seconds to almost minutes apart. Has anyone seen this when handling large tables, and is there any way to get around it? Maybe a fix, or something like Brent Ozar described here:

    https://www.brentozar.com/archive/2019/05/the-silent-bug-i-find-in-most-triggers/

    Or maybe use Service Broker? If so, how?

    The destination table is also huge.

  • So we have a trigger that inserts rows from one table into another whenever there is an update or insert, and we've noticed that the GETDATE() timestamps it writes have drifted from seconds to almost minutes apart. Has anyone seen this when handling large tables, and is there any way to get around it?

    Yes.

    Are you talking about a column on the original table to which the trigger belongs (in which case presumably GETDATE() is the default value for that column?), or a column in the table the trigger inserts into (where you're inserting GETDATE() in the trigger)?

    In the second case, you could set a variable = GETDATE() at the start of the trigger and use that variable rather than GETDATE() in the insert, as sketched below. In the first case, you'd need the same approach, explicitly inserting a value rather than relying on the default.
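
    A minimal sketch of the variable approach, with hypothetical table and column names standing in for yours:

        CREATE TRIGGER dbo.trg_SourceTable_Audit
        ON dbo.SourceTable
        AFTER INSERT, UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Capture the timestamp once, so every row from this firing gets the same value.
            DECLARE @Now datetime = GETDATE();

            INSERT INTO dbo.DestinationTable (ID, SomeColumn, AuditDate)
            SELECT i.ID, i.SomeColumn, @Now
            FROM inserted AS i;
        END;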

    Out of curiosity: why is it a problem that the timestamps vary? Are you trying to use them to identify a batch? If so, you might consider creating an explicit BatchID, perhaps using a sequence.
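
    If a batch identifier is what you really need, here is a sketch using a sequence (again, hypothetical names):

        CREATE SEQUENCE dbo.AuditBatchID AS bigint START WITH 1 INCREMENT BY 1;
        GO

        -- Inside the trigger: one batch number per firing, shared by all affected rows.
        DECLARE @BatchID bigint = NEXT VALUE FOR dbo.AuditBatchID;

        INSERT INTO dbo.DestinationTable (ID, SomeColumn, BatchID)
        SELECT i.ID, i.SomeColumn, @BatchID
        FROM inserted AS i;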

    The article you referenced is discussing triggers that assume only one row is affected (even worse is when the trigger inserts/updates for all rows in the table, ignoring inserted). Is your trigger limiting the rows inserted into the other table by selecting from or joining to inserted?
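
    For reference, the "silent bug" from that article looks like this (hypothetical names; both fragments belong inside the trigger body):

        -- Broken: assumes exactly one row in inserted; a multi-row
        -- insert/update silently audits only one of the rows.
        DECLARE @ID int;
        SELECT @ID = ID FROM inserted;
        INSERT INTO dbo.DestinationTable (ID, AuditDate) VALUES (@ID, GETDATE());

        -- Fixed: set-based, handles however many rows the statement touched.
        INSERT INTO dbo.DestinationTable (ID, AuditDate)
        SELECT i.ID, GETDATE()
        FROM inserted AS i;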


  • Siten0308 wrote:

    Hello,

    So we have a trigger that inserts rows from one table into another whenever there is an update or insert, and we've noticed that the GETDATE() timestamps it writes have drifted from seconds to almost minutes apart. Has anyone seen this when handling large tables, and is there any way to get around it? Maybe a fix, or something like Brent Ozar described here:

    https://www.brentozar.com/archive/2019/05/the-silent-bug-i-find-in-most-triggers/

    Or maybe use Service Broker? If so, how?

    The destination table is also huge.

    Heh... no... it's never happened to me and we have some fairly large tables (hundreds of millions of rows).

    You'll need to post the CREATE TABLE statements for both tables, including indexes and constraints, and the code for the trigger if you want help with this one. If you can capture it, posting the actual execution plan for the trigger in action would be very helpful as well.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Hmm, that task sounds like it should be a single INSERT statement, which means SQL should resolve GETDATE() only once.  Therefore it seems as if your trigger is using a non-set-based method (perhaps even a cursor, gack!).
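
    To illustrate with hypothetical names: a cursor evaluates GETDATE() once per row, which is exactly how timestamps end up minutes apart across a big batch, while the single-statement version evaluates it once.

        -- Cursor version: GETDATE() runs per row, so values drift across the batch.
        DECLARE @ID int;
        DECLARE c CURSOR LOCAL FAST_FORWARD FOR SELECT ID FROM inserted;
        OPEN c;
        FETCH NEXT FROM c INTO @ID;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            INSERT INTO dbo.DestinationTable (ID, AuditDate) VALUES (@ID, GETDATE());
            FETCH NEXT FROM c INTO @ID;
        END;
        CLOSE c;
        DEALLOCATE c;

        -- Set-based version: one statement, one GETDATE() evaluation.
        INSERT INTO dbo.DestinationTable (ID, AuditDate)
        SELECT ID, GETDATE() FROM inserted;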

    SQL DBA, SQL Server MVP (07, 08, 09)

    "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." -- "Norm", on "Cheers"

    Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies."

  • Thank you all for your input, and yes, I should have provided an example; I will next time. But I found the root cause: the compatibility level had been changed from 2008 to 2014. After resetting it to 2012, the trigger improved and went back to the benchmark. I still need to look into the difference; maybe the execution plan differs from 2012 to 2014 because of the new cardinality estimator? Either way, thank you all. It's definitely a big table, and I need to look into partitioning it.
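
    (For anyone who finds this later, the reset was just the standard command; the database name here is a placeholder:)

        ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 110;  -- 110 = SQL Server 2012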

  • Siten0308 wrote:

    Thank you all for your input, and yes, I should have provided an example; I will next time. But I found the root cause: the compatibility level had been changed from 2008 to 2014. After resetting it to 2012, the trigger improved and went back to the benchmark. I still need to look into the difference; maybe the execution plan differs from 2012 to 2014 because of the new cardinality estimator? Either way, thank you all. It's definitely a big table, and I need to look into partitioning it.

    The only thing that partitioning typically helps with is maintenance on the table, be it backups/restores, index maintenance, or temporal deletes. Typically, partitioning doesn't solve performance issues and will actually slow things down a bit. A lot of people think partitioning solves performance issues, but the gain is usually due to the index they add to support the partitioning. Such an index would also solve some performance issues on a monolithic structure without any partitioning.

    Also, as you say, it appears that your trigger had a bit of a performance issue due to the new cardinality estimator. I've found that typically means you really do need to tweak or rewrite the trigger code. If you don't have time for that, then setting the compatibility level to 2012 or lower is a bit of a travesty. I believe you can change the use of the cardinality estimator at the code level instead of for the whole database.
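
    For example, on 2014 you should be able to leave the database at the 2014 compatibility level and ask for the old estimator just for the trigger's statement with a query-level trace flag (QUERYTRACEON needs sysadmin, or a plan guide, so test it first; the names are hypothetical):

        INSERT INTO dbo.DestinationTable (ID, AuditDate)
        SELECT i.ID, GETDATE()
        FROM inserted AS i
        OPTION (QUERYTRACEON 9481);  -- 9481 = use the pre-2014 ("legacy") cardinality estimator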


    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
