Trigger help - if update()

  • Hello everyone,

    I have an update trigger on one of my tables set up like this:

    create trigger tu_log on my_table
    for update
    as
    if update(column1)
    begin
        insert into log_table (ID, LOG_DESCRIPTION, DATE_WRITE, USER_ID)
        select ins.ID, 'Changed column1 from ' + del.column1 + ' to ' + ins.column1,
               getdate(), ins.USER_ID
        from inserted ins
        inner join deleted del on ins.ID = del.ID
        where ins.column1 <> del.column1
    end

    if update(column2)

    ...

    When I was testing, I ran an update on column1, but didn't change the value (so it wouldn't be entered into the LOG_TABLE). The execution plan shows that it is running the 'insert into ...' statement even though it doesn't find any records to insert in the LOG_TABLE.

    Is there any way to bypass the whole 'insert into ...' statement altogether? Something like an 'if CHANGED(column)', rather than 'if update(column)', since update() returns true even if the value doesn't change.

    One way I tried to get around this was to do an 'if exists(...)' on what would be inserted into the LOG_TABLE. That way, if the value wasn't changed, it wouldn't run the insert statement. It was fine until I started updating more and more records. I updated 1000 records (but the value didn't change) and it was painfully slow.

    Does anyone have any other suggestions?

    Any and all help is greatly appreciated.

    Thanks.

  • The insertion of zero rows probably takes less processing time than would any test to determine if the insert will have zero rows.

    --Jonathan




    You cannot get around the limitations of the 'if UPDATE()' statement, unless you use another kind of test like 'EXISTS'.

    But then again, doing the INSERT with no rows anyway is certainly faster than the EXISTS and then the INSERT.

  • Jonathan - Why is it that the insert of zero rows would take less time than the exists? SQL has to build the result set either way, right?

    NPeeters - I agree that an insert with no rows would be faster than an exists and the insert. But, these columns aren't updated very often and I wanted to eliminate the cost of the insert.

    Here's how I implemented the 'if exists()'

    if update(column1) and exists(select top 1 ins.id from inserted ins inner join deleted del on ins.id = del.id where ins.column1 <> del.column1)
    begin
    ...
    end

    ...

    The exists call runs the same statement as the insert; it just checks to see if there are any rows that were changed. But for some reason, when it is used in the insert statement, it runs much faster. I can't explain it. I guess I figured that an exists would be faster since it would only have to find at least one record that met the criteria.

    One thing I noticed in the execution plan, the scan on the inserted table shows a row count of 1,000 (the number of 'updated' records). However, the scan on the deleted table shows 1,000,000. Any reason why the row count on the deleted table would be so high, (1,000 * 1,000)?

  • quote:


    Jonathan - Why is it that the insert of zero rows would take less time than the exists? SQL has to build the result set either way, right?


    Right; but I guess Noel expressed it better than I did, given your concession below.

    quote:


    NPeeters - I agree that an insert with no rows would be faster than an exists and the insert. But, these columns aren't updated very often and I wanted to eliminate the cost of the insert.


    Why not eliminate "the cost of the insert" by not performing the update to the column if it's updating to the same value? Then you won't need this baggage in the trigger and you won't be unnecessarily logging nearly as much.
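    A rough sketch of that idea, assuming the update comes through a stored procedure with hypothetical parameters @id and @new_value (and ignoring NULL handling for brevity):

    ```sql
    -- Sketch only: @id and @new_value are hypothetical parameter names.
    -- If the value is unchanged, no row is updated, so the trigger sees
    -- empty inserted/deleted tables and nothing gets logged.
    update my_table
    set column1 = @new_value
    where ID = @id
      and column1 <> @new_value
    ```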

    quote:


    Here's how I implemented the 'if exists()'

    if update(column1) and exists(select top 1 ins.id from inserted ins inner join deleted del on ins.id = del.id where ins.column1 <> del.column1)
    begin
    ...
    end


    Uh, you realize that this will add all rows into your audit table even if only one of them has this column's value changed? I doubt that's what you're after.

    quote:


    The exists call runs the same statement as the insert; it just checks to see if there are any rows that were changed. But for some reason, when it is used in the insert statement, it runs much faster. I can't explain it. I guess I figured that an exists would be faster since it would only have to find at least one record that met the criteria.


    The reason it is so much faster is that it just inserts all rows or none, not just the ones where that column's value changed, so it doesn't have to evaluate each row.

    quote:


    One thing I noticed in the execution plan, the scan on the inserted table shows a row count of 1,000 (the number of 'updated' records). However, the scan on the deleted table shows 1,000,000. Any reason why the row count on the deleted table would be so high, (1,000 * 1,000)?


    Expected behavior with a NOT predicate. It must check each row against each row.

    --Jonathan




  • quote:


    Uh, you realize that this will add all rows into your audit table even if only one of them has this column's value changed? I doubt that's what you're after.


    Not to worry, jarretg, your query is correct. By joining in the INSERT statement inside your IF statement, you will only insert the values that have effectively changed.

    But, like we said before, you would be essentially performing this test in the INSERT. So there should be no need to do a previous EXISTS check...

  • quote:


    quote:


    Uh, you realize that this will add all rows into your audit table even if only one of them has this column's value changed? I doubt that's what you're after.


    Not to worry jarretg, your query is correct. By joining in the INSERT statement inside your IF statement, you will only insert the values that have effectively changed.


    Yes, that's true if you're performing the test again there; I assumed that you were trying to do the test just once. Sorry.

    I guess you could find the rows where that column's values have changed and insert them into a table variable and then, if @@ROWCOUNT > 0, insert into the audit table from the table variable.
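    A sketch of that table-variable approach, with assumed column names and types (the varchar(50) widths are guesses):

    ```sql
    -- Sketch, not tested: capture the changed rows once in a table
    -- variable, then write to the audit table only if any were found.
    declare @changed table (
        ID int,
        OLD_VALUE varchar(50),   -- assumed type
        NEW_VALUE varchar(50),   -- assumed type
        USER_ID int
    )

    insert into @changed (ID, OLD_VALUE, NEW_VALUE, USER_ID)
    select ins.ID, del.column1, ins.column1, ins.USER_ID
    from inserted ins
    inner join deleted del on ins.ID = del.ID
    where ins.column1 <> del.column1

    if @@ROWCOUNT > 0
    begin
        insert into log_table (ID, LOG_DESCRIPTION, DATE_WRITE, USER_ID)
        select ID, 'Changed column1 from ' + OLD_VALUE + ' to ' + NEW_VALUE,
               getdate(), USER_ID
        from @changed
    end
    ```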

    --Jonathan




  • I would like to take out the columns from the update statement, but unfortunately, I can't. The users have to have the functionality to update these 'not so frequently updated' columns.

    Sorry, I should have specified that the select statement was part of the exists call and the insert statement. It's like this:

    if update(column1) and exists(select top 1 ins.id from inserted ins inner join deleted del on ins.id = del.id where ins.column1 <> del.column1)
    begin
        insert into log_table (ID, LOG_DESCRIPTION, DATE_WRITE, USER_ID)
        select ins.ID, 'Changed column1 from ' + del.column1 + ' to ' + ins.column1,
               getdate(), ins.USER_ID
        from inserted ins
        inner join deleted del on ins.ID = del.ID
        where ins.column1 <> del.column1
    end

    I can try the table variable and see how that affects the performance of the trigger. But wouldn't that be even more redundant: inserting the changed records into a table variable, then, if there are any, inserting them into the audit table? I wanted to get rid of the insert call if possible.

    I still don't understand why the exists call would take so long, but the same select in the insert statement doesn't if both have to build the same result set. Doesn't the 'if exists()' stop searching after it finds one that matches the given criteria?

    Thanks for the suggestions/comments though!!!

    Jarret

  • quote:


    I would like to take out the columns from the update statement, but unfortunately, I can't. The users have to have the functionality to update these 'not so frequently updated' columns.


    Perhaps that functionality could call a different SP. I.e. check for changes on the front-end and call differing SPs. Not much fun if you're dealing with a lot of potentially updated columns, though.

    quote:


    I can try the table variable and see how that affects the performance of the trigger. But wouldn't that be even more redundant; inserting the changed records into a table variable, then if there are any, inserting them into the audit table? I wanted to get rid of the insert call if possible.


    I don't know if the performance would be better; I just thought it would require one scan rather than two of the table if there is anything updated.

    quote:


    I still don't understand why the exists call would take so long, but the same select in the insert statement doesn't if both have to build the same result set. Doesn't the 'if exists()' stop searching after it finds one that matches the given criteria?


    Yes, although I don't know why you're using "TOP 1 ins.ID" rather than just "*" in the existence statement. But remember that any statement with NOT will perform relatively badly.
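    For reference, the '*' form mentioned here would just be (EXISTS ignores the select list, so '*' is the conventional spelling):

    ```sql
    -- Same test; the optimizer can stop at the first matching row.
    if update(column1) and exists (
        select *
        from inserted ins
        inner join deleted del on ins.ID = del.ID
        where ins.column1 <> del.column1
    )
    begin
        -- log the change
    end
    ```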

    Another completely different auditing strategy is to include all the relevant columns in the audit table and just insert those columns plus an operation code and tracking values like user and date into the audit table on inserts, deletes, and updates. Then write auditing utility code to determine what was changed. No need to save the value to which any column was changed, as you can just compare with the subsequent values from the audit or live table.
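    A minimal sketch of that strategy; audit_table, its columns, and the operation codes are all assumed names, not anything from the thread:

    ```sql
    -- Sketch: snapshot every affected row with an operation code.
    -- Offline audit code later diffs consecutive snapshots per ID.
    create trigger tu_audit on my_table
    for insert, update, delete
    as
    insert into audit_table (ID, column1, column2, OP_CODE, DATE_WRITE, USER_ID)
    select ins.ID, ins.column1, ins.column2,
           case when exists (select * from deleted) then 'U' else 'I' end,
           getdate(), ins.USER_ID
    from inserted ins
    union all
    select del.ID, del.column1, del.column2, 'D', getdate(), del.USER_ID
    from deleted del
    where not exists (select * from inserted)   -- pure delete
    ```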

    --Jonathan

    Edited by - Jonathan on 10/23/2003 12:00:19 PM




  • I just tried to use 'count(*) > 0' instead of 'exists()' and it ran much faster!!! I still have to check to see if the count implementation is going to be faster than inserting zero rows. I'll do some testing on it and let you know my results.
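    For completeness, the count(*) variant described here would look something like the following sketch:

    ```sql
    -- The variant that ran faster in these tests: a scalar count
    -- compared to zero instead of an EXISTS predicate.
    if update(column1) and (
        select count(*)
        from inserted ins
        inner join deleted del on ins.ID = del.ID
        where ins.column1 <> del.column1
    ) > 0
    begin
        -- same insert into log_table as before
    end
    ```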

    quote:


    Another completely different auditing strategy is to include all the relevant columns in the audit table and just insert those columns plus an operation code and tracking values like user and date into the audit table on inserts, deletes, and updates. Then write auditing utility code to determine what was changed. No need to save the value to which any column was changed, as you can just compare with the subsequent values from the audit or live table.


    This would be a good idea; I'll have to get with the developers on this.

    I'll keep that in mind about the NOT being bad for performance.

    Thanks for all of your help Jonathan!!!

    Jarret
