Trigger to track changes

  • I need to track changes when a column is updated on two tables.

    Is there a way to do it? Please send code for that.

  • What do you mean by 'track', exactly? Insert details of change to an audit table? Use SQL Server's 'Change Tracking'? Send an e-mail alert? Something else?

    The absence of evidence is not evidence of absence
    - Martin Rees
    The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
    - Phil Parkin

  • I want to check multiple tables for updates to selective columns, so that we can remove a dependency on those columns.

    I need to identify those changes and move them to another table.

  • In my opinion, the easiest way to set this up (as a SHORT TERM thing) would be an AFTER UPDATE trigger. I say short term because triggers are "hidden problems" in my mind: it is not always obvious that they exist, and they can cause all sorts of unexpected problems like deadlocks and blocking.

    If you want a longer-term solution, I'd look at running an Extended Events session that writes to disk and then reviewing the output. This approach is nice because there is VERY little impact on the database system (no new tables, no risk of blocking/deadlocks because of it). I suggest writing to disk so you can review the capture without impacting the database system and so you can keep a large dump without wasting RAM.
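
    For example, a rough Extended Events sketch (the session name, file path and table name below are placeholders - adjust the filter to your own tables):

    CREATE EVENT SESSION TrackMyTableWrites ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    (
        ACTION (sqlserver.sql_text, sqlserver.username, sqlserver.client_app_name)
        WHERE sqlserver.like_i_sql_unicode_string(sqlserver.sql_text, N'%dbo.MyTable%')
    )
    ADD TARGET package0.event_file (SET filename = N'C:\XE\TrackMyTableWrites.xel')  -- write the capture to disk
    WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

    ALTER EVENT SESSION TrackMyTableWrites ON SERVER STATE = START;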

    Another option is CDC (Change Data Capture). It's not an approach I've used before, but I did look into it at one point. Or you could use temporal tables, since you are on SQL 2022... another thing I've never set up, but I've read about it and it sounds like it may be a good option for what you are looking for.
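
    If CDC turns out to be the direction, enabling it per table with only the columns of interest looks roughly like this (schema, table and column names are placeholders):

    -- Enable CDC at the database level (run once per database)
    EXEC sys.sp_cdc_enable_db;

    -- Enable CDC on one table, capturing only the columns you care about
    EXEC sys.sp_cdc_enable_table
        @source_schema        = N'dbo',
        @source_name          = N'MyTable',
        @role_name            = NULL,
        @captured_column_list = N'Col1, Col2';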

    BUT all of the above suggestions REALLY depend on what you are trying to track. Are you tracking all columns? Some columns? Non-column data (who made the change and when)?

    If you want a proper fix, I'd review the code that writes to the table. Users shouldn't be writing to the table directly, so an application SHOULD be doing it, and it SHOULD be calling a stored procedure to do that. That means you may be able to use a tool like SQL Search (a RedGate tool that, if I remember right, is free) to find all stored procedures that touch that table, and you should be good to go from there. That is assuming nobody hard-coded SQL into their application, in which case you will need to review the source code that writes to that table to determine how it is doing it.
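
    If SQL Search isn't available, a rough substitute is to search the module definitions yourself (the table name is a placeholder, and this is a plain text match, so expect some false positives):

    SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
           OBJECT_NAME(m.object_id)        AS module_name
    FROM   sys.sql_modules AS m
    WHERE  m.definition LIKE N'%MyTable%';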

    One thing about watching for changes in a table to determine if you can remove dependencies is that if some process is run once per month, or less frequently, you can REALLY easily miss that dependency and break a process down the line. Reviewing the code is the only 100% safe way to do what I think you are trying to do.

     

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Thank you, Brian, for the detailed reply. Here we want to track updates to selective columns on three tables; maybe Change Tracking is the best solution. We need to remove the dependence on those columns in the next iteration, so that those tables are no longer used. Again, this is for a 3-million-row table, and I know triggers suck at performance!

  • If memory serves, CDC is not trivial to set up and it can be a performance suck as well.

    My opinion is still that targeting the stored procedure or application code is going to give you the best results with the least chance of something breaking. CDC, if I remember right, will not tell you what triggered the data to change, just that the values changed. Reviewing the stored procedures and application code will tell you exactly what caused the change. If you want to know the frequency of stored procedure usage, create a table with two columns - stored procedure name and timestamp - then update the stored procedures that use those three tables to write to the new table as the first step in each procedure. Create the table as a heap and you shouldn't impact performance in any noticeable way until you go to pull data out; at that point, just make sure to set the transaction isolation level nicely or use the NOLOCK hint. You could even capture who ran the stored procedure.
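
    A minimal sketch of that logging pattern (object names are placeholders; the heap table plus one INSERT at the top of each procedure is the whole change):

    -- Heap table: no indexes, so the logging insert stays as cheap as possible
    CREATE TABLE dbo.ProcUsageLog
    (
        proc_name sysname      NOT NULL,
        logged_at datetime2(0) NOT NULL DEFAULT SYSDATETIME(),
        logged_by sysname      NOT NULL DEFAULT ORIGINAL_LOGIN()  -- optional "who ran it"
    );
    GO

    ALTER PROCEDURE dbo.MyProcThatUsesThoseTables
    AS
    BEGIN
        SET NOCOUNT ON;

        -- First step: record that this procedure was called
        INSERT INTO dbo.ProcUsageLog (proc_name)
        VALUES (OBJECT_NAME(@@PROCID));

        -- ... existing procedure body goes here ...
    END;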

    The risk with only knowing that the data changed is that you don't know for certain what caused the change or who made it, and without that information, how do you know what needs to be modified to remove the dependency?

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Triggers don't necessarily suck at performance; it depends on how they are written. Neither does CDC perform all that poorly; in fact, I think it's extremely unlikely you could write your own code that would perform as well overall.

    SQL DBA, SQL Server MVP (07, 08, 09)
    "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." - "Norm", on "Cheers".
    Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies."

  • sqlguy80 wrote:

    I need to track changes when a column is updated on two tables.

    Is there a way to do it? Please send code for that.

    Yes... there's a super easy way to do this.  Please send money for a request like that. 😉

     

     

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Have a look at AFTER UPDATE triggers that compare the inserted and deleted pseudo-tables to find the differences.

    https://stackoverflow.com/questions/19737723/log-record-changes-in-sql-server-in-an-audit-table

    There are gotchas if you use IF UPDATE(...):

    The UPDATE() function does not tell you whether the value actually changed, only that the column was included in the UPDATE statement. To check for a real change, you should compare the values in the inserted and deleted pseudo-tables.

    https://stackoverflow.com/questions/10181268/sql-server-trigger-updatecol-true-or-false-if-inserted-value-equals-original
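
    A minimal sketch of that pattern (the audit table, source table and column names are all placeholders), comparing inserted and deleted instead of relying on UPDATE():

    CREATE TABLE dbo.MyTable_Audit
    (
        audit_id   int IDENTITY(1,1) PRIMARY KEY,
        id         int          NOT NULL,
        old_value  varchar(100) NULL,
        new_value  varchar(100) NULL,
        changed_at datetime2(0) NOT NULL DEFAULT SYSDATETIME(),
        changed_by sysname      NOT NULL DEFAULT ORIGINAL_LOGIN()
    );
    GO

    CREATE TRIGGER trg_MyTable_Audit
    ON dbo.MyTable
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Only log rows where the tracked column actually changed value (NULL-safe comparison)
        INSERT INTO dbo.MyTable_Audit (id, old_value, new_value)
        SELECT d.id, d.tracked_col, i.tracked_col
        FROM   inserted AS i
        JOIN   deleted  AS d ON d.id = i.id
        WHERE  EXISTS (SELECT i.tracked_col EXCEPT SELECT d.tracked_col);
    END;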

  • @scottpletcher I do agree that triggers and CDC are not necessarily performance killers, but they do impact performance. I haven't set up CDC, so I am not sure what the performance hit is like, and with triggers the performance hit is USUALLY light if the trigger is written well. What I don't like about triggers is that they can easily become "hidden" things that update a table. What I mean is that when I am looking at what updated a table, I'm looking through stored procedures or the source code of the application (in my case C# or SSIS). Nine times out of ten I don't even think to check the table for triggers, AND it could be that a trigger on a different table is doing the update on the table I am looking at. Plus, if a trigger is updating a row or inserting stuff elsewhere and I don't have NOCOUNT set to ON, it can be a bit alarming to do an insert/update/delete operation and see "1 row updated" followed by a second line of "1 row updated" - or worse, the second one is larger than 1!

    I just find that triggers end up "hiding" some of the work going on. Plus, I have some triggers that cause deadlocks more often than I care to admit. Not like "1 per day" or anything crazy like that, but maybe once a month I see a deadlock caused by a trigger I set up. I would address them, but the application is being retired VERY soon so there isn't much point in fixing it.

    BUT my opinion is still that if you are looking to remove dependencies, the ideal way to do it is to go through the code to determine how everything is used. If that isn't an option, I would be hesitant to make changes that could break things. That being said, if you convert the table to a view (rename the real table and put a view with the old name in front of it), then you can do whatever you want to the underlying table as long as you make sure not to break the view.
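
    If you go the view route, the swap is roughly this (names are placeholders): rename the real table, then recreate the old name as a view so existing code keeps resolving:

    EXEC sp_rename 'dbo.MyTable', 'MyTable_base';
    GO
    CREATE VIEW dbo.MyTable
    AS
    SELECT id, col1, col2   -- expose only the columns callers still need
    FROM   dbo.MyTable_base;
    GO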

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
