March 27, 2023 at 12:00 am
Comments posted to this topic are about the item Capturing Data Changes
March 27, 2023 at 11:37 am
I prefer to use triggers. They do have to be written carefully, but done properly they capture changes well. NULLs are the tricky part when handling updates: with plain inserts and deletes, NULLs show up automatically (one of the inserted/deleted pseudo-tables is empty), so when a nullable column is updated it can be difficult or impossible to distinguish what is actually going on.
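A minimal sketch of such a trigger (table and column names are hypothetical), including the NULL ambiguity described above:

-- Audit trigger on a hypothetical dbo.Customer table, writing to a
-- hypothetical dbo.Customer_Audit table. Note how a nullable column
-- (Phone) blurs the picture: a NULL in the audit row could mean
-- "no old row", "no new row", or a genuinely NULL value.
CREATE TRIGGER dbo.trg_Customer_Audit
ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.Customer_Audit
        (CustomerId, OldPhone, NewPhone, Action, AuditedAt)
    SELECT COALESCE(i.CustomerId, d.CustomerId),
           d.Phone,
           i.Phone,
           CASE WHEN i.CustomerId IS NOT NULL
                 AND d.CustomerId IS NOT NULL THEN 'U'  -- row in both: update
                WHEN i.CustomerId IS NOT NULL THEN 'I'  -- only in inserted: insert
                ELSE 'D'                                -- only in deleted: delete
           END,
           SYSUTCDATETIME()
    FROM inserted AS i
    FULL OUTER JOIN deleted AS d
        ON d.CustomerId = i.CustomerId;
END;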
March 27, 2023 at 12:39 pm
I too am a fan of triggers (only for auditing). A long time ago I wrote an article to automate the creation of audit tables and audit triggers.
https://www.sqlservercentral.com/articles/an-auditing-solution-with-xml-and-xsl
A second "fun proof of concept" article uses a trigger to generate "rollback and roll forward" code for each data modification.
These articles are both maybe 10 years old, so possibly a little outdated.
March 27, 2023 at 1:13 pm
I don't have to implement data change capture very often, but when I do I put it right in the proc making the update, either with the OUTPUT clause or simply a second INSERT statement into an audit table. Granted, if someone changes data outside the proc, those changes will be lost, but the chances of that can be reduced with proper permissions.
I loathe triggers and will only use them if there is absolutely no other way to accomplish the task. We currently struggle with performance issues caused by triggers added by my non-DBA predecessors.
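A minimal sketch of the OUTPUT-clause approach described above, using hypothetical names:

-- Audit the change inside the proc itself; dbo.Customer and
-- dbo.Customer_Audit are hypothetical tables.
CREATE PROCEDURE dbo.UpdateCustomerPhone
    @CustomerId INT,
    @NewPhone   VARCHAR(20)
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE dbo.Customer
    SET Phone = @NewPhone
    OUTPUT deleted.CustomerId, deleted.Phone, inserted.Phone,
           SYSUTCDATETIME(), SUSER_SNAME()
    INTO dbo.Customer_Audit (CustomerId, OldPhone, NewPhone, AuditedAt, AuditedBy)
    WHERE CustomerId = @CustomerId;
END;

One caveat with this pattern: a table used as an OUTPUT INTO target cannot have enabled triggers or participate in foreign keys, which is usually fine for a plain audit table.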
Be still, and know that I am God - Psalm 46:10
March 27, 2023 at 1:25 pm
I recently added audit triggers that had some degree of complexity. To test the performance, one of the developers issued a million inserts in a little over an hour--far beyond what we would expect in production. Everything worked as expected. I've heard about performance issues with triggers, but I haven't seen them for audit triggers. I have seen them for triggers that do work that should be done by the application.
March 27, 2023 at 2:15 pm
Where I am we tend to use OUTPUT INTO clauses to create rollback tables and preserve data.
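For illustration, a sketch of that pattern: preserve the pre-change rows in a one-off rollback table before a bulk update (all names hypothetical).

CREATE TABLE dbo.Customer_Rollback_20230327
(
    CustomerId INT,
    Status     VARCHAR(20)
);

UPDATE dbo.Customer
SET Status = 'INACTIVE'
OUTPUT deleted.CustomerId, deleted.Status
INTO dbo.Customer_Rollback_20230327 (CustomerId, Status)
WHERE LastOrderDate < '20200101';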
March 27, 2023 at 2:24 pm
"I recently added audit triggers that had some degree of complexity. To test the performance, one of the developers issued a million inserts in a little over an hour... I've heard about performance issues with triggers, but I haven't seen them for audit triggers."
A million inserts. Was that indicative of your actual workload? Was it a million inserts done sequentially, or all at once? Inserting row after row may not be a valid test.
You are comfortable with triggers, and they may certainly work very well for your environment. But they may not be the right tool for the job.
We have to capture data changes on a variety of systems. Triggers would fail miserably on a few of them. We've covered our needs with CDC, temporal tables, triggers, output clauses, and so forth.
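As one illustration of those alternatives, a sketch of a system-versioned temporal table (SQL Server 2016+), which captures history without triggers; all names are hypothetical:

CREATE TABLE dbo.Customer
(
    CustomerId INT PRIMARY KEY,
    Phone      VARCHAR(20) NULL,
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.Customer_History));

-- Point-in-time query against the live table plus its history:
SELECT * FROM dbo.Customer
FOR SYSTEM_TIME AS OF '2023-03-01T00:00:00';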
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
March 27, 2023 at 4:33 pm
"I don't have to implement data change capture very often, but when I do I put it right in the proc making the update... I loathe triggers and will only use them if there is absolutely no other way to accomplish the task."
This is my view.
March 27, 2023 at 5:58 pm
I've used INSERT .. DELETE .. OUTPUT .. in the past to implement archival of log tables. A daily scheduled job simply deletes rows where the datetime is more than X days old, and the deleted rows are inserted into another table. What I like about this is that the DELETE and INSERT are contained in one atomic transaction.
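A minimal sketch of that composable-DML pattern, with hypothetical table and column names:

-- Delete log rows older than 90 days and archive them in one
-- atomic statement.
INSERT INTO dbo.AppLog_Archive (LogId, LoggedAt, Message)
SELECT LogId, LoggedAt, Message
FROM (
    DELETE FROM dbo.AppLog
    OUTPUT deleted.LogId, deleted.LoggedAt, deleted.Message
    WHERE LoggedAt < DATEADD(DAY, -90, SYSUTCDATETIME())
) AS archived;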
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
March 27, 2023 at 9:15 pm
Our application uses a few different databases, and the auditing is accomplished by the application layer: it first creates a distributed transaction, performs the update in one database, performs a diff of the object, writes the audit data to another database, and lastly commits the distributed transaction.
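The poster coordinates this from the application layer; a rough T-SQL analogue using MSDTC and a linked server (all names and values hypothetical) might look like this:

BEGIN DISTRIBUTED TRANSACTION;

UPDATE dbo.Customer
SET Phone = '555-0100'
WHERE CustomerId = 42;

INSERT INTO AuditServer.AuditDb.dbo.CustomerAudit
    (CustomerId, OldPhone, NewPhone, AuditedAt)
VALUES (42, '555-0199', '555-0100', SYSUTCDATETIME());

COMMIT TRANSACTION;  -- both writes succeed or neither does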
April 1, 2023 at 9:59 pm
I have used this approach to capture lists of records that were deleted or updated. It is also a good way to write scripts that insert into multiple tables at once: when the primary ID value is generated and otherwise not known, but some other reference is available, the generated IDs can be captured into a temp table and joined to the next insert to provide the ID value.
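A minimal sketch of that ID-capture technique (all names hypothetical; OrderId is an IDENTITY column):

DECLARE @NewOrders TABLE (OrderId INT, CustomerRef VARCHAR(20));

-- Capture the generated IDs alongside the business reference.
INSERT INTO dbo.Orders (CustomerRef, OrderDate)
OUTPUT inserted.OrderId, inserted.CustomerRef
INTO @NewOrders (OrderId, CustomerRef)
SELECT CustomerRef, SYSUTCDATETIME()
FROM #StagedOrders;

-- Join the captured IDs back to the staged data for the child insert.
INSERT INTO dbo.OrderLines (OrderId, ProductCode, Qty)
SELECT n.OrderId, s.ProductCode, s.Qty
FROM #StagedOrders AS s
JOIN @NewOrders    AS n ON n.CustomerRef = s.CustomerRef;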
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/