October 21, 2002 at 10:22 am
Current project requires that all changes to the database be auditable, and that the audit records be kept for a period of seven years.
In the past, I've primarily used triggers for this purpose, which works, but with the following issues:
1) Since all access is through sprocs, the owner is always shown as DBO. That is not acceptable in this case, as the customer needs to know which user made the change.
2) Space: the trigger methodology ends up creating a second database with all of the tables to track the inserts. Deletes are done via a bit field in the original table rather than actually removing the rows. Most updates are quite minor, with only one or two columns being modified, so keeping a copy of the entire record is inefficient, to say the least. I would much prefer to keep only those columns that changed.
With the above two issues in mind, does anyone have any suggestions as to a good / better way to handle this? Is there a way to pass the user ID to the trigger? (USER_NAME always returns DBO.) If so, is it practical to write the update trigger to compare each column and only save those that changed (iterating through each column using IF UPDATE)? Even if that's not practical, the ability to get the user ID to the trigger would be a huge help and allow the use of an INSTEAD OF delete trigger. From what I've read, there is no way to pass a parameter into a trigger; if that's correct, it sure seems like an oversight.
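For what it's worth, the column-by-column approach is workable. A rough sketch of an update trigger that logs only changed columns (table and column names here are purely illustrative, not from any real schema):

```sql
-- Hypothetical example: audit only the columns that actually changed.
-- Customers / AuditLog and their columns are made-up names.
CREATE TRIGGER trg_Customers_Update ON Customers
FOR UPDATE
AS
IF UPDATE(Address)
BEGIN
    INSERT INTO AuditLog (TableName, ColumnName, KeyValue, OldValue, NewValue, ChangedAt)
    SELECT 'Customers', 'Address', d.CustomerID, d.Address, i.Address, GETDATE()
    FROM deleted d
    JOIN inserted i ON i.CustomerID = d.CustomerID
    WHERE ISNULL(d.Address, '') <> ISNULL(i.Address, '')
END
-- Repeat one IF UPDATE(...) block per audited column.
```

One caveat: IF UPDATE() only tells you the column appeared in the SET list, not that the value actually changed, hence the WHERE comparison against the deleted pseudo-table. With many audited columns this gets verbose, which is why generating the triggers from a script (as mentioned later in this thread) is attractive.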
This project uses ADO.NET DataSets, so in the data access layer we have the ability to easily determine what has changed. One consideration is to use a log methodology for the entire DB. The middle tier would determine what has changed, then store all table changes in one table, with one column for the table name and another holding the change as a varchar. This could be either a fully descriptive string such as "123 Main St [to] 145 Main St", or simply the prior column data, with the understanding that reporting would always work backward from current. While this puts more processing in the mid tier, testing shows that not to be an issue.
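As a rough illustration of that single-log-table design (column names and sizes are my own guesses, not a recommendation):

```sql
-- Sketch of the one-table change log described above.
CREATE TABLE ChangeLog (
    LogID      int IDENTITY(1,1) PRIMARY KEY,
    TableName  sysname       NOT NULL,
    KeyValue   varchar(100)  NOT NULL,  -- PK of the changed row, as text
    ChangeText varchar(2000) NOT NULL,  -- e.g. '123 Main St [to] 145 Main St'
    ChangedBy  varchar(100)  NOT NULL,  -- application user, supplied by the mid tier
    ChangedAt  datetime      NOT NULL DEFAULT GETDATE()
)
```

The trade-off is that everything is stored as text, so reporting on a specific column's history means parsing ChangeText, but the space savings over full-row copies can be substantial.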
We are looking for any suggestions that anyone may have, including comments on what others may have tried but didn't work and why.
October 21, 2002 at 10:42 am
I have a similar arrangement on a few key tables and then a selection of columns that my client wanted audited. I added 2 columns to the table, one for the initials of the person making the change, and the other a datetime stamp in char format. I write these straight to the auditing table which just has a few columns for the old value, new value, who and when. This works for them, but may not for you.
Simon, UK
(looking for development/DBA jobs...)
October 21, 2002 at 10:58 am
I'd be reluctant to rely on the data tier - sooner or later someone will bypass it to do direct updates for whatever reason. Nope, you can't pass params in. What you do have is the spid, so you just need a way to tie the spid to the user - easily done by adding a row to a table each time someone connects. I also include hostname; that way I know where they were even if I don't know who. Trying to determine the changes in the proc can get intensive if they are modifying a lot of rows. I don't have the timespan requirements you do, so I generally save the entire row.
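The spid approach sketched out (table and column names are illustrative; the pattern is: the data layer registers the connection, and triggers resolve the user via @@SPID):

```sql
-- Connection registry, written to by the data layer at connect time.
CREATE TABLE ConnectionAudit (
    Spid      smallint    NOT NULL,
    AppUser   varchar(50) NOT NULL,
    HostName  varchar(50) NULL,
    LoginTime datetime    NOT NULL DEFAULT GETDATE()
)

-- Called by the data layer immediately after connecting
-- ('jsmith' stands in for the real application user):
INSERT INTO ConnectionAudit (Spid, AppUser, HostName)
VALUES (@@SPID, 'jsmith', HOST_NAME())

-- Inside a trigger, resolve the real user for the current connection:
SELECT TOP 1 AppUser
FROM ConnectionAudit
WHERE Spid = @@SPID
ORDER BY LoginTime DESC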
You might look at meeting the requirements by using a server side trace to capture all activity.
Andy
October 21, 2002 at 3:28 pm
We have full auditing on tables, and only store the changed data. Performance is not really an issue because we have very narrow tables.
The triggers are generated from a script to standardize everything.
As for the username, a few options:
1. use SETUSER which requires each user having an account.
2. Or have a table with connection details, one row per spid. This is then updated when a connection is made in your data layer, and the trigger can then access it. If the app connecting is not your app (e.g. someone using Query Analyzer), you could use the info from sysprocesses.
3. One word of warning on the app solution: if you do not use a front end that is bound to the data set (e.g. the data grid), then when you update the data set you have to make sure you are not updating a column to the value it already has, because that causes the dataset to think the data has changed.
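The sysprocesses fallback mentioned in option 2 might look something like this (sysprocesses and its columns are real SQL Server 2000 system objects; how you use the result is up to your trigger):

```sql
-- Look up the login, host, and program for the current connection
-- when it did not come through your own data layer.
SELECT loginame, hostname, program_name
FROM master..sysprocesses
WHERE spid = @@SPID
```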
Space will always be an issue.
Simon Sabin
Co-author of SQL Server 2000 XML Distilled
http://www.amazon.co.uk/exec/obidos/ASIN/1904347088
October 22, 2002 at 3:28 pm
Thank you for your replies.
I had not considered using the SPID to track back to the login name; that will resolve that issue.
For anyone else that may read this thread and is facing this same issue, I found an article on this topic at: http://www.inquiry.com/techtips/thesqlpro/10min/10min0900.asp
This article also mentions a commercial product SQLAudit that can automate this process.