May 17, 2012 at 6:05 am
Hi All,
We use SQL Server 2005 and 2008. I need to trace INSERTED, DELETED and UPDATED records in some tables, in this manner:
- LogID (auto increment)
- TableName
- LogType (Insert,Delete,Update)
- LogDesc
(Inserted Cust_ID = 001, Cust_Name = John etc....)
(Deleted Cust_ID = 002)
(Updated Cust_Name From John to David)
- LogDateTime
- LogUser
I've searched but had no luck; the samples only show a duplicate of the audited table. But I only need ONE table for all logs.
Please share your ideas. Thanks.
May 17, 2012 at 6:41 am
Format the data into an XML block, insert into a log table with an XML column. You can do that in a trigger pretty easily.
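Something like this, as a bare sketch (table and column names are made up):
create table dbo.AuditLog (
    LogID int identity(1,1) primary key,
    TableName sysname not null,
    LogType char(1) not null,        -- I, U, or D
    LogData xml null,                -- the formatted payload
    LogDateTime datetime not null default getdate(),
    LogUser sysname not null default suser_sname()
);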
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
May 17, 2012 at 8:26 am
XML is a great way to genericize a payload in this instance since it will vary from one table to the next. Logging all DML operations to a single table though? Talk about a potential bottleneck.
You may want to think about employing some Service Broker queues to spread out the load and help alleviate some of the issues you'll no doubt experience on a system with a high transaction volume, or even low transaction volume with high concurrency on some of your tables.
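As a very rough sketch of the Service Broker plumbing (all object names here are illustrative, and the activation procedure that drains the queue into the log table is left out):
create message type AuditMessage validation = well_formed_xml;
create contract AuditContract (AuditMessage sent by initiator);
create queue dbo.AuditInitiatorQueue;
create queue dbo.AuditTargetQueue;
create service AuditInitiatorService on queue dbo.AuditInitiatorQueue (AuditContract);
create service AuditTargetService on queue dbo.AuditTargetQueue (AuditContract);

-- inside the trigger, send the XML instead of inserting it directly
declare @h uniqueidentifier;
declare @payload xml;
set @payload = (select * from deleted for xml auto, type);
begin dialog conversation @h
    from service AuditInitiatorService
    to service 'AuditTargetService'
    on contract AuditContract
    with encryption = off;
send on conversation @h message type AuditMessage (@payload);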
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
May 17, 2012 at 9:12 am
Third vote for XML here.
What you do need to consider is what happens when someone inserts multiple records at once. XML can handle "inserted Cust1", "inserted Cust2" as a single document or fragment.
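For example, a three-row insert collapses into one XML value in one log row (column names borrowed from the original post):
-- inside an insert trigger
select Cust_ID, Cust_Name
from inserted
for xml path('row'), root('inserted'), type;
-- produces one value like:
-- <inserted><row><Cust_ID>001</Cust_ID><Cust_Name>John</Cust_Name></row><row>...</row><row>...</row></inserted>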
May 18, 2012 at 8:57 am
I once built a whole audit system for a database using FOR XML. Do the columns as "NullIf(Deleted.MyColumn, Inserted.MyColumn) as MyColumn" in the SELECT, and you very cleverly get just the columns that were actually changed. That saves a lot of storage overhead for the audit log if you have a lot of small updates that really only hit one or two columns out of a whole table.
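The trick works because NULLIF returns its first argument unless the two arguments are equal:
select nullif('John', 'David');  -- 'John': the value changed, so the old value survives
select nullif('John', 'John');   -- NULL: unchanged, so the column drops out of the XML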
One of the cool things about it is you can actually have a relatively simple DDL trigger that will update the logging trigger if you change any of the columns in a table you are auditing. You don't have to remember to modify your triggers if you add a column or anything like that. Pretty slick if you do it right.
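In outline, that DDL trigger might look like this (dbo.RebuildAuditTrigger is a hypothetical proc that rescripts the logging trigger from the table's current column list):
create trigger ddl_RefreshAuditTrigger on database
after alter_table
as
begin
    declare @TableName sysname;
    set @TableName = eventdata().value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname');
    -- regenerate the audit trigger for the altered table
    exec dbo.RebuildAuditTrigger @TableName = @TableName;
end;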
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
May 18, 2012 at 2:05 pm
rjs123431 (5/17/2012)
We use SQL Server 2005 and 2008. I need to trace INSERTED, DELETED and UPDATED records in some tables... But I only need ONE table for all logs.
Can you please provide some idea of the volume and frequency of the transactions that require auditing?
________________________________________________________________
you can lead a user to data....but you cannot make them think
and remember....every day is a school day
May 18, 2012 at 5:17 pm
And someone please post one of the XML triggers. One of you 3 folks recommending this should have one right at your fingertips, yes? I'd like to see what's going on there. I'd like to take a look and do some comparisons with "normal" triggers.
--Jeff Moden
Change is inevitable... Change for the better is not.
May 18, 2012 at 5:21 pm
rjs123431 (5/17/2012)
...- LogDesc
(Inserted Cust_ID = 001, Cust_Name = John etc....)
(Deleted Cust_ID = 002)
(Updated Cust_Name From John to David)...
The problem with the contents of your "LogDesc" column is that you lose the datatype if you ever want to reassemble the rows. Are you sure you actually want to do it that way? If the datatype of the data (not to mention performance during inserts into the log table) has some bearing, you might want to look into the SQL_VARIANT datatype. I've written audit triggers with it that are so fast it's like the audit trigger isn't even there.
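For reference, that kind of table might look something like this (a sketch with illustrative names, not the actual triggers described above):
create table dbo.ColumnAuditLog (
    LogID int identity(1,1) primary key,
    TableName sysname not null,
    ColumnName sysname not null,
    LogType char(1) not null,  -- I, U, or D
    OldValue sql_variant null,
    LogDateTime datetime not null default getdate(),
    LogUser sysname not null default suser_sname()
);
-- sql_variant_property(OldValue, 'BaseType') recovers the original datatype later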
--Jeff Moden
Change is inevitable... Change for the better is not.
May 18, 2012 at 10:59 pm
Jeff Moden (5/18/2012)
And someone please post one of the XML triggers... I'd like to take a look and do some comparisons with "normal" triggers.
I don't have any on my current job, but what I was talking about is the same one you and I discussed in the threads on my articles on the subject a couple of years ago.
Something like:
create trigger MyAuditTrigger on dbo.MyTable
after update, delete
as
set nocount on;
insert into dbo.MyAuditLog (LogData)
select (select deleted.ID,  -- at least one plain column so XML AUTO has a table to tag
               nullif(deleted.Col1, inserted.Col1) as Col1,
               nullif(deleted.Col2, inserted.Col2) as Col2
        from deleted
        left outer join inserted
            on deleted.ID = inserted.ID
        for xml auto, type);
That's me typing from memory, and isn't tested, formatted, etc. Just a sample of what it would look like.
You'd probably want a datetime or datetime2 column in the log with a timestamp on it, but that's easy enough to add to this, or use a default constraint and skip it in the trigger.
You can also get some use out of including a rowversion datatype in the audit log for each row updated. Just add that to the XML. It'll never be the same in an update, and deletes capture all columns in the XML, so it's easy to keep in there.
Because XML doesn't keep null columns unless you tell it to, not telling it to keeps the XML down to what was actually changed. That saves significant space in logging tables that get lots of narrow updates.
For tables that mostly get lots of many-column updates, the XML overhead can actually be a storage liability, of course, since it has to include column names in it. There are ways to mitigate that, as with any XML, but it makes the auditing more complex.
The main advantage is you can audit whatever tables you feel need it, and it's easy to add a table to the log without having to create a parallel table. Can easily avoid the "oops, I added a column to the table, but didn't add one to the audit table", that can happen if you're using the auditing method of having two parallel tables. Can even set up DDL triggers to capture changes to the logged tables and modify the triggers accordingly, in your dev environment of course. You'll still want to migrate the modified triggers to production using your normal methods and checks.
Easier, in my experience, than having an audit table with TableName, ColumnName, and OldValue (sql_variant datatype) columns. Those can be more space efficient, but the triggers take longer to run, and reconstituting the data into rows is a pain from those, but dead easy from XML.
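For instance, getting an old value back out of the XML is a one-liner (the element and attribute names depend on how the trigger's FOR XML query was written; these match the example above, where AUTO mode tags the row after the deleted table):
select LogData.value('(/deleted/@Col1)[1]', 'varchar(100)') as OldCol1
from dbo.MyAuditLog;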
So, your mileage may vary, as usual with anything SQL Server.
That what you're looking for?
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
May 19, 2012 at 12:48 am
Thanks for your reply, but I don't want to use the XML data type.
What I want in my LogDesc is each field that was updated, with the old value and the new value of that field.
BTW, I tried using XML like (select * from deleted) as oldvalue for xml auto, but I don't want to do it this way.
May 19, 2012 at 7:33 am
GSquared (5/18/2012)
...That what you're looking for?
Nope. That's even better than what I was looking for. Thanks for the information, Gus. I really appreciate the time.
--Jeff Moden
Change is inevitable... Change for the better is not.
May 19, 2012 at 8:13 am
rjs123431 (5/17/2012)
I need to trace INSERTED, DELETED and UPDATED records in some tables... But I only need ONE table for all logs.
Take a look at the example trigger code that G-Squared posted. You'll need something similar to that (without the XML) for every "qualified" column in the table. Of course, you'll need to do some concatenation, as well. With or without XML, that can slow you down a fair bit but not as badly as a supposed "generic" trigger.
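To make that concrete, here's a bare-bones sketch of the concatenation approach for one column (the trigger and table names are made up; Cust_ID and Cust_Name come from the original post, and you'd repeat the pattern for every qualified column):
create trigger trg_Customer_AuditUpdate on dbo.Customer
after update
as
set nocount on;
insert into dbo.AuditLog (TableName, LogType, LogDesc, LogDateTime, LogUser)
select 'Customer', 'Update',
       'Updated Cust_Name From ' + isnull(d.Cust_Name, '(null)')
           + ' to ' + isnull(i.Cust_Name, '(null)'),
       getdate(), suser_sname()
from deleted d
inner join inserted i
    on d.Cust_ID = i.Cust_ID
where isnull(d.Cust_Name, '') <> isnull(i.Cust_Name, '');  -- only log actual changes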
Don't make the mistake of trying to make or use any form of generic audit trigger, because that requires making a copy of the INSERTED/DELETED tables due to scope limitations. Combine that with all the dynamic stuff you need to add in to find out what changed, etc., and you end up with some really slow code which is either memory intensive or TempDB intensive. I'm replacing some generic SQLCLR audit triggers that some folks put on the main tables at work because they take 4 minutes to process a tiny 10,000 row change of just 4 columns. The new triggers I've written run almost instantaneously. And no, there's no contention on the audit table even though I'm writing hundreds of thousands of rows from each of 8 tables.
Since I don't want generic triggers, I wrote a stored procedure to create the Audit trigger for a named table.
Why don't I just post a copy of one of those triggers or the trigger generation proc? Because they don't do what you want them to. I'm pretty sure that you're going to have to write a custom version of all this. I'd do it but I just don't have the time to do it for you. I don't mind helping with any questions you might have along the way, though.
--Jeff Moden
Change is inevitable... Change for the better is not.
May 19, 2012 at 9:46 am
J Livingston SQL (5/18/2012)
Can you please provide some idea of the volume and frequency of the transactions that require auditing?
Hello again
I use a trigger to audit (primarily) updates on "standing details" tables, such as products, customers, etc. (i.e., non-transactional tables).
The trigger is created against each table that requires auditing and posts to a single audit table. The audit table records the table name / PK / column name / old value / new value / user / time.
Changes to the base tables do not require any update to the trigger, and it records multi-row updates.
In a slow-changing environment with low concurrency this works fine for me...
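Roughly, that audit table looks something like this (a sketch; types and names here are illustrative, not the actual code):
create table dbo.AuditTrail (
    TableName sysname not null,
    PKValue nvarchar(100) not null,   -- the audited row's key, stored as text
    ColName sysname not null,
    OldValue nvarchar(max) null,
    NewValue nvarchar(max) null,
    AuditUser sysname not null default suser_sname(),
    AuditTime datetime not null default getdate()
);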
...however, to give you some metrics on performance: a 1 million row table with a four-column update takes 2.5 secs without the trigger and 50 secs with it.
Happy to share the code (not originally mine) if you are interested.
kind regards
________________________________________________________________
you can lead a user to data....but you cannot make them think
and remember....every day is a school day
May 19, 2012 at 11:31 pm
You don't need to record the "new value" for a valid audit log. That's in the table and can be retrieved from there. Save yourself some storage space, some CPU cycles per trigger event, etc., and just log old values.
Don't need to log inserts either, for the same reason. They're already logged, in the table you just inserted into. After all, if you insert Joe Smith into your dbo.People table, you can just Select * From dbo.People to get the data on Joe Smith.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
May 20, 2012 at 5:55 pm
J Livingston SQL (5/19/2012)
a 1 million row table with a four-column update takes 2.5 secs without the trigger and 50 secs with it. Happy to share the code (not originally mine) if you are interested.
A. I always like to see others' code.
B. I don't know for sure, but if you include the DDL for the audit table, maybe we can all science out a speedup together.
--Jeff Moden
Change is inevitable... Change for the better is not.