July 13, 2006 at 11:59 am
Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/dziffer/creatingagenericaudittriggerwithsql2005clr.asp
August 2, 2006 at 7:42 am
I have to say that, as far as database design and good coding practices go, this solution is definitely not something that I would recommend. Your idea of having a single audit table with Row-Id (1) works only for single-column primary keys, and (2) would result in HUGE overhead for inserts, updates, and deletes whenever more than one column is affected. If I insert one row into a table that has 80 columns, your trigger would add the additional overhead of inserting 80 rows, one per column. No way would a well-educated DBA allow or encourage this to occur.
This type of coding/behavior is a more OO approach to database development, and I've seen a lot of inexperienced-database-developers-who-are-good-application-developers design such tables. OO-to-relational mapping is hard, no question, but a relational database is not object-oriented and, when you try to make it such (as with this generic design), you cause more headaches than you would by creating an audit table and trigger for each table. IMO, this is an example of the reasons most DBAs will not allow CLR code to be executed on their servers. This is just an example of a .NET programmer doing something in .NET that should be done in T-SQL.
Scott Whigham
LearnSqlServer.com - tutorials for SQL Server 2005 and SQL 2000
August 2, 2006 at 8:17 am
Totally agree with Scott on this one.
What happens if you delete 250 000 rows from a table with 80 columns???
How do you recreate the whole history of a single row with a select query??? (All the columns of the row at each version of its life).
BTW, I didn't read the whole thing, but it seems you are having problems identifying the base table name from within the trigger. Maybe you can use this code somehow to extract the base table name (works from SQL 2000 on).
Select Object_name(Parent_obj) as BaseTableName from dbo.SysObjects where id = @@PROCID
Where @@PROCID is the id of the trigger in sysobjects. This variable is available and set within the trigger and any other procedure for that matter.
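A minimal sketch of this inside a trigger (the table and trigger names here are hypothetical):
create trigger MyTable_Audit on MyTable
for insert, update, delete
as
declare @BaseTableName sysname
--@@PROCID is the object id of the currently executing trigger;
--Parent_obj is the id of the table the trigger is attached to
select @BaseTableName = Object_name(Parent_obj)
from dbo.SysObjects
where id = @@PROCID
--@BaseTableName now holds 'MyTable' and can be passed to generic audit code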
August 2, 2006 at 8:33 am
Agree. Stick to VB.
It does give some good insight into .NET, though.
August 2, 2006 at 9:52 am
I would have gotten more out of the note if the lines didn't extend past the limits of my fully expanded browser. Recommend either not preformatting the text or using a more reasonable line width.
August 2, 2006 at 10:01 am
My god... I have to agree with the previous posts. This type of trigger is a wholly bad idea. Besides the massive overhead of splitting each column out into its own row for each update, there will also be contention as every table in the database attempts to write to a single audit table.
And as RGR mentioned, how do you get this data back out without overly complex SQL queries?
August 2, 2006 at 10:46 am
Do any of you naysayers (not that you are wrong) have a better plan? What do YOU suggest?
August 2, 2006 at 11:14 am
A solution to this problem has already been created on this site. Search the script section with the keywords triggers and (generated, concatenated... sorry don't remember the name).
Basically it's a script that creates an audit table for each table in the system. It also creates the trigger that audits the data. I don't remember if it creates only the code or if it generates the objects as well. But it was working great as far as I remember.
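For the flavor of it, here is a rough sketch of what such a generator might produce for one table (names and details are assumed; this is not the referenced script):
declare @t sysname, @sql nvarchar(max)
set @t = 'MyTable' --hypothetical table name

--1. create an empty audit copy of the table plus audit metadata columns
--   (identity columns would need extra handling, omitted here)
set @sql = N'select *,
    cast(null as datetime) as AuditDate,
    cast(null as char(1)) as AuditAction
into ' + quotename(@t + '_Audit') + '
from ' + quotename(@t) + ' where 1 = 0'
exec (@sql)

--2. create the trigger that copies every affected row into the audit copy
set @sql = N'create trigger ' + quotename(@t + '_AuditTrg') + '
on ' + quotename(@t) + ' for insert, update, delete
as
insert into ' + quotename(@t + '_Audit') + ' select *, getdate(), ''N'' from inserted
insert into ' + quotename(@t + '_Audit') + ' select *, getdate(), ''O'' from deleted'
exec (@sql)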
August 2, 2006 at 12:38 pm
I have a template that I put together for CodeSmith that I use. It works for a single or multiple tables. Though I should probably update it for SQL Server 2005 to possibly use an audit schema for the audit tables.
August 2, 2006 at 1:06 pm
I don't think it's a good idea to audit ALL tables in the database.
For the most important tables (which usually are small so the overhead is minimal), I use the following solution (SQL 2005):
The logging table (only one for the entire database):
create table DMLLogs (
    DMLLogID int identity primary key,
    TableName nvarchar (128),
    DateCreated datetime DEFAULT (getdate()),
    OldValues xml NULL, --the "deleted" rows, as XML
    NewValues xml NULL) --the "inserted" rows, as XML
The trigger for the table you want to audit; the only thing that needs to be changed for another table is the table name. Alternatively, you can use the solution presented earlier in this thread:
Select @TableName = Object_name(Parent_obj)
from dbo.SysObjects
where id = @@PROCID;
In that case the trigger will be exactly the same for all audited tables.
create trigger MyTable_T_Log on MyTable
for insert, update, delete
as
declare @i xml, @d xml
--capture the affected rows as attribute-centric XML
set @i = (select * from inserted for xml auto)
set @d = (select * from deleted for xml auto)
exec LogDMLEvent
    @TableName = 'MyTable',
    @Deleted = @d,
    @Inserted = @i
The procedure LogDMLEvent is:
create procedure LogDMLEvent
    @TableName sysname,
    @Deleted xml,
    @Inserted xml
as
--skip statements that affected no rows (FOR XML returns NULL for empty sets)
if @Deleted is not null or @Inserted is not null
    insert into DMLLogs (TableName, OldValues, NewValues)
    values (@TableName, @Deleted, @Inserted)
You can also include the user name, connection info, etc. in this procedure (and in the DMLLogs table).
So in the logging table you will have XML representations of the inserted and deleted tables. You can very easily wrap them in a view that mimics the original table structure (one view for each table).
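For instance, for a table MyTable with columns (a, b, c, Description) -- the example used later in this thread -- such a view could shred NewValues back into columns (a sketch; the column list is assumed):
create view MyTable_NewValues as
select L.DMLLogID, L.DateCreated,
    T.c.value ('@a', 'int') as a,
    T.c.value ('@b', 'int') as b,
    T.c.value ('@c', 'int') as c,
    T.c.value ('@Description', 'varchar (100)') as Description
from DMLLogs L
cross apply L.NewValues.nodes ('inserted') T(c) --one row per logged row
where L.TableName = 'MyTable'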
August 2, 2006 at 1:49 pm
How do you scan this table to recreate the history of a row (or rows)? Or to roll back a group of transactions?
August 2, 2006 at 3:16 pm
We can argue about any solution for auditing: whether to create a history table, or a generic table holding only the changed data and not worrying about deletes (or designing the table so there are no physical deletes, just logical ones).
What I have learned from experience is that we cannot have a BEST practice design or coding style, as it changes drastically from project to project. Just go with the flow.
I will still thank the author for putting his idea on the table.
Amit Lohia
August 2, 2006 at 4:03 pm
It's possible to see how the data in the table looked at a previous moment.
The table being defined as:
create table MyTable (
a int not null primary key,
b int,
c int,
description varchar (100)
)
The function MyTable_History will return the data from the table as it existed at the @Date moment.
create function MyTable_History (@Date datetime)
returns @r table (a int primary key, b int, c int, Description varchar (100))
as
begin
    --start from a copy of the current table
    insert into @r (a, b, c, Description)
    select a, b, c, Description
    from MyTable

    declare @inserted xml, @deleted xml

    --apply all the changes made to this table since @Date, in reverse order
    declare c cursor local for
        select OldValues, NewValues
        from DMLLogs
        where TableName = 'MyTable' and DateCreated >= @Date
        order by DMLLogID desc
    open c
    fetch next from c into @Deleted, @Inserted
    while @@fetch_status = 0 begin
        if @Inserted is not null and @Deleted is not null --update operation: restore the old values
            update R set
                b = T.c.value ('@b', 'int'),
                c = T.c.value ('@c', 'int'),
                Description = T.c.value ('@Description', 'varchar (100)')
            from @r R inner join @Deleted.nodes ('deleted') T(c)
                on R.a = T.c.value ('@a', 'int')
        if @Deleted is null --insert operation; the rows must be deleted
            delete R
            from @r R inner join @Inserted.nodes ('inserted') T(c)
                on R.a = T.c.value ('@a', 'int')
        if @Inserted is null --delete operation; the rows must be inserted back into the table
            insert into @r (a, b, c, Description)
            select T.c.value ('@a', 'int'),
                   T.c.value ('@b', 'int'),
                   T.c.value ('@c', 'int'),
                   T.c.value ('@Description', 'varchar (100)')
            from @Deleted.nodes ('deleted') T(c)
        fetch next from c into @Deleted, @Inserted
    end
    close c
    deallocate c
    return
end
go
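Usage is then an ordinary select against the function (a sketch; the date expression is illustrative):
declare @d datetime
set @d = dateadd(hour, -1, getdate())
select * from dbo.MyTable_History(@d) --the table as it was one hour ago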
The shortcoming is that this function does not correctly handle cases where the primary key (column a) was changed by an update operation.
Also, it will be very slow on large tables. For those it is more efficient to update the table directly, as the following procedure does:
create procedure Recover_MyTable
    @Date datetime
as
begin tran
    declare @inserted xml, @deleted xml

    --note: the audit trigger on MyTable will also log these recovery statements
    declare c cursor local for
        select OldValues, NewValues
        from DMLLogs
        where TableName = 'MyTable' and DateCreated >= @Date
        order by DMLLogID desc
    open c
    fetch next from c into @Deleted, @Inserted
    while @@fetch_status = 0 begin
        if @Inserted is not null and @Deleted is not null --update operation: restore the old values
            update R set
                b = T.c.value ('@b', 'int'),
                c = T.c.value ('@c', 'int'),
                Description = T.c.value ('@Description', 'varchar (100)')
            from MyTable R inner join @Deleted.nodes ('deleted') T(c)
                on R.a = T.c.value ('@a', 'int')
        if @Deleted is null --insert operation; the rows must be deleted
            delete R
            from MyTable R inner join @Inserted.nodes ('inserted') T(c)
                on R.a = T.c.value ('@a', 'int')
        if @Inserted is null --delete operation; the rows must be inserted back into the table
            insert into MyTable (a, b, c, Description)
            select T.c.value ('@a', 'int'),
                   T.c.value ('@b', 'int'),
                   T.c.value ('@c', 'int'),
                   T.c.value ('@Description', 'varchar (100)')
            from @Deleted.nodes ('deleted') T(c)
        fetch next from c into @Deleted, @Inserted
    end
    close c
    deallocate c
commit tran
go
You can test the procedure using the following code:
begin tran
go
--load some data into the table
insert into MyTable (a, b, c, description)
select 1, 2, 3, 'first inserted'
union all
select 2, 3, 4, 'inserted'
waitfor delay '0:00:01' --wait a second; datetime has only ~3 ms resolution, so we risk getting the same timestamp
declare @d datetime set @d = getdate() --store the time
waitfor delay '0:00:01'
-- change the data
insert into MyTable (a, b, c, description)
select 100, 2, 3, 'inserted'
union all
select 200, 2, 3, 'inserted'
update mytable set b = 5
delete from mytable where a = 2
select * from MyTable --watch the current data
exec Recover_MyTable @d --recover the old data
select * from MyTable --the changes are undone
go
rollback
That's it.
PS. Does anyone know how to insert tabs in the message?
August 2, 2006 at 7:35 pm
When I read this article, it seemed like an absolutely correct concept for METADATA tables. I think it is grossly inefficient for large DATA tables, however. Except:
1. It never shows either the original inserted record or the complete deleted record; fields are logged only for updates. Adding the fields for deletes shouldn't put too much burden on the system for tables that see a small proportion of deletes.
2. Assembling views of a record at any point in time would take some (CLR) programming, but the logic could be built once and used for all tables (see the sketch after this list).
3. Similarly, the difficult logic of using an audit trail to roll back a table to some previous state could be programmed just once for all audited tables.
4. It simplifies the task of identifying all the changes made by an individual during a time frame regardless of the table involved. I think this single source is a good auditing tool.
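To illustrate point 2, a hypothetical T-SQL sketch: rebuild two columns of one row as of a given moment by taking, per column, the last value logged at or before that time. The audit table layout here (TableName, RowId, ColumnName, NewValue, ChangeDate) is assumed, not the article's exact schema, and it assumes the initial insert was logged as well.
declare @AsOf datetime
set @AsOf = '20060801' --sample moment in time
--latest logged value per column at or before @AsOf, pivoted back into one row
select max(case when A.ColumnName = 'b' then A.NewValue end) as b,
       max(case when A.ColumnName = 'c' then A.NewValue end) as c
from Audit A
inner join (
    select ColumnName, max(ChangeDate) as LastChange
    from Audit
    where TableName = 'MyTable' and RowId = 1 and ChangeDate <= @AsOf
    group by ColumnName
) Latest on A.ColumnName = Latest.ColumnName
        and A.ChangeDate = Latest.LastChange
where A.TableName = 'MyTable' and A.RowId = 1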
My biggest problem, however, is that I can't make it work. When I try to compile just the shell of the code, I find the compiler will not allow me to skip the Target attribute in
<Microsoft.SqlServer.Server.SqlTrigger(Name:="AuditCommon", Event:="FOR UPDATE, INSERT, DELETE")> _
I wonder if this article, which was originally written before SQL 2005 was released, is now out of date and MS no longer allows it. It seems the author anticipated this when he wrote:
''' Note that the "SqlTrigger" attribute does not contain one of its normal tags; namely, it does not
''' specify any particular table. We don't know if it is Microsoft's intention to allow table-agnostic
''' trigger code, but this works and we hope that it keeps working.
August 4, 2006 at 8:15 am
Stan Hargrove: Thanks for trying out my solution. I don't understand why you had a compilation problem with the missing Target attribute, though. I compiled the code in the article in July 2006, just before publishing it on SQLServerCentral, and it worked just fine. My version is:
Microsoft Visual Studio 2005
Version 8.0.50727.42 (RTM.050727-4200)
Microsoft .NET Framework
Version 2.0.50727
Installed Edition: Professional
Microsoft Visual Basic 2005 77626-009-0000007-41371
- The author