August 4, 2006 at 8:31 am
To everyone complaining about the specifics of the AUDIT table in this article: you are missing the point. It is not my intention to suggest that this is the ideal format for an audit table to be used in every application. I am not recommending that you use single-field autoincrement keys, nor do I prohibit you from adding code to capture all the "insert" data if you want to be able to reconstruct complete records, nor do I recommend this sort of row-per-column-change table for high traffic situations. With a few hours of effort you can obviously convert this code to support almost any kind of audit table that you want. I had to choose SOME sort of audit table for the purpose of illustration; obviously any choice could have been subject to the same sorts of criticisms.
The purpose of the article is to demonstrate a capability of the CLR. Since triggers require no parameters, you can dispense with the tedious business of writing a code generator to generate a custom SQL trigger for every single table (or, heaven forbid, writing such code by hand). Instead of miles and miles of SQL, you can now implement a trigger for each table in your app with just 3 lines of mindless SQL code.
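For illustration, here is roughly what those 3 lines look like for one table (a sketch: the table name Customers is hypothetical, and the assembly/class names assume the AuditCommon naming used in the deployment script later in this thread):

```sql
create trigger Audit_Customers on Customers
for insert, update, delete
as external name [AuditCommon].[AuditCommon.Triggers].AuditCommon
```

The same three lines work for any table, because the CLR routine takes no parameters and discovers its context at run time.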
Yes I realize you still must write code generators (or, heaven forbid, write code by hand) to create SQL for all other CRUD operations. That's unfortunate, but Rome wasn't built in a day. I am hoping that this article will demonstrate to people at Microsoft that the CLR, if enhanced, can perhaps be used to eventually replace all of the enormous CRUD code that most of us have to generate.
Thanks for reading and contributing! - the author
August 4, 2006 at 8:39 am
Well, perhaps my problem is that I converted the code to C# and modified it to meet our specific needs. But I cut it down to the simplest code and just tried to execute the highlighted statement, and still got the message. Is C# more rigorous in its checking of required parameters than VB.Net? (BTW, the documentation says the "target" is required.)
Perhaps I'll give it a try in VB to see if that's the difference. If so, I'd be nervous about Microsoft's intent and the stability of the approach.
August 4, 2006 at 8:55 am
Stan Hargrove: Rather than convert back to VB, you could also try just choosing some arbitrary table name (real or perhaps even nonexistent) and use that. The CLR might not require the table to exist at compile time, and even if it did, you could just choose a real table name in each different app. I would also guess that the Target attribute is used purely for deployment purposes, i.e. it shouldn't change the behavior of this code. And as I point out in the article, even when the compiler accepts the VB code with the missing Target attribute, the deployment features obviously won't work for a table-agnostic solution like this one anyway.
With regard to the stability of the solution: in 2005 I attended PDC and had a chance to talk to some of Microsoft's database people, who thought this concept was "cool". I pointed out both of the problems mentioned in my two "tech notes" (the other problem being the failure of the CLR to populate the TableName on the Inserted and Deleted tables). They didn't make any promises, but I take their interest as a hopeful sign that they will change the CLR to enhance its table-agnostic capabilities rather than eliminate them.
August 22, 2006 at 10:55 am
Here is a simple script that will create the audit trigger for all user tables in your database. Obviously, if you don't want to do it for all tables, modify the script accordingly... Thanks David for the example. In my case, the project I'm using this on needs *every* change logged, even if I don't know who the user is, so this is useful to me.
-- Associate the generic CLR trigger with all user tables that aren't the audit
-- table and are not the mysterious dtProperties table which is labeled as a user
-- table as well...
declare @table varchar(128)
declare @sql varchar(1024)
declare @triggername varchar(128)

declare curTables cursor for
    select name from sysobjects where xtype = 'U' and name != 'Audit' and category != 2
open curTables
fetch next from curTables into @table
while @@fetch_status = 0
begin
    set @triggername = 'Audit_' + @table
    if exists (select name from sysobjects where name = @triggername and xtype = 'TA')
    begin
        -- drop the existing trigger first
        set @sql = 'drop trigger [' + @triggername + ']'
        print @sql
        execute(@sql)
    end
    set @sql = 'create trigger [' + @triggername + '] on [' + @table + '] for insert, update, delete as external name [AuditCommon].[AuditCommon.Triggers].AuditCommon'
    print @sql
    execute(@sql)
    fetch next from curTables into @table
end
close curTables
deallocate curTables
May 12, 2008 at 8:48 am
TECH NOTE 2: The CLR trigger routine relies on a field naming convention to extract the target table name, because the "TableName" fields (i.e. DataTable.TableName in the code) of the "inserted" and "deleted" trigger tables are not properly populated by SQL Server with the name of the target table (ironically, these pseudo-tables contain essentially all the information you could want to know about the target table except its name). The convention involves using the table name as part of the name of the first (key) field. Obviously, if your database does not employ such a convention, this trigger routine will be unable to provide the table name in your audit table. We have been informed by Microsoft that in a future release the table name may be provided somewhere within the trigger context, but this capability will not be present in the initial release of SQL Server 2005.
Does anyone know if this has been fixed in SQL 2005 or do we have to wait until SQL 2008?
SQLGeordie
Web: Jarrin Consultancy
Blog: www.chrisjarrintaylor.co.uk
Twitter: @SQLGeordie
September 30, 2008 at 1:55 pm
Just to provide a complete solution for everyone, following George's suggestion, see below. We've used this successfully and performance is excellent. For tables with binary/text columns you will need to change the trigger a bit.
Hope this helps. This will only work with Sql Server 2005 and above.
CREATE TABLE [dbo].[Audit](
[AuditId] [bigint] IDENTITY(1,1) NOT NULL,
[TableName] [varchar](128) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[DateCreated] [datetime] NULL DEFAULT (getdate()),
[UserId] [int] NULL,
[SystemUser] [varchar](80) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[Operation] AS (case when [OldValue] IS NULL then 'Insert' else case when [NewValue] IS NULL then 'Delete' else 'Update' end end),
[OldValue] [xml] NULL,
[NewValue] [xml] NULL,
[OldBinaryValue] [varbinary](max) NULL,
[NewBinaryValue] [varbinary](max) NULL
)
CREATE TABLE [dbo].[AuditTable](
[TableName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[IdentityColumnName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[UserIdColumnName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
CONSTRAINT [PK_AuditTable] PRIMARY KEY CLUSTERED
(
[TableName] ASC
)
)
create proc [dbo].[SaveAudit]
@OldValues xml,
@NewValues xml
as
Declare @TableName varchar(128), @SystemUser varchar(80), @userid int,
@ColumnName varchar(128), @Operation varchar(10), @sql varchar(1000)
SELECT @SystemUser=system_user
SELECT @TableName=object_name(resource_associated_entity_id)
FROM sys.dm_tran_locks
WHERE request_session_id = @@spid and resource_type = 'OBJECT'
Insert Into Audit (TableName, UserId, SystemUser, OldValue, NewValue)
Values (@TableName, null, @SystemUser, @OldValues, @NewValues)
go
CREATE proc [dbo].[GetAuditInfo]
@TableName varchar(128),
@StartDate datetime,
@EndDate datetime
as
set nocount on
Declare @x int, @max int, @sql varchar(4000), @ColumnName varchar(200)
set @x=1
set @sql=''
SELECT @max=count(c.[name])
FROM sys.all_columns c
INNER JOIN sys.all_objects o ON c.object_id = o.object_id
INNER JOIN sys.types t ON c.system_type_id = t.system_type_id
WHERE o.[name] = @TableName AND o.[type] = 'U'
while (@x <= @max)
Begin
SELECT @ColumnName=c.[name]
FROM sys.all_columns c
INNER JOIN sys.all_objects o ON c.object_id = o.object_id
INNER JOIN sys.types t ON c.system_type_id = t.system_type_id
WHERE o.[name] = @TableName and c.column_id=@x AND o.[type] = 'U'
set @sql=@sql+'v.value(''@'+@ColumnName+''', ''varchar(8000)'') as '+@ColumnName
if @x<@max
set @sql=@sql+', '
set @x=@x+1
End
Declare @sql2 varchar(8000)
set @sql2='SELECT TableName, DateCreated, SystemUser, Operation,'+@sql+
' Into ##340834audit FROM Audit CROSS APPLY OldValue.nodes(''/deleted'') x(v)
Where TableName='''+@TableName+''''
exec(@sql2)
set @sql2=''
set @sql2='Insert Into ##340834audit SELECT TableName, DateCreated, SystemUser, Operation,'+@sql+
' FROM Audit CROSS APPLY NewValue.nodes(''/inserted'') x(v)
Where TableName='''+@TableName+''''
exec(@sql2)
select * from ##340834audit Order By TableName, DateCreated, SystemUser, Operation
drop table ##340834audit
go
And finally.....a generic audit trigger for your tables
Create trigger [dbo].[AuditYourTable] on [dbo].[YourTable]
for insert, update, delete
as
declare @i xml, @d xml
set @i = (select * from inserted for xml auto)
set @d = (select * from deleted for xml auto)
exec SaveAudit @d, @i
GO
October 13, 2008 at 9:32 am
jstuparitz (9/30/2008)
Just to provide a complete solution for everyone, following George's suggestion, see below. We've used this successfully and performance is excellent. For tables with binary/text columns you will need to change the trigger a bit. Hope this helps. This will only work with Sql Server 2005 and above.
This is really great. Solves the problem of too much overhead with the OO method, but still allows for a complete ability to see changes to rows.
Thanks! :w00t:
November 12, 2008 at 2:20 pm
The issue I don't see anyone address is when you need to see which values were actually changed. The updated(), columns_updated() and sys.fn_IsBitSetInBitmask functions can all be used to get the fields changed, but in each case you get all fields that are in the UPDATE statement.
So if you are just writing the same value back (e.g. firstname = Jason, and the UPDATE statement sends Jason again from a textbox, so the value hasn't actually changed from the application's view), these functions are useless! The only way to solve this is to write complex triggers that compare the inserted and deleted values to get the columns that actually changed, or you could deal with it on the application side, but then you will have to do a lot of processing and your stored procedures will have to be smart enough to update only the fields that actually changed.
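As a sketch of that inserted-versus-deleted comparison (the table and column names here are hypothetical): EXCEPT does a NULL-safe, value-by-value comparison, so it returns only the rows in which at least one of the compared columns genuinely changed:

```sql
-- Inside an AFTER UPDATE trigger: keep only rows that truly changed.
-- Rows 'updated' to their existing values fall out of the EXCEPT.
select i.CustomerId, i.FirstName, i.LastName
from (select CustomerId, FirstName, LastName from inserted
      except
      select CustomerId, FirstName, LastName from deleted) as i
```

Only the surviving rows need to reach the audit insert. EXCEPT requires SQL Server 2005 or later, which matches the rest of this thread.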
November 12, 2009 at 1:07 pm
jstuparitz (9/30/2008)
Just to provide a complete solution for everyone, following George's suggestion, see below. We've used this successfully and performance is excellent. For tables with binary/text columns you will need to change the trigger a bit. Hope this helps. This will only work with Sql Server 2005 and above.
Thank you, thank you, thank you!
BTW, what is [dbo].[AuditTable] for? It's not used anywhere.
Lastly, in my case I got an error where the following SQL returned the same column more than once, so I had to add DISTINCT:
SELECT @max=COUNT(DISTINCT c.[name])
FROM sys.all_columns c
INNER JOIN sys.all_objects o
ON c.object_id = o.object_id
INNER JOIN sys.types t
ON c.system_type_id = t.system_type_id
WHERE o.[name] = @TableName
AND o.[type] = 'U'
February 1, 2010 at 2:31 am
Swirl80 (5/12/2008)
Does anyone know if this has been fixed in SQL 2005 or do we have to wait until SQL 2008?
Check this article
hth /Peter
November 3, 2010 at 2:18 am
Sorry for the late bump.
We implemented a solution to address audit concerns by:
1. Creating an internal tool to define:
A. The schema of tables of audit interest and their related lookup tables (to capture lookup values at trigger execution, since raw foreign key values can have their referenced values changed; e.g. for lookup-table state Id = 1, the state name could be changed from 'Arizona' to 'Texas').
B. 'Friendly' field names and relationships.
2. Create an end user tool which creates high performance, hard coded field name triggers which capture end user specified audit information.
3. End user tool to view and mark 'reviewed' status of information.
This strategy places the ball in the client's court to define relevant audit information. Generally, an intermediate developer can be tasked with updating the schema and then the client becomes wholly responsible for audit.
Any thoughts?
November 3, 2010 at 6:28 pm
geerobg (11/3/2010)
Sorry for the late bump ...This strategy places the ball in the client's court to define relevant audit information. Generally, an intermediate developer can be tasked with updating the schema and then the client becomes wholly responsible for audit.
Any thoughts?
Actually, in this day of multi-terabyte disks I have given up on incremental and/or partial audits entirely. I now have an approach that audits absolutely everything. It is illustrated in an article series that started a few weeks ago and is still running now. It's called "Building Nearly Codeless Apps", and you can find all its parts by searching on that string. Auditing is just one aspect of it.
May 19, 2011 at 10:45 pm
I've been reviewing a lot of these options, and it seems that generic triggers are mostly horrendous to read/understand/maintain and/or have performance problems. I've nearly finished developing a generic stored procedure that can add a table-specific trigger to each table, for all columns. This has the advantage that the code that generates the trigger is generic, but the trigger itself is specific, and hence is high performance. It can also be re-run periodically if the table schema changes. I will post it here when done.
However, I don't want to log updates (or deletes, but that's another topic) where they are caused by cascading from a parent table, and I'm having trouble identifying these. It's not a "recursive_trigger" issue, as the update (or delete) occurs as a result of a cascade (recursive triggers are off anyway).
Any advice on how to exit a trigger when it is firing as the result of a cascade would be much appreciated.
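One technique worth trying (a sketch and an assumption on my part, not something from the article: it only works if you control the code that issues the parent-table statement) is to flag the session with SET CONTEXT_INFO around the parent operation and test the flag at the top of the child trigger:

```sql
-- In the procedure that deletes/updates the parent table:
set context_info 0x01   -- flag meaning 'cascade in progress'
delete from ParentTable where ParentId = @ParentId
set context_info 0x00   -- clear the flag

-- At the top of the child table's audit trigger:
if context_info() = 0x01
    return   -- change was caused by the cascade; skip logging
```

CONTEXT_INFO is per-session, so concurrent sessions don't interfere, but every code path that touches the parent table has to set the flag for this to be reliable.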
Warwick
October 21, 2012 at 10:11 am
Warwick Wood (5/19/2011)
I will post it here when done.
Just a followup. Were you ever able to complete this?
--Jeff Moden
Change is inevitable... Change for the better is not.