May 5, 2016 at 1:41 pm
Jeff Moden (4/28/2016)
Apologies. I'm not going to get to this tonight or tomorrow. I'm in the middle of a server migration and I need to sanitize the code before I post it here. We'll see what the weekend brings. For now, I can only comment to beware any trigger, generic or otherwise, that needs to make a copy of the two logical tables in the trigger.
Jeff...have you had a chance to do this?
thanks
________________________________________________________________
you can lead a user to data....but you cannot make them think
and remember....every day is a school day
May 5, 2016 at 1:51 pm
For my particular application, the audit is stored in its own database. I don't expect a lot of insert volume in my production DB, and both compress very well for backups: the production DB gets about 88% compression. The production DB is also very small: only about 8 MB right now. Heck, it's possible for SQL to cache the whole thing! Of the 40 tables in the system, only six are audited, as most of the rest are either lookups or tables whose rows cannot be edited or deleted, so an EAV audit works, at least for now. I haven't been able to get any users testing, so I have no real-world results yet.
-----
[font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]
May 5, 2016 at 2:01 pm
J Livingston SQL (5/5/2016)
Jeff Moden (4/28/2016)
Apologies. I'm not going to get to this tonight or tomorrow. I'm in the middle of a server migration and I need to sanitize the code before I post it here. We'll see what the weekend brings. For now, I can only comment to beware any trigger, generic or otherwise, that needs to make a copy of the two logical tables in the trigger.
Jeff...have you had a chance to do this?
thanks
Thanks for the reminder but I've not forgotten. Just taking a short break from our migrations.
--Jeff Moden
Change is inevitable... Change for the better is not.
May 5, 2016 at 2:20 pm
Jeff Moden (5/5/2016)
J Livingston SQL (5/5/2016)
Jeff Moden (4/28/2016)
Apologies. I'm not going to get to this tonight or tomorrow. I'm in the middle of a server migration and I need to sanitize the code before I post it here. We'll see what the weekend brings. For now, I can only comment to beware any trigger, generic or otherwise, that needs to make a copy of the two logical tables in the trigger.
Jeff...have you had a chance to do this?
thanks
Thanks for the reminder but I've not forgotten. Just taking a short break from our migrations.
thanks Jeff....as and when...
your contributions always appreciated
________________________________________________________________
you can lead a user to data....but you cannot make them think
and remember....every day is a school day
May 5, 2016 at 8:34 pm
Wow. I wrote the code almost 3 years ago. I didn't remember the problems that we had with Linq2SQL and other things that had to be written into the code. There's a whole lot of stuff in it that might not be needed by most people.
So my question is, what's your druthers... the code as I currently have it in production or a simpler, more understandable version?
--Jeff Moden
Change is inevitable... Change for the better is not.
May 5, 2016 at 9:29 pm
Jez-448386 (5/5/2016)
Many of the generic triggers that I have seen, including Pop Rivett's, create a SQL statement for each column in the table, so with a wide table of, say, 100 columns it would do 100 separate inserts into the audit table. That cannot be great for performance, as an insert of 1 row into the table would fire 100 inserts into the audit table. Is it better to pivot the data and then perform a single insert into the audit table? An insert of 1 row into the table would then fire 1 insert into the audit table of 100 rows.
Would pivoting the data for auditing work well for batch processing of 1 million rows?
Jez
Several points here.
1. The whole point of implementing such a model is to record not all column values but only those that have changed.
I've seen too many systems where the application performs updates one value at a time:
UPDATE Orders SET ShipToName = ... WHERE OrderNumber = ...
UPDATE Orders SET ShipToAddress1 = ... WHERE OrderNumber = ...
UPDATE Orders SET ShipToAddress2 = ... WHERE OrderNumber = ...
UPDATE Orders SET ShipToCity = ... WHERE OrderNumber = ...
By some coincidence, the people who develop such systems tend to choose "single-column to fit all" audit tables.
In that case it's way faster to copy a single value into a single column than to pivot the whole 100-column row into 100 rows, 99 of which contain useless data.
2. I have not seen the code of the triggers, but it can be done in the form of a UNION ALL query performing a single INSERT.
There are some discussion points, and it may not be the best way in every case, but it's doable.
I'd also consider inserting into a table variable one column at a time and then doing a single insert into the audit table.
3. UNPIVOT requires hardcoding the column names, so it would be pretty much the same approach as dynamically generated triggers using a UNION ALL construction; see the sketch below.
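For illustration only, here is a rough sketch of that single-INSERT, UNION ALL shape. The dbo.Orders table, its OrderNumber key, the two ShipTo columns and the dbo.Audit target are all placeholders, not anyone's actual schema; the audit columns mirror the test script later in this thread.
-- hypothetical example: one statement-level INSERT per UPDATE, one audit row per changed column
CREATE TRIGGER dbo.Orders_Audit ON dbo.Orders
FOR UPDATE
AS
SET NOCOUNT ON;
INSERT dbo.Audit (Type, TableName, PK, ColumnName, OldValue, NewValue, UpdateDate, UserName, AppName)
SELECT 'U', 'Orders', CONVERT(varchar(100), i.OrderNumber),
       v.ColumnName, v.OldValue, v.NewValue, GETDATE(), SYSTEM_USER, APP_NAME()
FROM inserted i
INNER JOIN deleted d
        ON d.OrderNumber = i.OrderNumber
CROSS APPLY (           SELECT 'ShipToName', CAST(d.ShipToName AS sql_variant), CAST(i.ShipToName AS sql_variant)
              UNION ALL SELECT 'ShipToCity', CAST(d.ShipToCity AS sql_variant), CAST(i.ShipToCity AS sql_variant)
            ) AS v (ColumnName, OldValue, NewValue)
-- keep only the columns whose value actually changed (INTERSECT treats NULLs as equal)
WHERE NOT EXISTS (SELECT v.OldValue INTERSECT SELECT v.NewValue);
GO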
_____________
Code for TallyGenerator
May 6, 2016 at 1:25 am
Jeff Moden (5/5/2016)
Wow. I wrote the code almost 3 years ago. I didn't remember the problems that we had with Linq2SQL and other things that had to be written into the code. There's a whole lot of stuff in it that might not be needed by most people. So my question is, what's your druthers... the code as I currently have it in production or a simpler, more understandable version?
don't mind... whichever you prefer.
thanks
________________________________________________________________
you can lead a user to data....but you cannot make them think
and remember....every day is a school day
May 6, 2016 at 3:32 am
Seriously, I see it was suggested before: did you have a look at CDC (Change Data Capture)? BOL: https://msdn.microsoft.com/en-us/library/cc645937.aspx. This technology provides (though it's only available in the Enterprise and Developer editions) out-of-the-box recording of any changes made, without the pain of an impact on the user's transaction. In other words, you'll have a registration of any changes made (not just updates; any changes: inserts, deletes and updates) and you won't have to worry about the impact on the user's processes. The impact of the trigger solution is HUGE. Not only is the end user's performance experience impacted, the multi-user experience is seriously impacted as well. Read: your users will experience additional long waiting times for locks between processes that seemingly aren't related, and even deadlocks are going to be your fate. Auditing through triggers, especially combined with writing into a generic key-name-value pairs table, is NOT the way to go for anything but very low volume tables.
As an additional argument against using such a generic key-name-value table solution, try thinking up a workable solution for disclosing the recorded audit information. Better yet, mock up an example table and generate a significant volume of test data into it, just to demo how disclosing the recorded audit data will perform and how (un-)useful this data will be in that format... We've had it and moved away from it, as it's a nightmare.
Saving on disk space is a terrible argument. You'll spend much more on development and testing, and worst of all, you'll waste your end users' time and ultimately lose their confidence...
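For reference, enabling CDC comes down to two system procedure calls plus a query against the generated change function. Everything below other than the system procedure and function names is a placeholder (a hypothetical dbo.Orders table in a hypothetical database), and it assumes Enterprise/Developer edition with SQL Server Agent running:
USE MyDatabase;  -- hypothetical database name
GO
EXEC sys.sp_cdc_enable_db;               -- enable CDC for the database
GO
EXEC sys.sp_cdc_enable_table             -- start capturing changes for one table
     @source_schema        = N'dbo',
     @source_name          = N'Orders',  -- hypothetical table
     @role_name            = NULL,       -- no gating role
     @supports_net_changes = 1;          -- requires a primary key or unique index
GO
-- read everything captured so far for that table's capture instance (dbo_Orders)
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_Orders'),
        @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();
SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all');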
May 6, 2016 at 6:09 am
R.P.Rozema (5/6/2016)
Seriously, I see it was suggested before, did you have a look at CDC (Change Data Capture)? BOL: https://msdn.microsoft.com/en-us/library/cc645937.aspx. This technology provides -though only available in the enterprise and developer edition- out-of-the-box recording of any changes made, without the pain of impact on the user's transaction. In other words, you'll have a registration of any changes made (not just updates. Any changes: so inserts, deletes and updates) and you won't have to worry over the impact on the user's processes. The impact of the trigger solution is HUGE. And not only end user's performance experience is impacted but also multi user experience is seriously impacted. Read: your users will experience additional long waiting times for locks between processes seemingly not related and even deadlocks are going to be your fate. Auditing through triggers, and especially combined with writing into a generic key-name-value pairs table, is NOT the way to go for anything but very low volume tables. As an additional argument against using such a generic key-name-value table solution, try thinking up a workable solution for disclosing the recorded audit information. Better yet, mock up some example table and generate a significant volume of test data into it, just to demo how disclosing the recorded audit data will perform and how (un-)useful this data will be in that format... We've had it and moved away from it, as it's a nightmare.
Saving on diskspace is a terrible argument. You'll spend much more on development, testing and worst: you'll waste your end user's time and ultimately lose their confidence...
We DID have the huge impact of triggers that you mention, because of what I mentioned before. Our predecessors at my current company used the "generic trigger" method, which always requires that a copy of the INSERTED and DELETED tables be made for comparison. Like I said, it was taking 4 minutes to update just 4 columns on just 10,000 rows of a 137-column table.
After the rewrite we did, we don't have that problem and the auditing time is nearly imperceptible. We no longer suffer the delays or the customer complaints of slowness that you talked about and that we actually were experiencing.
We actually did look at CDC (disclaimer... we didn't set up a test) and decided not to go that route because, according to BOL, it "only" handles 1,000 transactions every 5 seconds, which we interpolated to be equal to only 200 transactions per second. That meant it would get a bit behind during the day and have to catch up at night... when all of the major batch processing was running and also requiring auditing, because the batch processes hit the same tables. They also decided that they wanted to keep the EAV form of the audit table that they had (and we have to keep such data seemingly forever), so there would have been a bunch of post-processing of the change data.
The triggers that we built did all that directly and nearly instantaneously.
Again, and admittedly, we didn't actually set up CDC to determine whether it was the right solution for us or not. We only went by what BOL said. Based on what you've stated, it may be worth a revisit but, to be sure, we're not having any performance problems with the trigger methods we built, either for OLTP conditions or for the massive batch processes that occur after hours.
--Jeff Moden
Change is inevitable... Change for the better is not.
May 6, 2016 at 6:52 am
R.P.Rozema (5/6/2016)
mock up some example table and generate a significant volume of test data into it
Some test data... and yeah, the generic audit is slow.
Personally I only use this for updates, and only on relatively small tables with few changes... tables for customers/products etc.
use [tempdb]
GO
IF OBJECT_ID('tempdb..TransData', 'U') IS NOT NULL
DROP TABLE tempdb..TransData;
IF OBJECT_ID('tempdb..Audit', 'U') IS NOT NULL
DROP TABLE tempdb..Audit;
/* create the audit table.
There only needs to be one of these in a database;
it will be updated from any table with the trigger below */
CREATE TABLE Audit (
Type CHAR(1),
TableName VARCHAR(128),
PK VARCHAR(1000),
ColumnName VARCHAR(128),
OldValue sql_variant,
NewValue sql_variant,
UpdateDate DATETIME,
UserName VARCHAR(128),
AppName VARCHAR(128)
)
GO
/*create some test data*/
SELECT TOP 1000000
TranID = IDENTITY(INT, 1, 1),
CustomerID = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
Trans_type = CHAR(Abs(Checksum(Newid())) % 2 + 65)
+ CHAR(Abs(Checksum(Newid())) % 3 + 65)
+ CHAR(Abs(Checksum(Newid())) % 7 + 65),
ProdID = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol1 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol2 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol3 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol4 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol5 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol6 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol7 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol8 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol9 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol10 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol11 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol12 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol13 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol14 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol15 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol16 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol17 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol18 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol19 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol20 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol21 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol22 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol23 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol24 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol25 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol26 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol27 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol28 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol29 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol30 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol31 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol32 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol33 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol34 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol35 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol36 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol37 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol38 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol39 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol40 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol41 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol42 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol43 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol44 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol45 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol46 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol47 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol48 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol49 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol50 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol51 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol52 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol53 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol54 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol55 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol56 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol57 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol58 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol59 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol60 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol61 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol62 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol63 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol64 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol65 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol66 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol67 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol68 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol69 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol70 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol71 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol72 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol73 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol74 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol75 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol76 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol77 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol78 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol79 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol80 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol81 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol82 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol83 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol84 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol85 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol86 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol87 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol88 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol89 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol90 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol91 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol92 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol93 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol94 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol95 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol96 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol97 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol98 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol99 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol100 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol101 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol102 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol103 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol104 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol105 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol106 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol107 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol108 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol109 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol110 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol111 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol112 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol113 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol114 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol115 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol116 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol117 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol118 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol119 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
WideCol120 = CAST(Abs(Checksum(Newid()) % 90 + 1) AS INT),
Sales_Amount= CAST(Rand(Checksum(Newid())) * 99 + 1 AS DECIMAL(5, 2)),
Trans_Date = Dateadd(dd, Abs(Checksum(Newid()))%Datediff(dd, '2011', '2012'), '2011')
INTO TransData
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2
CROSS JOIN sys.all_columns ac3
/*note that for this audit trigger to work there must be a primary key on the table*/
ALTER TABLE [dbo].[TransData] ADD CONSTRAINT [PK_TransData] PRIMARY KEY CLUSTERED ([TranID] ASC)
GO
/* create the trigger. This has to be created on every table you want to monitor */
CREATE TRIGGER [dbo].[transdata_Audit]
ON [dbo].[TransData]
FOR
--INSERT, ---uncomment if required
--DELETE, ---uncomment if required
UPDATE
AS
DECLARE @bit INT
, @field INT
, @maxfield INT
, @char INT
, @ColumnName VARCHAR(128)
, @TableName VARCHAR(128)
, @PKCols VARCHAR(1000)
, @sql VARCHAR(2000)
, @UpdateDate VARCHAR(21)
, @UserName VARCHAR(128)
, @Type CHAR(1)
, @PKSelect VARCHAR(1000)
, @PKField VARCHAR(1000)
, @AppName VARCHAR(128)
/*IMPORTANT You will need to change @TableName to match the table to be audited*/
SELECT @TableName = 'transdata'
/* date - user - application*/
SELECT @UserName = SYSTEM_USER
, @AppName = APP_NAME()
, @UpdateDate = CONVERT(VARCHAR(8), GETDATE(), 112) + ' ' + CONVERT(VARCHAR(12), GETDATE(), 114)
/* Action*/
IF EXISTS ( SELECT *
FROM inserted )
IF EXISTS ( SELECT *
FROM deleted )
SELECT @Type = 'U'
ELSE
SELECT @Type = 'I'
ELSE
SELECT @Type = 'D'
/* copy the inserted and deleted virtual tables into temp tables so the dynamic SQL below can reference them */
SELECT *
INTO #ins
FROM inserted
SELECT *
INTO #del
FROM deleted
/* Get primary key columns for full outer join*/
SELECT @PKCols = COALESCE(@PKCols + ' and', ' on') + ' i.'
+ c.COLUMN_NAME + ' = d.' + c.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk
, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
WHERE pk.TABLE_NAME = @TableName
AND CONSTRAINT_TYPE = 'PRIMARY KEY'
AND c.TABLE_NAME = pk.TABLE_NAME
AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
/* Get primary key columns ---jls*/
SELECT @PKField = COALESCE(@PKField + '+', '') + ''''
+ '''+convert(varchar(100), coalesce(i.' + COLUMN_NAME + ',d.' + COLUMN_NAME + '))+'''''
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk
, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
WHERE pk.TABLE_NAME = @TableName
AND CONSTRAINT_TYPE = 'PRIMARY KEY'
AND c.TABLE_NAME = pk.TABLE_NAME
AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
IF @PKCols IS NULL
BEGIN
RAISERROR ( 'no PK on table %s', 16, - 1, @TableName )
RETURN
END
SELECT @field = 0
, @maxfield = MAX(ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName
WHILE @field < @maxfield
BEGIN
SELECT @field = MIN(ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName
AND ORDINAL_POSITION > @field
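/* COLUMNS_UPDATED() returns one byte per 8 columns: column n maps to byte ((n-1)/8)+1
and to bit 2^((n-1)%8) within that byte, so the test below fires only for columns
actually named in the UPDATE (or for every column when the action is INSERT or DELETE) */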
SELECT @bit = ( @field - 1 ) % 8 + 1
SELECT @bit = POWER(2, @bit - 1)
SELECT @char = ( ( @field - 1 ) / 8 ) + 1
IF SUBSTRING(COLUMNS_UPDATED(), @char, 1) & @bit > 0
OR @Type IN ( 'I', 'D' )
BEGIN
SELECT @ColumnName = COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName
AND ORDINAL_POSITION = @field
SELECT @sql = '
insert Audit (
Type,
TableName,
PK,
ColumnName,
OldValue,
NewValue,
UpdateDate,
UserName,
Appname)
select ''' + @Type + ''',''' + @TableName + ''',' + @PKField + ','''
+ @ColumnName + '''' + ',d.' + @ColumnName + ',i.' + @ColumnName + ','''
+ @UpdateDate + '''' + ','''
+ @UserName + ''',''' + @Appname + ''''
+ ' from #ins i full outer join #del d' + @PKCols
+ ' where i.' + @ColumnName + ' <> d.' + @ColumnName
+ ' or (i.' + @ColumnName + ' is null and d.'
+ @ColumnName + ' is not null)' + ' or (i.'
+ @ColumnName + ' is not null and d.' + @ColumnName
+ ' is null)'
EXEC ( @sql )
END
END
/*trigger end*/
GO
/* do some updates*/
/*create temptable to store start and end times*/
IF OBJECT_ID('tempdb..#Results', 'U') IS NOT NULL
DROP TABLE tempdb..#Results;
CREATE TABLE #Results (
Comment VARCHAR(20)
, StartTime DATETIME
, EndTime DATETIME
, Duration int
)
GO
-- update 6 columns on 100 000 rows
--run 10 iterations of update with trigger on
--then disable trigger and repeat
ENABLE TRIGGER transdata_Audit ON TransData
GO
SET NOCOUNT ON
DECLARE @StartTime DATETIME = getdate()
DECLARE @EndTime DATETIME
UPDATE TransData
SET Trans_type = CHAR(Abs(Checksum(Newid())) % 2 + 65) + CHAR(Abs(Checksum(Newid())) % 3 + 65
) + CHAR(Abs(Checksum(Newid())) % 7 + 65)
, WideCol118 = WideCol118 + 20
, WideCol45 = WideCol45 - 20
, WideCol13 = WideCol13 + 50
, WideCol18 = WideCol18 + 100
, WideCol49 = WideCol49 - 100
WHERE (TranID % 10 = 0)
SELECT @EndTime = getdate()
INSERT #Results
SELECT 'with_Audit', @StartTime, @EndTime, datediff(MILLISECOND, @StartTime, @EndTime)
GO 10
DISABLE TRIGGER transdata_Audit ON TransData
GO
DECLARE @StartTime datetime = getdate()
DECLARE @EndTime datetime
UPDATE TransData
SET Trans_type = CHAR(Abs(Checksum(Newid())) % 2 + 65) + CHAR(Abs(Checksum(Newid())) % 3 + 65
) + CHAR(Abs(Checksum(Newid())) % 7 + 65)
, WideCol118 = WideCol118 + 20
, WideCol45 = WideCol45 - 20
, WideCol13 = WideCol13 + 50
, WideCol18 = WideCol18 + 100
, WideCol49 = WideCol49 - 100
WHERE (TranID % 10 = 0)
SELECT @EndTime = getdate()
INSERT #Results
SELECT 'without_Audit', @StartTime, @EndTime, datediff(MILLISECOND, @StartTime, @EndTime)
GO 10
SET NOCOUNT OFF
/*get average results */
SELECT comment
, AVG( duration) AS av_dur_ms
FROM #Results
GROUP BY comment
--SELECT *
--FROM #Results
--SELECT COUNT(*)
--FROM audit;
--TRUNCATE TABLE audit;
________________________________________________________________
you can lead a user to data....but you cannot make them think
and remember....every day is a school day
May 6, 2016 at 8:46 am
Jeff Moden (5/5/2016)
Wayne West (5/5/2016)
Good points. My biggest table has 66 columns (all the other active tables have a very small column count), but even when I was inserting 600 records through a paste operation, generating 40,000 audit records, I didn't notice a perceptible delay. I've gone back and forth on the concept of auditing inserts. Obviously if you have updates audited then the first change to a record will show the changed field's original value, so it's easy to argue that it's not needed. And I have a DateAdded field on my core records, so I know that. I do not know, at the core record level, who inserted the row. (Celko would be all over my butt for using rows and records like this!) Overall, I'm in favor of auditing the insert, but I'm sure I'd change my tune in a system where you're inserting a bajillion new rows a day.
Remember that auditing inserts for WHOLE ROW auditing instantly doubles the disk storage, backup, and restore requirements across all fronts unless the audit table is stored in a separate database. If it's BY COLUMN auditing, then you instantly increase all those same requirements by a factor of 6 or 8 because of the EAV nature of BY COLUMN audit tables. Either way, such tables frequently become larger than all the rest of the tables combined.
Jeff, are you saying that auditing individual columns will actually take up more space than auditing the whole row? If so, how is that possible? This is important because I know my boss wants to audit individual columns; I'm just not sure yet how he wants to do that.
Kindest Regards, Rod Connect with me on LinkedIn.
May 6, 2016 at 9:40 am
Rod at work (5/6/2016)
Jeff Moden (5/5/2016)
Wayne West (5/5/2016)
Good points. My biggest table has 66 columns (all the other active tables have a very small column count), but even when I was inserting 600 records through a paste operation, generating 40,000 audit records, I didn't notice a perceptible delay. I've gone back and forth on the concept of auditing inserts. Obviously if you have updates audited then the first change to a record will show the changed field's original value, so it's easy to argue that it's not needed. And I have a DateAdded field on my core records, so I know that. I do not know, at the core record level, who inserted the row. (Celko would be all over my butt for using rows and records like this!) Overall, I'm in favor of auditing the insert, but I'm sure I'd change my tune in a system where you're inserting a bajillion new rows a day.
Remember that auditing inserts for WHOLE ROW auditing instantly doubles the disk storage, backup, and restore requirements across all fronts unless the audit table is stored in a separate database. If it's BY COLUMN auditing, then you instantly increase all those same requirements by a factor of 6 or 8 because of the EAV nature of BY COLUMN audit tables. Either way, such tables frequently become larger than all the rest of the tables combined.
Jeff, are you saying that auditing individual columns will actually take up more space than auditing the whole row? If so, how is that possible? This is important because I know my boss wants to audit individual columns; I'm just not sure yet how he wants to do that.
No. Not for wide tables like the ones I have. It makes no sense to do whole-row auditing on a 137-column-wide table where only 4 columns might be updated on a regular basis.
Conversely, it makes no sense to do column-based auditing on a 10-column table. Whole row would take less room, because then you wouldn't have to preserve the audit ID, table name, column name, PK value, who made the change, what kind of action (Insert, Update, Delete) caused the audit, and when, for every "cell" that was updated. Auditing 4 updated columns that way on a 10-column table would cause an explosion of data compared to just auditing the whole 10 columns as a single row.
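To put rough shapes to the two options (the table and column names below are placeholders, not our actual schema): every changed "cell" in the by-column version carries its own set of audit columns, while the whole-row version repeats every data column whether it changed or not.
-- whole-row audit: one audit row per modified source row, every source column copied
CREATE TABLE dbo.Orders_RowAudit (
    AuditID     BIGINT IDENTITY PRIMARY KEY,
    AuditAction CHAR(1)      NOT NULL,   -- I/U/D
    AuditDate   DATETIME     NOT NULL,
    AuditUser   VARCHAR(128) NOT NULL,
    OrderNumber INT          NOT NULL,
    Col01       INT NULL,
    Col02       INT NULL    /* ... one column here per source column ... */
);
-- by-column (EAV) audit: one audit row per changed column, each with its own overhead
CREATE TABLE dbo.Audit_EAV (
    AuditID     BIGINT IDENTITY PRIMARY KEY,
    AuditAction CHAR(1)       NOT NULL,
    AuditDate   DATETIME      NOT NULL,
    AuditUser   VARCHAR(128)  NOT NULL,
    TableName   VARCHAR(128)  NOT NULL,
    ColumnName  VARCHAR(128)  NOT NULL,
    PK          VARCHAR(1000) NOT NULL,
    OldValue    sql_variant   NULL,
    NewValue    sql_variant   NULL
);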
--Jeff Moden
Change is inevitable... Change for the better is not.
May 6, 2016 at 9:42 am
Shifting gears, though: no matter the method of auditing, auditing INSERTs is a huge waste of time, effort, and disk space. Audits should ONLY record changes to the original data and DELETEs (if you actually allow DELETEs).
--Jeff Moden
Change is inevitable... Change for the better is not.
May 8, 2016 at 7:27 am
Rod, did you run the test script I posted?
1 Million row table
-- update 6 columns on 100 000 rows
--run 10 iterations of update with trigger on
--then disable trigger and repeat
________________________________________________________________
you can lead a user to data....but you cannot make them think
and remember....every day is a school day