August 24, 2011 at 10:00 am
david.holley (8/24/2011)
ajsnyman (8/24/2011)
david.holley (8/24/2011)
...and you're thinking that the table I posted was copied exactly from the DB? I fully understand; I was curious to see how you'd pull something like that off. It did occur to me that, due to some pre-existing triggers, it'd have to be done manually anyway. Fortunately, it's going to be implemented piecemeal, so adding them individually won't be that big of an issue. The tables will also be similarly named, as in ProjectHeaders and ProjectHeaders_Audit, so that hopefully any future DBA who makes a change to one will realize it also has to be made to the child table. Of course, that now has me wondering if you couldn't create a stored procedure through which table changes are made, which would allow both tables to always be kept in sync in terms of design. I could see it's an example :hehe:
Yes, it's a lot of work at first, but we can adapt it for any future project.
Replicating the structure would be cool, but scary: every time the tables change automatically, your .NET project suddenly stops working. It'll be DBA vs. Developer :w00t:
I am one and the same so as long as the voices in my head are preoccupied with which cigar I'm going to have tonight, I should be fine. Plus, most (if not all) of my data access is through views and stored procedures to begin with.
My thought was to create a stored procedure that handles changing the structure of the table so that it can change both at the same time, encapsulating the necessary commands. I used the basic concept on an Access project a year ago, where pass-through queries were executed via a Sub that created the pass-through if it didn't already exist.
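A minimal sketch of what such a wrapper proc could look like; `usp_AddColumnWithAudit` and the `_Audit` suffix convention are assumptions for illustration, not code from this thread:

```sql
-- Hypothetical sketch: funnel schema changes through one proc so the base
-- table and its _Audit shadow can never drift apart.
CREATE PROCEDURE dbo.usp_AddColumnWithAudit
    @TableName  sysname,
    @ColumnName sysname,
    @DataType   nvarchar(128)
AS
BEGIN
    DECLARE @sql nvarchar(max)

    -- Add the column to the base table
    SET @sql = N'ALTER TABLE dbo.' + QUOTENAME(@TableName)
             + N' ADD ' + QUOTENAME(@ColumnName) + N' ' + @DataType + N' NULL'
    EXEC (@sql)

    -- Mirror the same change in the audit table
    SET @sql = N'ALTER TABLE dbo.' + QUOTENAME(@TableName + N'_Audit')
             + N' ADD ' + QUOTENAME(@ColumnName) + N' ' + @DataType + N' NULL'
    EXEC (@sql)
END
```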
Hmmm, that would be very clever 🙂
Just a thought, based on my assumption, if you're using the s-procs to do your inserts and updates, I would rather use those to handle the auditing.
But, these days LINQ-SQL is awesome, and that's how I would handle DB related calls 😎
August 24, 2011 at 10:07 am
It just occurred to me that I'll have to use the stored proc to capture the before and after records and not use triggers at all. Since I have a family of sp's, I could just capture the relevant columns leaving the others empty. Each sp only updates one specific subset of the data, either in the parent table or a child record.
sp_ProjectSetup_Resources
sp_ProjectSetup_GeneralInformation
sp_ProjectSetup_Dates
I'm also seriously entertaining the idea of having the sp create a SELECT statement that returns only the columns that were modified, and then saving that in the ChangeLogHeaders table to make retrieving the information easier for the ASP.NET control. Since we're dealing with a wide variety of tables and columns, it'll eliminate the need to build the SELECT statement later.
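A sketch of that idea: each proc knows which columns it updates, so it can hard-code (or assemble) the list. The column names here are illustrative, and a ChangeLogHeaders table with a SelectStatement column is assumed:

```sql
-- Illustrative only: each proc stores a SELECT covering just the columns
-- it touches; the WHERE clause gets added later on the ASP.NET side.
DECLARE @ModifiedColumns varchar(200) = 'ProjectNumber, OfficeId'
DECLARE @SelectStatement varchar(500) =
        'SELECT ' + @ModifiedColumns + ' FROM ProjectHeaders_Audit'

INSERT INTO ChangeLogHeaders (SelectStatement)
VALUES (@SelectStatement)
```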
August 24, 2011 at 2:50 pm
Hi David
The code posted below may provide you with some ideas, and I would be very happy to hear back from you and others once you've reviewed it.
It uses an AUDIT table that is common to all tables in the database, which can then be queried for further analysis.
Each table that requires auditing has a trigger, and changes to the table are posted to the AUDIT table.
For my purposes, which are only to record updates (not inserts or deletes), this works well in my environment and performs acceptably on an ETL process where I use MERGE from staging tables.
Please note that if you use this for inserts/deletes, a row is created for each column; you may wish to amend the code accordingly.
As noted in the code, the trigger on the table that updates the audit table is not my own.
Hopefully I have included enough (brief) comments in the code for you to work out what is going on.
Kind regards.
--=====
USE TempDB
;
--===== conditionally drop the test tables
IF OBJECT_ID('TempDB.dbo.AUDIT','U') IS NOT NULL
DROP TABLE TempDB.dbo.AUDIT
IF OBJECT_ID('TempDB.dbo.PRODUCTS','U') IS NOT NULL
DROP TABLE TempDB.dbo.PRODUCTS
IF OBJECT_ID('TempDB.dbo.CUSTOMERS','U') IS NOT NULL
DROP TABLE TempDB.dbo.CUSTOMERS
---========= AUDIT table will only be updated by trigger on a table
---========= therefore to audit a table you need to create the trigger and alter trigger code accordingly as noted later.
CREATE TABLE [dbo].[AUDIT]
(
[Type] [NVARCHAR](1) NULL,
[TableName] [NVARCHAR](128) NULL,
[PK] [INT] NULL,
[FieldName] [NVARCHAR](128) NULL,
[OldValue] [NVARCHAR](255) NULL,
[NewValue] [NVARCHAR](255) NULL,
[UpdateDate] [SMALLDATETIME] NULL,
[UserName] [NVARCHAR](128) NULL,
[ID] [INT] IDENTITY(1, 1) NOT NULL,
PRIMARY KEY CLUSTERED ( [ID] ASC )
)
ON [PRIMARY]
CREATE TABLE [dbo].[Products]
(
[Product_ID] [INT] NOT NULL,
[Product_Name] [NVARCHAR](10) NULL,
[Product_Range] [NVARCHAR](2) NULL,
[Product_GroupID] [INT] NULL,
[Product_ManagerId] [INT] NULL,
PRIMARY KEY CLUSTERED ( [Product_ID] ASC )
)
ON [PRIMARY]
INSERT INTO [dbo].[Products]([Product_ID], [Product_Name], [Product_Range], [Product_GroupID], [Product_ManagerId])
SELECT 1, N'Widget', N'A', 1, 9 UNION ALL
SELECT 2, N'Super Widg', N'A', 1, 9 UNION ALL
SELECT 3, N'Wongle', N'C', 2, 5 UNION ALL
SELECT 4, N'Woofle Dus', N'B', 6, 2
CREATE TABLE [dbo].[Customers]
(
[Customer_ID] [INT] NOT NULL,
[Customer_LastName] [NVARCHAR](50) NULL,
[Customer_FirstName] [NVARCHAR](50) NULL,
[Customer_SalesArea] [NVARCHAR](3) NULL,
[Customer_CreditLimit] [DECIMAL](18, 0) NULL,
PRIMARY KEY CLUSTERED ( [Customer_ID] ASC )
)
ON [PRIMARY]
INSERT INTO [dbo].[Customers]([Customer_ID], [Customer_LastName], [Customer_FirstName], [Customer_SalesArea], [Customer_CreditLimit])
SELECT 1, N'Smith', N'John', N'A', 100000 UNION ALL
SELECT 2, N'Wilson', N'Paul', N'A', 50000 UNION ALL
SELECT 3, N'Rogers', N'Andrew', N'B', 70000 UNION ALL
SELECT 4, N'Jones', N'Brian', N'C', 30000
GO
---====== Create Triggers for each table that requires audit.
---====== NOTE TRIGGERS BUILT FOR UPDATES ONLY...IF SET FOR INSERT_DELETE THEN A ROW WILL BE INSERTED FOR EACH COLUMN ON EACH ROW
---====== The original trigger script below was discovered on the internet and slightly amended; unfortunately I no longer have the links,
---====== therefore please accept my apologies if you are the original author...please contact me via http://www.SQLSERVERCENTRAL.COM
---====== and I will be happy to update with correct acknowledgements.
CREATE TRIGGER [dbo].[PRODUCTS_Audit]
ON [dbo].[PRODUCTS]
FOR
---INSERT,
---DELETE,
UPDATE
AS
DECLARE @bit INT
, @field INT
, @maxfield INT
, @char INT
, @fieldname VARCHAR(128)
, @TableName VARCHAR(128)
, @PKCols VARCHAR(1000)
, @sql VARCHAR(2000)
, @UpdateDate VARCHAR(21)
, @UserName VARCHAR(128)
, @Type CHAR(1)
, @PKSelect VARCHAR(1000)
, @PKField VARCHAR(1000)
----=========You will need to change @TableName to match the table to be audited=========--------
SELECT @TableName = 'PRODUCTS'
-- date and user
SELECT @UserName = SYSTEM_USER
, @UpdateDate = CONVERT(VARCHAR(8), GETDATE(), 112) + ' ' + CONVERT(VARCHAR(12), GETDATE(), 114)
-- Action
IF EXISTS ( SELECT *
FROM inserted )
IF EXISTS ( SELECT *
FROM deleted )
SELECT @Type = 'U'
ELSE
SELECT @Type = 'I'
ELSE
SELECT @Type = 'D'
-- get list of columns
SELECT *
INTO #ins
FROM inserted
SELECT *
INTO #del
FROM deleted
-- Get primary key columns for full outer join
SELECT @PKCols = COALESCE(@PKCols + ' and', ' on') + ' i.'
+ c.COLUMN_NAME + ' = d.' + c.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk
, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
WHERE pk.TABLE_NAME = @TableName
AND CONSTRAINT_TYPE = 'PRIMARY KEY'
AND c.TABLE_NAME = pk.TABLE_NAME
AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
-- Get primary key select for insert
SELECT @PKSelect = COALESCE(@PKSelect + '+', '') + '''<' + COLUMN_NAME
+ '=''+convert(varchar(100),
coalesce(i.' + COLUMN_NAME + ',d.' + COLUMN_NAME + '))+''>'''
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk
, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
WHERE pk.TABLE_NAME = @TableName
AND CONSTRAINT_TYPE = 'PRIMARY KEY'
AND c.TABLE_NAME = pk.TABLE_NAME
AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
-- Get primary key columns
SELECT @PKField = COALESCE(@PKField + '+', '') + ''''
+ '''+convert(varchar(100),
coalesce(i.' + COLUMN_NAME + ',d.' + COLUMN_NAME + '))+'''''
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk
, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
WHERE pk.TABLE_NAME = @TableName
AND CONSTRAINT_TYPE = 'PRIMARY KEY'
AND c.TABLE_NAME = pk.TABLE_NAME
AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
IF @PKCols IS NULL
BEGIN
RAISERROR ( 'no PK on table %s', 16, - 1, @TableName )
RETURN
END
SELECT @field = 0
, @maxfield = MAX(ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName
WHILE @field < @maxfield
BEGIN
SELECT @field = MIN(ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName
AND ORDINAL_POSITION > @field
SELECT @bit = ( @field - 1 ) % 8 + 1
SELECT @bit = POWER(2, @bit - 1)
SELECT @char = ( ( @field - 1 ) / 8 ) + 1
IF SUBSTRING(COLUMNS_UPDATED(), @char, 1) & @bit > 0
OR @Type IN ( 'I', 'D' )
BEGIN
SELECT @fieldname = COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName
AND ORDINAL_POSITION = @field
SELECT @sql = '
insert Audit (Type,
TableName,
PK,
FieldName,
OldValue,
NewValue,
UpdateDate,
UserName)
select ''' + @Type + ''',''' + @TableName + ''',' + @PKField + ','''
+ @fieldname + '''' + ',convert(varchar(1000),d.'
+ @fieldname + ')' + ',convert(varchar(1000),i.'
+ @fieldname + ')' + ',''' + @UpdateDate + ''''
+ ',''' + @UserName + ''''
+ ' from #ins i full outer join #del d' + @PKCols
+ ' where i.' + @fieldname + ' <> d.' + @fieldname
+ ' or (i.' + @fieldname + ' is null and d.'
+ @fieldname + ' is not null)' + ' or (i.'
+ @fieldname + ' is not null and d.' + @fieldname
+ ' is null)'
EXEC ( @sql )
END
END
GO
CREATE TRIGGER [dbo].[CUSTOMERS_Audit]
ON [dbo].[CUSTOMERS]
FOR
---INSERT,
---DELETE,
UPDATE
AS
DECLARE @bit INT
, @field INT
, @maxfield INT
, @char INT
, @fieldname VARCHAR(128)
, @TableName VARCHAR(128)
, @PKCols VARCHAR(1000)
, @sql VARCHAR(2000)
, @UpdateDate VARCHAR(21)
, @UserName VARCHAR(128)
, @Type CHAR(1)
, @PKSelect VARCHAR(1000)
, @PKField VARCHAR(1000)
----=========You will need to change @TableName to match the table to be audited=========--------
SELECT @TableName = 'CUSTOMERS'
-- date and user
SELECT @UserName = SYSTEM_USER
, @UpdateDate = CONVERT(VARCHAR(8), GETDATE(), 112) + ' ' + CONVERT(VARCHAR(12), GETDATE(), 114)
-- Action
IF EXISTS ( SELECT *
FROM inserted )
IF EXISTS ( SELECT *
FROM deleted )
SELECT @Type = 'U'
ELSE
SELECT @Type = 'I'
ELSE
SELECT @Type = 'D'
-- get list of columns
SELECT *
INTO #ins
FROM inserted
SELECT *
INTO #del
FROM deleted
-- Get primary key columns for full outer join
SELECT @PKCols = COALESCE(@PKCols + ' and', ' on') + ' i.'
+ c.COLUMN_NAME + ' = d.' + c.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk
, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
WHERE pk.TABLE_NAME = @TableName
AND CONSTRAINT_TYPE = 'PRIMARY KEY'
AND c.TABLE_NAME = pk.TABLE_NAME
AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
-- Get primary key select for insert
SELECT @PKSelect = COALESCE(@PKSelect + '+', '') + '''<' + COLUMN_NAME
+ '=''+convert(varchar(100),
coalesce(i.' + COLUMN_NAME + ',d.' + COLUMN_NAME + '))+''>'''
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk
, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
WHERE pk.TABLE_NAME = @TableName
AND CONSTRAINT_TYPE = 'PRIMARY KEY'
AND c.TABLE_NAME = pk.TABLE_NAME
AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
-- Get primary key columns
SELECT @PKField = COALESCE(@PKField + '+', '') + ''''
+ '''+convert(varchar(100),
coalesce(i.' + COLUMN_NAME + ',d.' + COLUMN_NAME + '))+'''''
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk
, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
WHERE pk.TABLE_NAME = @TableName
AND CONSTRAINT_TYPE = 'PRIMARY KEY'
AND c.TABLE_NAME = pk.TABLE_NAME
AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
IF @PKCols IS NULL
BEGIN
RAISERROR ( 'no PK on table %s', 16, - 1, @TableName )
RETURN
END
SELECT @field = 0
, @maxfield = MAX(ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName
WHILE @field < @maxfield
BEGIN
SELECT @field = MIN(ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName
AND ORDINAL_POSITION > @field
SELECT @bit = ( @field - 1 ) % 8 + 1
SELECT @bit = POWER(2, @bit - 1)
SELECT @char = ( ( @field - 1 ) / 8 ) + 1
IF SUBSTRING(COLUMNS_UPDATED(), @char, 1) & @bit > 0
OR @Type IN ( 'I', 'D' )
BEGIN
SELECT @fieldname = COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName
AND ORDINAL_POSITION = @field
SELECT @sql = '
insert Audit (Type,
TableName,
PK,
FieldName,
OldValue,
NewValue,
UpdateDate,
UserName)
select ''' + @Type + ''',''' + @TableName + ''',' + @PKField + ','''
+ @fieldname + '''' + ',convert(varchar(1000),d.'
+ @fieldname + ')' + ',convert(varchar(1000),i.'
+ @fieldname + ')' + ',''' + @UpdateDate + ''''
+ ',''' + @UserName + ''''
+ ' from #ins i full outer join #del d' + @PKCols
+ ' where i.' + @fieldname + ' <> d.' + @fieldname
+ ' or (i.' + @fieldname + ' is null and d.'
+ @fieldname + ' is not null)' + ' or (i.'
+ @fieldname + ' is not null and d.' + @fieldname
+ ' is null)'
EXEC ( @sql )
END
END
GO
---==== DISPLAY ORIGINAL TABLE DATA
SELECT * FROM dbo.Customers
SELECT * FROM dbo.Products
---- now update some records
UPDATE Customers
SET Customer_LastName = 'Baggins'
WHERE (Customer_ID = 1)
UPDATE Customers
SET Customer_SalesArea = 'D'
WHERE (Customer_ID = 2)
UPDATE Customers
SET Customer_CreditLimit = 0
WHERE (Customer_ID = 2)
UPDATE Products
SET Product_Name = 'Widgets'
WHERE (Product_ID = 1)
UPDATE Products
SET Product_GroupID = 7
WHERE (Product_ID = 3)
UPDATE Products
SET Product_ManagerID = 1
WHERE (Product_ID = 3)
---===== DISPLAY AMENDED TABLE DATA
SELECT * FROM dbo.Customers
SELECT * FROM dbo.Products
----==== DISPLAY AUDIT DATA
SELECT * FROM AUDIT
---====== NOW ALTER CUSTOMER TABLE AND ADD DETAIL
ALTER TABLE dbo.Customers ADD
Customer_SalespersonID int NULL
GO
UPDATE Customers
SET Customer_SalespersonID = 101
WHERE (Customer_ID = 2)
----==== DISPLAY AUDIT DATA
SELECT * FROM AUDIT
________________________________________________________________
you can lead a user to data....but you cannot make them think
and remember....every day is a school day
August 25, 2011 at 6:30 am
Thanks, y'all, for the input; here's the general direction that I'm going. There are still some things to hammer out and fine-tune, such as denormalizing the _Audit tables. (The fact that the OfficeId was changed from 1110 to 1120 doesn't go a long way toward helping the user see that the Office was changed from Boston to Miami.)
--SelectStatement stores a SQL statement that retrieves only the changed values to help with comparing old/new
--values; the WHERE clause had to be omitted since I'm using Scope_Identity() as the batch number. When the
--record is retrieved, the WHERE clause will be automatically created on the ASP.NET side of things
--Id is used in the various _Audit tables to link specific records to the parent record in this table
--The object information is provided for research/tracking purposes if an issue arises
--Element is a conceptual element of the data that was changed such as 'General Information', 'Documentation', 'Staffing', 'Schedule', etc.
CREATE TABLE [dbo].[ProjectInformationChangeLog](
[Id] [int] IDENTITY(1000,1) NOT NULL,
[Action] [varchar](10) NOT NULL,
[Element] [varchar](25) NULL,
[WindowsUserId] [varchar](28) NULL,
[DateTimeStamp] [datetime] NULL,
[ASPObject] [varchar](128) NULL,
[SQLServerTable] [varchar](75) NULL,
[SQLServerStoredProcedure] [varchar](128) NULL,
[Comment] [varchar](255) NULL,
[SelectStatement] [varchar](500) NULL,
[ProjectNumber] [varchar](8) NOT NULL,
CONSTRAINT [PK_ChangeLogHeaders] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[StandardProjectDocuments](
[DocumentShortDescription] [varchar](10) NOT NULL,
[Quantity] [tinyint] NOT NULL,
[ProjectNumber] [varchar](8) NOT NULL,
CONSTRAINT [PK_StandardProjectDocuments] PRIMARY KEY CLUSTERED
(
[DocumentShortDescription] ASC,
[ProjectNumber] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
--ValueType is simple 'Old' or 'New'
--Batch number allows the related records to be retrieved and associated with the record in
--ProjectInformationChangeLog so that the front end will display a list of the changes, a user will then be able to click
--on a link to display the specifics for the changes that are of interest to them
CREATE TABLE [dbo].[StandardProjectDocuments_Audit](
[Id] [int] IDENTITY(1000,1) NOT NULL,
[ValueType] [char](3) NOT NULL,
[BatchNumber] [int] NOT NULL,
[DocumentShortDescription] [varchar](10) NOT NULL,
[Quantity] [tinyint] NOT NULL,
[ProjectNumber] [varchar](8) NOT NULL,
CONSTRAINT [PK_StandardProjectDocuments_Audit] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
--Handles All Inserts & Updates, DB is backend for an ASP.NET app, several sp's exist to update the various elements
--The INSERT that creates the ChangeLogRecord will be spun out to an external sp
--The idea of using triggers was nixed since everything is funneled through a sp. Deletes are handled by a separate SP (sp_ProjectSetup_StandardProjectDocument_Delete).
--Old / New values are captured by running an INSERT on the _Audit table that queries the record before and after the INSERT, UPDATE, DELETE
CREATE PROCEDURE [dbo].[sp_ProjectSetup_StandardProjectDocument]
(
@ProjectNumber varchar(8),
@DocumentShortDescription varchar(10),
@Quantity tinyint
)
AS
DECLARE @RecordCount tinyint
DECLARE @BatchNumber integer
BEGIN
SELECT @RecordCount = COUNT(DocumentShortDescription) FROM StandardProjectDocuments
WHERE ProjectNumber = @ProjectNumber and DocumentShortDescription = @DocumentShortDescription
IF @RecordCount = 0
BEGIN
--Create the change header
INSERT INTO
ProjectInformationChangeLog
(ProjectNumber, Action, Element, WindowsUserId, DateTimeStamp, ASPObject, SQLServerTable, SQLServerStoredProcedure, SelectStatement)
VALUES (@ProjectNumber, 'Insert', 'Standard Project Document', System_user, SYSDATETIME(), '', 'StandardProjectDocuments', 'sp_ProjectSetup_StandardProjectDocument', 'SELECT ProjectNumber, DocumentShortDescription, Quantity FROM StandardProjectDocuments_Audit')
SET @BatchNumber = SCOPE_IDENTITY()
INSERT INTO StandardProjectDocuments (ProjectNumber, DocumentShortDescription, Quantity)
VALUES (@ProjectNumber, @DocumentShortDescription, @Quantity)
INSERT INTO StandardProjectDocuments_Audit (ValueType,BatchNumber, ProjectNumber, DocumentShortDescription, Quantity)
VALUES ('New', @BatchNumber, @ProjectNumber, @DocumentShortDescription, @Quantity)
END
IF @RecordCount > 0
BEGIN
--Create the change header
INSERT INTO
ProjectInformationChangeLog
(ProjectNumber, Action, Element, WindowsUserId, DateTimeStamp, ASPObject, SQLServerTable, SQLServerStoredProcedure, SelectStatement)
VALUES (@ProjectNumber, 'Update', 'Standard Project Document', System_user, SYSDATETIME(), '', 'StandardProjectDocuments', 'sp_ProjectSetup_StandardProjectDocument', 'SELECT ProjectNumber, DocumentShortDescription, Quantity FROM StandardProjectDocuments_Audit')
SET @BatchNumber = SCOPE_IDENTITY()
INSERT INTO StandardProjectDocuments_Audit (ValueType,BatchNumber, ProjectNumber, DocumentShortDescription, Quantity)
SELECT 'Old', @BatchNumber, ProjectNumber, DocumentShortDescription, Quantity FROM StandardProjectDocuments
WHERE ProjectNumber = @ProjectNumber and DocumentShortDescription = @DocumentShortDescription
UPDATE StandardProjectDocuments SET Quantity = @Quantity
WHERE ProjectNumber = @ProjectNumber and DocumentShortDescription = @DocumentShortDescription
INSERT INTO StandardProjectDocuments_Audit (ValueType,BatchNumber, ProjectNumber, DocumentShortDescription, Quantity)
SELECT 'New', @BatchNumber, @ProjectNumber, @DocumentShortDescription, @Quantity FROM StandardProjectDocuments
WHERE ProjectNumber = @ProjectNumber and DocumentShortDescription = @DocumentShortDescription
END
RETURN @@Error
END
August 25, 2011 at 7:33 am
The one thing you seriously need to consider is how you are going to pull data to give a snapshot view at a point in time. Take a given record in your base table, make a bunch of changes so your audit table has a bunch of stuff, and then figure out how to rebuild what that record looked like on a specific date. This is where the catch-all audit table is a total pain in the a$$!!! You have to query your audit table for each and every column that is audited to build the history at a point in time. I am by no means telling you not to use this approach, but I want to make sure you consider this before you go very far down this path. Also, if you want to be able to "browse" the history of an entity, it can be tough to get the data correct. Consider the example above and put together the query to show the history for the last 6 months. Best of luck, and thanks for sharing your solution.
--edit spelling error 🙂
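To make the pain concrete, here is roughly what a point-in-time rebuild against the column-per-row AUDIT table from the earlier post might look like; the cutoff date and key value are made up:

```sql
-- Sketch: latest audited value per column for CUSTOMERS row PK = 1
-- as of a cutoff date. Columns never audited before the cutoff still
-- have to be back-filled from the live row, which is the hard part.
DECLARE @AsOf smalldatetime = '2011-06-01'

SELECT a.FieldName, a.NewValue
FROM dbo.AUDIT a
JOIN (SELECT FieldName, MAX(ID) AS MaxID
      FROM dbo.AUDIT
      WHERE TableName = 'CUSTOMERS'
        AND PK = 1
        AND UpdateDate <= @AsOf
      GROUP BY FieldName) latest
  ON latest.MaxID = a.ID
```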
_______________________________________________________________
Need help? Help us help you.
Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.
Need to split a string? Try Jeff Modens splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.
Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/
August 25, 2011 at 8:23 am
Sean Lange (8/25/2011)
The one thing you seriously need to consider is how you are going to pull data to give a snapshot view at a point in time. Take a given record in your base table and make a bunch of changes so your audit table has a bunch of stuff and then figure out how to rebuild what that record looked on a specific date. This is where the catch all audit table is total pain in the a$$!!! You have to query your audit table for each and every column that is audited to build the history at a point in time. I am by no means telling you not to use this approach but want to make you sure you consider this before you go very far down this path. Also if you want to be able to "browse" the history of a entity it can be tough to get the data correct. consider the example above and put together the query to show the history for the last 6 months. Best of luck and thanks for sharing your solution.--edit spelling error 🙂
We don't actually need to see a snapshot of what the record was at any particular point in time. The nature of the information is such that we're only concerned with the current information plus any actual changes: who changed what and when. Rebuilding the record isn't something that I see as a future need. Even then, it's the recent changes that matter. Changes made in the last couple of days are more important than those made six months ago, unless there are security concerns.
While the information is critical to the operation, it isn't sensitive the way, say, property tax records are.
I do appreciate the input, though. I'm a HUGE believer in designing things in such a way that they are flexible and allow for future development with a minimum of effort. <soapbox> I believe that the truest test of any design is the degree to which it can be used in ways that the original designers never envisioned, with little or no modification. </soapbox>
August 25, 2011 at 8:33 am
Sounds like you are on the right path for your needs, then. I just pointed it out to make sure it was something you had considered. The catch-all audit table will eat up a lot less disk space, to say the least.
August 25, 2011 at 11:50 am
Since the data is essentially a dead end and won't have any relationships to external tables, would you recommend going with varchar for the datatypes? Unicode characters won't be applicable. The parent tables do have various columns that are tinyint, smallint, int, datetime, etc., and do have relationships to other tables; however, the _Audit tables will eventually be based on views to capture the values in the external tables (knowing that 1110 was changed to 1120 is pointless unless you know that 1110 is the Boston office).
August 25, 2011 at 12:39 pm
I think varchar would be fine, with one possible exception: datetime. If you are only going to use this for reporting it might be OK as a varchar, but I cringe at how painful varchar data representing datetime can be. In reference to your earlier post, you may need to consider usage beyond the original intent. It may be OK to have a varchar field to hold the "data", especially if you have a datetime column to hold the time the change was made. If you decide on varchar you will easily be able to use one column to hold the representative data, but it could possibly come back to bite you down the road.
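If datetime values do end up in a varchar column, one way to limit the damage is to store them in a sortable ISO 8601 form; CONVERT style 126 is one option:

```sql
-- yyyy-mm-ddThh:mi:ss.mmm sorts correctly as plain text and
-- converts back to datetime without ambiguity.
SELECT CONVERT(varchar(23), GETDATE(), 126)
```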
August 25, 2011 at 1:50 pm
Sean Lange (8/25/2011)
I think varchar would be fine with one possible exception. DateTime. If you are only going to use this for reporting it might be ok as a varchar but i cringe at how painful varchar data representing datetime can be. In reference to your earlier post you may need to consider usage beyond the original intent. It may be ok to have a varchar field to hold the "data", especially if you have a datetime column to hold the datetime for when the change was made. If you decided an varchar you will be able to easily use one column to hold the representative data but it could possibly come back to bite down the road.
Sorry, I wasn't specific: varchar for the columns that represent the columns in the table being tracked. The _Audit tables would have all of the columns of the table being tracked, plus a few extra (change type, batch, etc.). My thought in capturing the related data was to avoid the need to create views on the _Audit tables to match it up when displaying the change. I have the luxury of being able to implement it on a limited basis, and so should be able to catch any unexpected issues.
August 25, 2011 at 2:12 pm
If you're using stored procedures to make the changes and you plan to do the auditing from in there too, you may want to read up on the OUTPUT clause. This gives you an easy means of doing a tee on your DML: while your INSERT, UPDATE, or DELETE statement modifies the data in the database, a copy of (parts of) the modified data is stored into another table, for example your audit table. You won't have to write separate SELECT statements to collect the data for the audit; you simply get an additional output from your INSERT, UPDATE, and DELETE statements.
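A minimal sketch of that, borrowing the table and variable names from the proc posted earlier; `inserted.*` would give the new values the same way `deleted.*` gives the old:

```sql
-- The UPDATE itself tees the old values into the audit table;
-- no separate SELECT is needed.
UPDATE dbo.StandardProjectDocuments
SET    Quantity = @Quantity
OUTPUT 'Old', @BatchNumber, deleted.ProjectNumber,
       deleted.DocumentShortDescription, deleted.Quantity
INTO   dbo.StandardProjectDocuments_Audit
       (ValueType, BatchNumber, ProjectNumber, DocumentShortDescription, Quantity)
WHERE  ProjectNumber = @ProjectNumber
  AND  DocumentShortDescription = @DocumentShortDescription
```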
August 26, 2011 at 9:04 am
FWIW, I've tried both approaches (sp-controlled audit vs. triggers) and found that the trigger approach had the least amount of care and feeding in the long run. Regardless of [what] changed the data (think: direct access to the table rather than via the app), the trigger will catch it, meaning you don't have to chase down every proc that manipulates the table's data. Also, when a column change is needed, the trigger code really wasn't that hard to tweak. You'll probably find yourself using 'template' code for the trigger script to accommodate the different data types of the columns. I wouldn't try automating the trigger implementation unless your table structure and the number of tables make manually coding the triggers impossible.
It's taught me to design the audit table wisely as well... <grin> (Shameless plug for the KISS method)
August 26, 2011 at 11:14 am
Your Name Here (8/26/2011)
FWIW, I've tried both approaches (sp-controlled audit vs. triggers) and found that the trigger approach had the least amount of care and feeding in the long run. Regardless of [what] changed the data (think: direct access to the table rather than via the app), the trigger will catch it. Meaning you don't have to chase down every proc that manipulates the table's data. Also, when a column change is needed, the trigger code really wasn't that hard to tweak. You'll probably find yourself using a 'template' code for the trigger script to accomodate the different data types of the columns. I wouldn't try automating the trigger implementation unless your table structure and the number of tables makes manually coding the triggers impossible.It's taught me to design the audit table wisely as well... <grin> (Shameless plug for the KISS method)
So then maybe split out the information concerning the objects that executed the change (ASPObject, SP, etc.) into a separate table that is updated via the sp, and use the ProjectInformationChangeLog strictly as a header table? Initially, I wanted to capture 'how' the data was updated in the event that issues cropped up.
September 1, 2011 at 11:31 am
In the original conceptualization, I was going to use the sp to create a record in the ChangeLog and then capture the values of the record before and after the change. I'm now leaning very much toward the old/new values being captured via triggers and the inserted/deleted recordsets.
Is there a means by which the record in the ChangeLog can be related to the specific changes captured by the triggers? I'd like to be able to show meta-data about the change (which sp was used, ASP.NET object that called it, etc.) and then the specific before/after records.
My thought is that there needs to be a value (or values) available at the global level that both the sp and the triggers have access to, since it isn't possible to pass values between them; almost like a transaction number that uniquely identifies the global event and any secondary events (the sp is called, which inserts a record into table [x]; a trigger on table [x] then does [y]; etc.)
I did find this which looks promising...
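One session-scoped mechanism that does exist for this (at least through SQL Server 2008 R2) is CONTEXT_INFO: the proc can stash the ChangeLog Id there, and any trigger fired in the same session can read it back. A sketch, with the byte padding handled defensively:

```sql
-- In the proc, right after inserting the ChangeLog header row:
DECLARE @BatchNumber int = CAST(SCOPE_IDENTITY() AS int)
DECLARE @ctx varbinary(128) = CAST(@BatchNumber AS varbinary(4))
SET CONTEXT_INFO @ctx

-- In the trigger, recover the batch number for the same session:
DECLARE @LinkedBatch int = CAST(SUBSTRING(CONTEXT_INFO(), 1, 4) AS int)
```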