January 17, 2018 at 1:17 pm
I have to support a database environment that features instead-of triggers on views (shudder!!). They work, but they're quite inefficient, and I cannot change the architecture at this point to eliminate them.
One of the triggers' purposes is to log data changes, and it does so column by column for each data table by inserting into a single log table. The original trigger had a separate insert statement into the log table for each column. Since there can be a multitude of columns, this results in a multitude of inserts. Each insert is guarded by an IF statement so that the log insert only executes if that column was modified (IF UPDATE([column name]) INSERT INTO....SELECT...).
I wanted to turn this into a single INSERT INTO, with the SELECT for each column combined by UNION ALL (INSERT INTO...SELECT....UNION ALL SELECT...., etc.). This would only be effective if I could use the UPDATE() function in the WHERE clause of each column's SELECT statement. While UPDATE() is clearly valid in conditional logic such as an IF statement, I'm not sure it's valid within a WHERE clause.
If only a few columns are updated, then multiple INSERT INTO statements aren't too bad, but if a majority of the columns are updated then I could have lots of INSERT INTO statements against the same log table. Combining them would gain some efficiency, but only if the UPDATE() function is valid within a WHERE clause. Any thoughts?
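To be concrete, this is the shape of statement I have in mind, assuming UPDATE() is allowed in a WHERE clause (the table, key, and log column names here are invented for the example):
INSERT INTO dbo.ChangeLog (KeyValue, ColumnName, OldValue, NewValue)
SELECT d.SomeKey, 'ColumnA', d.ColumnA, i.ColumnA
FROM Deleted d
INNER JOIN Inserted i ON i.SomeKey = d.SomeKey
WHERE UPDATE(ColumnA) -- only log this branch if ColumnA appeared in the SET list
UNION ALL
SELECT d.SomeKey, 'ColumnB', d.ColumnB, i.ColumnB
FROM Deleted d
INNER JOIN Inserted i ON i.SomeKey = d.SomeKey
WHERE UPDATE(ColumnB);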
LinkedIn: https://www.linkedin.com/in/sqlrv
Website: https://www.sqlrv.com
January 17, 2018 at 4:52 pm
Aaron N. Cutshall - Wednesday, January 17, 2018 1:17 PM
My thoughts:
1) Column-level auditing is just horrible.
2) Column-level auditing like this is extra horrible.
3) It should take you about 5.3 minutes to set up a test to see if you can do what you want to do.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
January 17, 2018 at 5:30 pm
TheSQLGuru - Wednesday, January 17, 2018 4:52 PM
I absolutely agree, but what alternative would you suggest for tables that necessarily (well, according to the people who designed them, and it's way too far down the road to change them) contain 148 columns, where we're required to audit all updates, which typically happen in sets of only 3 or 4 columns at a time? We have several such wide tables (usually just 1 row will fit on a page) with column-level triggers (thankfully, they're not "Instead Of" triggers). If you know of a way to avoid recording the whole bloody row when only 4 columns are updated, and still avoid column-level auditing, I'm genuinely all ears.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 17, 2018 at 8:12 pm
Just making statements of fact - nothing more, sadly for you.
I think you should spin it up in XML in the engine inside the trigger. I hear that is very efficient.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
January 18, 2018 at 5:23 am
TheSQLGuru - Wednesday, January 17, 2018 8:12 PM
Heh... or JSON. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
January 18, 2018 at 6:15 am
What about unpivoting the table? You might need to do some data type conversions, but here's some sample code.
CREATE TABLE [dbo].[SalesOrders]
(
[SalesOrderID] [int] NOT NULL,
[RevisionNumber] [tinyint] NOT NULL,
[OrderDate] [datetime] NOT NULL,
[DueDate] [datetime] NOT NULL,
[ShipDate] [datetime] NULL,
[Status] [tinyint] NOT NULL,
[OnlineOrderFlag] [bit] NOT NULL,
[SalesOrderNumber] [nvarchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[PurchaseOrderNumber] [nvarchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[AccountNumber] [nvarchar] (15) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[CustomerID] [int] NOT NULL,
[SalesPersonID] [int] NULL,
[TerritoryID] [int] NULL,
[BillToAddressID] [int] NOT NULL,
[ShipToAddressID] [int] NOT NULL,
[ShipMethodID] [int] NOT NULL,
[CreditCardID] [int] NULL,
[CreditCardApprovalCode] [varchar] (15) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[CurrencyRateID] [int] NULL,
[SubTotal] [money] NOT NULL,
[TaxAmt] [money] NOT NULL,
[Freight] [money] NOT NULL,
[TotalDue] [money] NOT NULL,
[Comment] [nvarchar] (128) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[rowguid] [uniqueidentifier] NOT NULL,
[ModifiedDate] [datetime] NOT NULL,
[JobTitle] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
)
-- NOTE: OldValue/NewValue mix data types here; in practice convert each value
-- to a common type (e.g. nvarchar) or the VALUES constructor will force
-- implicit conversions that can fail at run time.
INSERT INTO dbo.SomeAuditTable
SELECT c.ColumnName,
       c.OldValue,
       c.NewValue
FROM Inserted AS i
JOIN Deleted AS d ON i.SalesOrderID = d.SalesOrderID
CROSS APPLY (VALUES('BillToAddressID', d.BillToAddressID, i.BillToAddressID),
                   ('RevisionNumber', d.RevisionNumber, i.RevisionNumber),
                   ('OrderDate', d.OrderDate, i.OrderDate),
                   ('DueDate', d.DueDate, i.DueDate),
                   ('ShipDate', d.ShipDate, i.ShipDate),
                   ('Status', d.Status, i.Status),
                   ('OnlineOrderFlag', d.OnlineOrderFlag, i.OnlineOrderFlag),
                   ('SalesOrderNumber', d.SalesOrderNumber, i.SalesOrderNumber),
                   ('PurchaseOrderNumber', d.PurchaseOrderNumber, i.PurchaseOrderNumber),
                   ('AccountNumber', d.AccountNumber, i.AccountNumber)) c(ColumnName, OldValue, NewValue)
-- <> alone misses NULL-to-value and value-to-NULL changes
WHERE c.OldValue <> c.NewValue
   OR (c.OldValue IS NULL AND c.NewValue IS NOT NULL)
   OR (c.OldValue IS NOT NULL AND c.NewValue IS NULL);
Sorry if the code formatting is off; I'm not fighting with this buggy forum software.
Here's a reference to the unpivot technique used: An Alternative (Better?) Method to UNPIVOT (SQL Spackle) - SQLServerCentral
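For context, that SELECT would sit inside an AFTER UPDATE trigger on the table. A minimal sketch, with the trigger and audit table names made up, and with the old/new values converted to nvarchar so the mixed data types share a common type:
CREATE TRIGGER dbo.tr_SalesOrders_Audit ON dbo.SalesOrders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.SomeAuditTable (ColumnName, OldValue, NewValue)
    SELECT c.ColumnName, c.OldValue, c.NewValue
    FROM Inserted AS i
    JOIN Deleted AS d ON i.SalesOrderID = d.SalesOrderID
    CROSS APPLY (VALUES
        ('RevisionNumber', CONVERT(nvarchar(4000), d.RevisionNumber), CONVERT(nvarchar(4000), i.RevisionNumber)),
        ('OrderDate', CONVERT(nvarchar(4000), d.OrderDate, 121), CONVERT(nvarchar(4000), i.OrderDate, 121)),
        ('AccountNumber', d.AccountNumber, i.AccountNumber)
        ) c(ColumnName, OldValue, NewValue)
    -- same change test as above, including NULL transitions
    WHERE c.OldValue <> c.NewValue
       OR (c.OldValue IS NULL AND c.NewValue IS NOT NULL)
       OR (c.OldValue IS NOT NULL AND c.NewValue IS NULL);
END;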
January 18, 2018 at 6:30 am
Thanks for all the replies. I did find a solution that suited my needs. I still generate a separate SELECT statement for each column (I may look at the unpivot idea), but my concern was determining which columns were updated, regardless of which row, without relying on the UPDATE() function. Instead, I turned to the COLUMNS_UPDATED() function, and with some set-based logic I generated a table of records representing only those columns that were updated:
DECLARE @Updated TABLE (
    ColNbr INT,
    ColName VARCHAR(1024)
);

-- @MyTableName holds the name of the audited table.
-- COLUMNS_UPDATED() returns one byte per 8 columns, so locate this
-- column's byte and test its bit.
INSERT INTO @Updated (ColNbr, ColName)
SELECT column_id, name
FROM sys.columns
WHERE object_id = OBJECT_ID(@MyTableName)
  AND SUBSTRING(COLUMNS_UPDATED(), ((column_id - 1) / 8) + 1, 1) & POWER(2, (column_id - 1) % 8) <> 0;
With this, I could limit my SELECT statements by including:
INNER JOIN @Updated u
ON u.ColNbr = 51 -- Use the column_id here that you want to reference
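Put together, one column's branch of the insert ends up looking something like this (the log table, key column, and column number are just illustrative):
INSERT INTO dbo.ChangeLog (KeyValue, ColumnName, OldValue, NewValue)
SELECT d.SalesOrderID, u.ColName, d.ShipDate, i.ShipDate
FROM Deleted d
INNER JOIN Inserted i
   ON i.SalesOrderID = d.SalesOrderID
INNER JOIN @Updated u
   ON u.ColNbr = 51; -- ShipDate's column_id; the join produces no rows if that column wasn't updated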
This is not an elegant approach but, believe me, it's better than what was there before. Sometimes an iterative approach is better: ship a significant improvement quickly rather than spend a long time getting it absolutely perfect while the old code runs terribly the whole while.
LinkedIn: https://www.linkedin.com/in/sqlrv
Website: https://www.sqlrv.com
January 18, 2018 at 7:47 am
Aaron N. Cutshall - Thursday, January 18, 2018 6:30 AM: Instead, I turned to the COLUMNS_UPDATED() function, and with some set-based logic I generated a table of records representing only those columns that were updated
Ah, be careful, Aaron... I considered that when I reworked our audit triggers and quickly decided it was a form of "Death by SQL". There's no guarantee that someone won't drop a new column into the middle of the existing columns, nor any guarantee that they won't delete a column, and I'm not sure that column numbers are as sacred as some would believe. I did not, however, test it, so I could definitely be wrong here, but it's worth making sure.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 18, 2018 at 7:59 am
Jeff Moden - Thursday, January 18, 2018 7:47 AM
Thanks for the warning, Jeff. Fortunately, this trigger is built from metadata and gets regenerated each time there's a DDL change.
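A rough sketch of how such regeneration can be wired up, using a database-level DDL trigger (dbo.RebuildAuditTrigger is a stand-in name for the generator procedure):
CREATE TRIGGER ddl_RebuildAuditTriggers ON DATABASE
AFTER ALTER_TABLE
AS
BEGIN
    -- Pull the altered table's two-part name out of the event XML and
    -- hand it to the (hypothetical) trigger generator.
    DECLARE @TableName nvarchar(300) =
        EVENTDATA().value('(/EVENT_INSTANCE/SchemaName)[1]', 'sysname')
        + N'.' +
        EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname');

    EXEC dbo.RebuildAuditTrigger @TableName;
END;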
Also, a big thanks to Luis for the CROSS APPLY idea. I was able to take your example and rework it to suit my needs. It certainly is a much more elegant approach!
And Kevin, I have to agree with Jeff in that no matter how repugnant I find the entire use of triggers, sometimes they're the only thing that will get the job done. After all, isn't that the whole point of what we're doing?
LinkedIn: https://www.linkedin.com/in/sqlrv
Website: https://www.sqlrv.com
January 18, 2018 at 8:28 am
Aaron N. Cutshall - Thursday, January 18, 2018 7:59 AM: Thanks for the warning, Jeff. Fortunately, this trigger is built from metadata and gets regenerated each time there's a DDL change.
That's a good idea, Aaron. It's OK to use Columns_Updated as a driver for such things, but I've seen people store it as part of the audit instead of the actual column name. That makes a mess if you ever have to go back to see what happened.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 18, 2018 at 10:04 pm
Jeff Moden - Thursday, January 18, 2018 8:27 AM
Changing a trigger "SHOULD BE" part of the impact analysis done when changing any table columns.
HAHAHAHAHAH!!! I made myself laugh with that one! Does ANYONE do an impact analysis for schema changes??
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
January 18, 2018 at 10:41 pm
TheSQLGuru - Thursday, January 18, 2018 10:04 PM
Heh... you bet your sweet bippy! We do an impact analysis even for just adding a column. It's amazing how much poorly written legacy code can break from such a simple thing (of course, we found that out the hard way the first time). We also check for audit trigger breaks (I built code to automatically regenerate the audit triggers because of it), do a storage analysis, and do an outage analysis (time to deploy, including any defragging we may have to do because of page splits and the like), regardless of the size of the table. There's also a rollback plan in case the post-deployment sanity checks show a problem (they never do, because of all the testing we do before it gets to prod, but I still require the proverbial "Plan B", just in case). Finally, I do a "future" storage impact analysis so I can give the infrastructure folks plenty of time to buy new disks if there's a need.
--Jeff Moden
Change is inevitable... Change for the better is not.