July 22, 2008 at 3:20 pm
Lots of good stuff there, Gus... I just don't think any of it needs to audit the original insert... 🙂
--Jeff Moden
Change is inevitable... Change for the better is not.
July 23, 2008 at 7:20 am
If you do just the "after-only" logging, to save space in the log for a table that gets far more updates than inserts, you still need the original data; without it, you can't tell what a column was changed from on its first update.
If you don't need to know what was changed, you don't need to log the insert.
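(For illustration, a minimal sketch of that "after-only" model, using hypothetical names in the style of the test code later in this thread: on update, only the new values from the INSERTED pseudo-table are logged, and inserts are not logged at all.)
create trigger dbo.SomeTable_LogAfterOnly on dbo.SomeTable
after update
as
-- log only the new ("after") values; plain inserts never hit the log
insert into dbo.SomeTable_Log (ID, Col1, Col2)
select id, col1, col2
from inserted
go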
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
July 23, 2008 at 7:28 am
GSquared (7/23/2008)
you need the original data, or you can't track what was changed in the first update on a column.
??? I'm not sure why you say that! We all know what a join is... 🙂
--Jeff Moden
Change is inevitable... Change for the better is not.
July 23, 2008 at 7:41 am
I don't follow you.
Here's the scenario I'm talking about:
Table1 has a logging trigger on it that follows the "after-only" model I wrote about, because its usual use is 10-20 updates per insert, and this results in a smaller log table.
You insert data into it. This is NOT logged.
I update the data in it. This is logged, but only what I changed it to, not what it was changed from.
Management needs to know what it was changed from and to, as part of a report.
How does a join get that information in this scenario?
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
July 23, 2008 at 7:48 am
I can see GSquared's point; I just don't see another way to explain it at the moment. I'll keep thinking about it, and if I come up with an idea, I'll post back.
😎
July 23, 2008 at 7:49 am
That scenario (like many) audits the wrong thing... you should only Audit the DELETED table... the current value is always in the original table.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 23, 2008 at 9:33 am
Jeff Moden (7/23/2008)
That scenario (like many) audits the wrong thing... you should only Audit the DELETED table... the current value is always in the original table.
Actually, Jeff, I think that this depends on what the purpose (goals and objectives) of the Audit table is in the first place. There are many cases where the relationship between the source table and the Audit table needs to be one-way only (or where that is the Best Practice). I can think of three off the top of my head:
1) Recovery: The Audit table is supposed to serve as a means to reconstruct a pre-existing table/database. The Audit table is stored in a different DB & disk or is remote; in the event of an unauditable failure (Truncate, DB loss, etc.), the Audit table can be used to reconstruct the table(s).
2) Security Investigations: The Audit table is supposed to facilitate security investigations without disrupting Production, and must be in a separate DB or server.
3) Data Warehousing: Not obvious, but "Auditing" can also be used as the delta feed for Type 2 DW tables. Obviously the Inserts would be needed here.
[font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
Proactive Performance Solutions, Inc. [/font][font="Verdana"] "Performance is our middle name."[/font]
July 23, 2008 at 10:49 am
We are creating the audit process for clients to track changes, which means we have a screen for them to display the audit data. In that case, the foreign keys (ProductID) will not make any sense to them unless we join the Product table and get the ProductName for that ProductID. But when we join those tables, what if the record has already been deleted? I don't know whether to instead join the Product table while writing into the audit table, so the name becomes part of the changed record (as xml); later we could just query the audit table and have all the information for that record in there. That means storing extra information, which may cost us space and performance, but I do not see any other method to achieve it.
We have about 150 tables for which auditing has to be done, so I am planning to write a generic function or stored procedure to write into the audit table and use it in all the triggers. That makes for easy maintenance, and I don't think it's going to affect performance much.
I have a previous value and an after value, which are xml data types. I am auditing insert, delete, and update. I have to audit the insert because I have to know the insert date and insert userid, and I cannot change the structure of the main tables to include insert date and userid. I also need to capture the exact record during insertion, as the record in the main table is not the exact record that was inserted; it might have been changed any number of times since.
For insert and delete, I am capturing the entire record, so it's easy: it's just 'select * from inserted (or deleted) for xml'. For update, I am capturing just the columns that changed; say ProductName and ProductNum have changed, then the xml node should have just those 2 columns. I have more than 15 columns in most of the tables. Is there a generic way to find out what columns were updated, instead of going by each column name with if update(columnname) or
isnull(i.columnname, d.columnname)?
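(One generic approach, sketched here with assumed object names: inside a trigger, COLUMNS_UPDATED() returns a varbinary bitmask with one bit per column ordinal, and decoding it against sys.columns lists the assigned columns without hard-coding a test per column.)
-- Sketch only: runs inside an AFTER UPDATE trigger; 'dbo.Products' is an assumed name.
-- COLUMNS_UPDATED() is a varbinary bitmask, one bit per column_id.
select c.name
from sys.columns c
where c.object_id = object_id('dbo.Products')
and substring(columns_updated(), (c.column_id - 1) / 8 + 1, 1)
& power(2, (c.column_id - 1) % 8) > 0
-- note: a set bit means the column was assigned in the UPDATE statement,
-- not necessarily that its value changed; compare inserted/deleted to be sure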
Thanks for your time. I appreciate it.
July 23, 2008 at 11:22 am
Jeff Moden (7/23/2008)
That scenario (like many) audits the wrong thing... you should only Audit the DELETED table... the current value is always in the original table.
Yes, that's one possible solution. But again, like RBarry said, it depends on what you want from the audit table.
I tend to build a lot of management reports on the audit data.
Let's say I need to run a report on how many orders were changed to "Holding" status last month.
In my system, logging either from-to, or just to, I just query the log table and I'm done. Because of the size of the table and the lack of indexes in it (for insert performance reasons), this may take a little while to run, but it's a simple select.
In your system, it's significantly more complex, and lower performance.
Here's a test I did. If you can improve the query for your method (deleted-only), please let me know. The table and initial data in it are from the test yesterday (this same thread).
-- Log table for the deleted-only model
create table dbo.LogTest2_Log3 (
	LogID int identity primary key,
	LogDate datetime not null default(getdate()),
	ID int not null,
	Col1 varchar(100),
	Col2 varchar(100))
go
-- Deleted-only trigger: capture the old values on update/delete;
-- inserts are not logged, since the current value is always in LogTest2 itself
create trigger dbo.LogTest2_L3 on dbo.LogTest2
after update, delete
as
insert into dbo.LogTest2_Log3 (ID, Col1, Col2)
select id, col1, col2
from deleted
go
set nocount on
declare @Start datetime
select @start = getdate()
while datediff(minute, @start, getdate()) < 11 -- repeated, semi-random updates
begin
update dbo.logtest2
set col1 = abs(checksum(newid()))%100
where id = abs(checksum(newid()))%1000
waitfor delay '00:00:00.1'
end
go
set statistics io on
set statistics time on
;with CTE (ID, LastChange) as -- Deleted-only logging query
(select l3.id, max(past.logdate)
	-- each l3 row says col1 was 5 up until l3.logdate; the latest earlier entry
	-- for the same id is therefore the moment the row was changed TO 5
	from dbo.logtest2_log3 l3
	left outer join dbo.logtest2_log3 past
		on l3.logdate > past.logdate
		and l3.id = past.id
	where l3.col1 = 5
	group by l3.logid, l3.id)
select id
from cte
where lastchange >= '7/23/08 12:25 pm'
	and lastchange < '7/23/08 12:26 pm'
/*
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 1 ms.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0,
read-ahead reads 0, lob logical reads 0, lob physical reads 0,
lob read-ahead reads 0.
Table 'LogTest2_Log3'. Scan count 2, logical reads 80, physical reads 0,
read-ahead reads 0, lob logical reads 0, lob physical reads 0,
lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 13 ms.
*/
select id -- To-From Logging query
from dbo.logtest2_log1
where col1_to = 5
and logdate >= '7/23/08 12:25 pm'
and logdate < '7/23/08 12:26 pm'
/*
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 1 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 1 ms.
Table 'LogTest2_Log1'. Scan count 1, logical reads 148, physical reads 0,
read-ahead reads 0, lob logical reads 0, lob physical reads 0,
lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 4 ms.
*/
As an addition to yesterday's test, the updates for this test added 0.297 Meg to this type of log, 0.414 Meg to the To-From log, and 0.289 Meg to the Inserted-Only log.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
July 23, 2008 at 12:24 pm
If you have serious auditing requirements, deleted records aren't allowed. You "mark" them as logically deleted (some flag), but you keep them around.
Worst case, you set up a table to hold deleted records, move the data over when a record is deleted (via a delete trigger), and use two queries for your audit reporting.
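(A minimal sketch of that "move on delete" idea, with hypothetical table names; the holding table mirrors the source columns plus a deletion timestamp.)
-- Sketch: copy deleted rows to a holding table so audit reports can still see them
create trigger dbo.SomeTable_CaptureDelete on dbo.SomeTable
after delete
as
insert into dbo.SomeTable_Deleted (ID, Col1, Col2, DeletedDate)
select id, col1, col2, getdate()
from deleted
go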
July 23, 2008 at 12:45 pm
Steve Jones - Editor (7/23/2008)
If you have serious auditing requirements, deleted records aren't allowed. You "mark" them as logically deleted (some flag), but you keep them around. Worst case, you set up a table to hold deleted records, move the data over when a record is deleted (via a delete trigger), and use two queries for your audit reporting.
We're not talking about deleting data. We're talking about the "deleted" table in triggers.
I do agree with you.
One thing that's always bothered me: I can't think of a good way to audit/block the truncate command. It won't fire an on-delete trigger, it doesn't log the individual rows in the transaction log (only the page deallocations), and there isn't an "instead of truncate" trigger. Anyone have any ideas on that one?
(Now there's an example of data that could easily be recovered from a log that records inserts, but not from one that doesn't.)
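(One workaround worth noting, sketched with assumed names: TRUNCATE TABLE is refused on any table referenced by a FOREIGN KEY constraint, so an empty "guard" table referencing the audited table blocks truncation outright, while DELETE still works and still fires the audit triggers.)
-- Sketch: block TRUNCATE on dbo.SomeTable (assumes a primary key or unique key on ID)
create table dbo.SomeTable_TruncateGuard (
	ID int,
	constraint FK_SomeTable_TruncateGuard foreign key (ID) references dbo.SomeTable (ID))
go
-- 'truncate table dbo.SomeTable' now fails with a foreign-key error;
-- 'delete from dbo.SomeTable' still works and still fires the delete trigger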
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
July 23, 2008 at 1:16 pm
GSquared (7/23/2008)
One thing that's always bothered me: I can't think of a good way to audit/block the truncate command. It won't fire an on-delete trigger, it doesn't log the individual rows in the transaction log (only the page deallocations), and there isn't an "instead of truncate" trigger. Anyone have any ideas on that one? (Now there's an example of data that could easily be recovered from a log that records inserts, but not from one that doesn't.)
Actually, I think that I did mention that... 🙂
But, isn't there a DDL trigger for Truncate?
[font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
Proactive Performance Solutions, Inc. [/font][font="Verdana"] "Performance is our middle name."[/font]
July 23, 2008 at 1:29 pm
Hi Steve,
Yep, that makes sense. So as long as we don't delete records from the main table, joining the audit table to the main table should give users the audit data they need to track changes.
Thanks for your reply. I appreciate it.
July 23, 2008 at 1:34 pm
Sorry, I was responding to mailsar, who mentioned deleting data and joining back to tables.
I think you can trap the DDL for truncate, but I'm not sure about the data. Perhaps a copy before the delete works there. That's an interesting one.
July 23, 2008 at 2:16 pm
I'm looking at the BOL data on DDL triggers, and I don't see Truncate Table as one of the actions. Am I overlooking it?
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon