June 9, 2010 at 9:06 am
Great article, thank you.
This will all work well when the delete is accomplished via the proc, but what happens when the delete is ad hoc from Enterprise Manager or other? Will the trigger fail or will it pull data from an existing CONTEXT_INFO and put incorrect values in the audit table?
Just my thoughts,
Steve
June 9, 2010 at 9:09 am
christopher.cormier (6/9/2010)
This may be a fairly easy question, but aside from the business requirement, are there any performance benefits of putting the audit trail in a trigger? I'm fairly new to SQL, but if the only way for users to delete a record is to use the mentioned stored procedure, I think I would leave the audit trail piece in the procedure itself. Are there any performance/security benefits of using triggers for audit trails?
Thanks!
We use the exact same method for auditing of deletions. Triggers are used to audit the row deletions of related (i.e., child) tables via a Foreign Key (FK) cascade delete. A very common situation. Especially in a hierarchical tree structure.
June 9, 2010 at 9:10 am
Well, if CONTEXT_INFO is not set then NULL will be entered into the audit, which might be a giveaway that a DBA/admin has been messing with prod data, provided triggers are used.
On the other hand, if the proc is doing all the auditing directly, then a NULL could mean anything.
June 9, 2010 at 9:16 am
sadair (6/9/2010)
Great article, thank you. This will all work well when the delete is accomplished via the proc, but what happens when the delete is ad hoc from Enterprise Manager or other? Will the trigger fail or will it pull data from an existing CONTEXT_INFO and put incorrect values in the audit table?
Just my thoughts,
Steve
Depending upon how the code in the trigger is written as it relates to CONTEXT_INFO, it will either fail or succeed. In our situation, we have code in the trigger to validate that the value in CONTEXT_INFO is correct.
As for ad hoc row deletions via SSMS, we don't do it against our production systems. When the situation is really needed, we use a T-SQL script that properly sets CONTEXT_INFO before calling the stored procedure or performs the actual row deletion. The audit trail is then correct as the information will record that someone other than a customer user performed the deletion.
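For illustration, a stripped-down version of such a script might look something like this (the procedure name, XML layout, and values are placeholders, not our actual code):
-- Hypothetical ad hoc delete script; object names and the XML layout are examples only
DECLARE @Ctx VARBINARY(128)
SET @Ctx = CAST('<Audit><User>DBA - JSmith</User><Reason>Ticket 4711 cleanup</Reason></Audit>' AS VARBINARY(128))
SET CONTEXT_INFO @Ctx                           -- visible to any trigger fired on this session
EXEC dbo.usp_DeleteCustomer @CustomerID = 42    -- or the equivalent ad hoc DELETE statement
DECLARE @Reset VARBINARY(128)
SET @Reset = 0x0
SET CONTEXT_INFO @Reset                         -- overwrite it again when finished
The important part is that CONTEXT_INFO is set on the same session before anything fires the trigger.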
June 9, 2010 at 9:19 am
Mauve (6/9/2010)
We use the exact same method for auditing of deletions. Triggers are used to audit the row deletions of related (i.e., child) tables via a Foreign Key (FK) cascade delete. A very common situation. Especially in a hierarchical tree structure.
So I guess in an ideal situation there may be several layers of auditing both from the main table and linked tables. Would it make sense to add a conditional to the trigger such that if nothing was added in the CONTEXT_INFO you would grab SYSTEM_USER? That way if the information was blank from running a query directly you would still be able to trace down who deleted the record(s).
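Something along these lines is what I have in mind, just as a rough sketch (the trigger, table, and audit column names are made up):
-- Hypothetical trigger; falls back to SYSTEM_USER when CONTEXT_INFO was never set
CREATE TRIGGER trg_Customer_AuditDelete ON dbo.Customer
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @Who VARCHAR(128)
    SET @Who = CAST(CONTEXT_INFO() AS VARCHAR(128))     -- whatever the caller stored, if anything
    SET @Who = NULLIF(REPLACE(@Who, CHAR(0), ''), '')   -- strip the 0x00 padding; empty becomes NULL
    INSERT INTO dbo.Customer_Audit (CustomerID, DeletedBy, DeletedOn)
    SELECT d.CustomerID,
           COALESCE(@Who, SYSTEM_USER),                 -- fall back to the login when nothing was set
           GETDATE()
    FROM deleted d
END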
June 9, 2010 at 9:23 am
providing you keep the number series separate,
and can easily distinguish between them,
and provided loads of people don't log in using the same account,
and that these are likely to be true next week, month, year, ...
June 9, 2010 at 9:42 am
Why not just insert the row into the audit table first, directly from the SP, and then delete from the actual table, since you have all the needed info within the SP itself? Using the trigger and CONTEXT_INFO is complicating the code for no good reason that I can see, based on the info in the article.
I'm a big fan of the KIS (keep it simple) principle. If you can avoid triggers in the first place, absolutely avoid them.
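For example, a bare-bones version of what I mean (object names are invented for the example):
-- Hypothetical procedure; audit row written first, then the delete, in one transaction
CREATE PROCEDURE dbo.usp_DeleteOrder
    @OrderID   INT,
    @DeletedBy VARCHAR(50),
    @Reason    VARCHAR(200)
AS
BEGIN
    SET NOCOUNT ON
    BEGIN TRANSACTION
    INSERT INTO dbo.Order_Audit (OrderID, DeletedBy, Reason, DeletedOn)
    SELECT OrderID, @DeletedBy, @Reason, GETDATE()
    FROM dbo.Orders
    WHERE OrderID = @OrderID
    DELETE FROM dbo.Orders
    WHERE OrderID = @OrderID
    COMMIT TRANSACTION
END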
June 9, 2010 at 9:55 am
minimyme (6/9/2010)
I'm a big fan of the KIS (keep it simple) principle. If you can avoid triggers in the first place, absolutely avoid them.
Auditing is normally performed in triggers
1) for security purposes so the action is audited however the action is performed and (pretty much) whoever it is performed by and
2) because the auditing logic is decoupled from the deletion logic.
Dave.
June 9, 2010 at 10:19 am
Hi Mahesh,
CONTEXT_INFO is unique per session. So when multiple users are working on the product, the system creates a separate session (and therefore a separate CONTEXT_INFO value) for each connection, so there is no issue.
Thanks,
Tejas
SQLYoga.com
Tejas Shah
June 9, 2010 at 10:26 am
Why not just insert the row into the audit table first, directly from the SP, and then delete from the actual table, since you have all the needed info within the SP itself? Using the trigger and CONTEXT_INFO is complicating the code for no good reason that I can see, based on the info in the article.
I'm a big fan of the KIS (keep it simple) principle. If you can avoid triggers in the first place, absolutely avoid them.
Tejas:
In the application, it might happen that we allow the user to delete a record from multiple pages, or sometimes a record will be deleted based on a business condition. In any case, we need to make sure that we have audit data: who deleted the record and for what reason.
To make sure we have that audit data, the trigger is the one place where we can write code to capture it whenever any information in that table is changed.
Thanks,
Tejas
SQLYoga.com
Tejas Shah
June 9, 2010 at 10:53 am
Interesting article, thanks. I'm not sure I will end up using it, but I will keep it in the back of my mind for when an issue might come up.
June 9, 2010 at 11:47 am
john_wong_ca (6/9/2010)
That's a good idea. This will work for most cases, but not for replication. If a table is published in a merge publication, then while the replication agent is modifying the data, the agent's connection uses CONTEXT_INFO as part of the identification of the merge agent. In that case, the context info seen by the trigger will not be convertible to XML. :-)
absolutely right :w00t:
a solution that will work for replication is to add username and reason fields to the table, populate them with your delete proc, and then select them from DELETED in the trigger.
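a rough sketch of that approach, with made-up table and column names (the first part lives in the delete proc, the second inside the AFTER DELETE trigger):
-- Hypothetical schema: DeletedBy / DeleteReason columns added to dbo.Customer for this purpose
-- Inside the delete proc: stamp the row first, then delete it
UPDATE dbo.Customer
SET DeletedBy = @DeletedBy, DeleteReason = @Reason
WHERE CustomerID = @CustomerID
DELETE FROM dbo.Customer
WHERE CustomerID = @CustomerID
-- Inside the AFTER DELETE trigger: read the stamped values back from the DELETED pseudo-table
INSERT INTO dbo.Customer_Audit (CustomerID, DeletedBy, DeleteReason, DeletedOn)
SELECT CustomerID, DeletedBy, DeleteReason, GETDATE()
FROM deleted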
on the same topic, here's a cool slice I wrote to figure out which field changed to activate the trigger, in case you don't REALLY want to fire triggers generating history when a certain set of fields is updated, as in the case of the darned replication id... (intended to be inserted into the trigger whose action you want to modify based on the field(s) updated)
BEGIN TRANSACTION
declare @NameToCheck varchar(8)
set @NameToCheck = 'val3' --this is the column name to check for singleton update, set to ModifiedDate,
--if we need to check any combination of values, just expand this routine to compensate
declare @checksum int --holds the int value of two raised to (the ordinal position of this column minus one)
declare @ChangeSum int --holds the int value of the COLUMNS_UPDATED varbinary
--------------------------------------------------------------------------------------------------------------------------
--this is the guts of it, as you can see it is only really one line
--generate the checksum
select
@checksum = power(2,colorder-1) --colorder index is 1 based, switch to 0 based
from syscolumns where
id = (select id --this is the object id of the table to which this trigger is attached
      from sysobjects where
      name = (select object_name(parent_obj) --the name of the table to which this trigger is attached
              from sysobjects where xtype = 'tr' --the trigger type label
              and name = object_name(@@PROCID))) --the name of this trigger; note the closing parens must sit outside the comments or the query will not parse
AND name = @NameToCheck --this is the column we are watching
--if more than one field needs checked, just do again and add the check sums together... or do some other clever thing...
---------------------------------------------------------------------------------------------------------------------------
--recover the change sum, the value of the bit mask
set @ChangeSum = cast(COLUMNS_UPDATED() as int)
--create meta label for the history record
declare @newmeta varchar(200)
set @newmeta = '@CheckSum=' + cast(@CheckSum as varchar(8)) + ' '
set @newmeta = @newmeta + '@ChangeSum=' + cast(@ChangeSum as varchar(8))
--here we check if our pattern of change occurred
if(@CheckSum = @ChangeSum)
begin
set @newmeta = @newmeta + ' update only in ' + @NameToCheck + ' where we watched'
goto createHistory
end
--otherwise
set @newmeta = @newmeta + ' some other update'
createHistory:
INSERT INTO jonathan1H
( val1, val2, val3, modDate, meta )
SELECT val1, val2, val3, getdate(), @newmeta
FROM deleted
COMMIT TRANSACTION
June 9, 2010 at 8:20 pm
In light of the other posts, this is just to clarify a comment you made in your article. You stated, "the first character is removed and then converted into an XML document." The subsequent code line was:
SELECT @XML = CAST(REPLACE(@VarXML,CHAR(0),'') AS XML)
However, the CHAR(0) does not replace the first character, but defines an ASCII 000 character, sometimes referred to as a NULL (not associated with a SQL NULL value) or end-of-document character.
Thus, this line of code is replacing ASCII 0 characters with an empty string (and those characters are probably located at the end of the string, not at the beginning).
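A quick standalone snippet (not from the article) shows where those CHAR(0) characters actually sit; depending on the version and on how the value is read back, the string recovered from CONTEXT_INFO is typically right-padded with them:
-- Standalone illustration; run in any scratch session
DECLARE @Ctx VARBINARY(128), @VarXML VARCHAR(128), @XML XML
SET @Ctx = CAST('<User>JSmith</User>' AS VARBINARY(128))
SET CONTEXT_INFO @Ctx
SET @VarXML = CAST(CONTEXT_INFO() AS VARCHAR(128))
SELECT DATALENGTH(@VarXML) AS LengthWithPadding          -- any extra length here is trailing CHAR(0) bytes
SELECT @XML = CAST(REPLACE(@VarXML, CHAR(0), '') AS XML)
SELECT @XML                                              -- <User>JSmith</User>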
June 11, 2010 at 2:19 pm
Hello,
I think the example is not the best one, as once you have a stored proc that deletes the record, it could take care of the auditing part as well.
Adding auditing to a DB where DML commands are not wrapped in stored procs is trickier.
Áron-Hungary