Triggers to Protect Tracing Data?

  • Hi All,

    I know triggers receive bad press, and I follow the logic behind that. But at the risk of looking really silly, I was wondering whether they have any practical value for protecting auditing/logging data - for instance, triggers on UPDATE and DELETE that simply roll back the transaction, regardless, to ensure no changes are made at all to the tables. I realise that this falls short in the following ways: 1) security could be used to achieve the same thing; 2) it is still only as safe as the triggers themselves. I was wondering if there are additional reasons this is a bad idea that I have not yet thought of - the ideas of experts are certainly welcome.

    Many thanks for any ideas or alternatives,

    Steve

    Triggers have their purpose... I think the bad rap triggers get comes from people trying to do stuff that should never occur inside a trigger anyway.

    I've seen requests for opening an Excel document in a trigger, changing it, then emailing it out... then closing Excel.

    I understand the business need to make a document and email it, but it was hard to get that person to understand that it doesn't need to be done inside a trigger.

    Auditing data and preventing deletes... oh yeah, I agree a trigger is a good way to go.

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statement into that table to give the volunteers here representative data. with your description of the problem, we can provide a tested, verifiable solution to your question! asking the question the right way gets you a tested answer the fastest way possible!

  • [font="Verdana"]Oh, I love this question!

    At first look, triggers seem like the way to go. They can be used to guarantee that whenever you make a change to the table, a record of the change (when it was made, who made it, what was changed) is kept. Right?

    Or... not.

    Several issues come into play here. The first is the "user proxy" issue. Namely, in n-tier applications, the "user" who connects to the database may not actually be the user who is making the modifications. Often the connections are held by a middle tier (the application logic tier) and it will likely be using a shared pool of connections under some sort of application user.

    Now using a trigger you can use system_user to get the operating system login name. But all that tells you is the login name of who has the connection. So in an n-tier application, that instantly becomes valueless. All of your data changes will be made by the application! Can you tell who the person driving the application at that time was? No. Well, not in a trigger, anyway.
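
    For illustration only, here's the sort of minimal audit trigger I'm talking about (the table and column names are made up for the sketch). Note that SYSTEM_USER just captures whoever holds the connection, which in an n-tier setup will be the shared application login:

    -- Hypothetical sketch: a basic audit trigger recording who changed what and when.
    -- In an n-tier application, SYSTEM_USER returns the shared application login,
    -- not the person actually driving the application.
    CREATE TABLE dbo.Customer_Audit
    (
        AuditID    int IDENTITY(1,1) PRIMARY KEY,
        CustomerID int      NOT NULL,
        ChangedBy  sysname  NOT NULL,
        ChangedAt  datetime NOT NULL,
        Operation  char(1)  NOT NULL   -- 'U' = update, 'D' = delete
    );
    GO

    CREATE TRIGGER dbo.trg_Customer_Audit
    ON dbo.Customer
    AFTER UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        INSERT INTO dbo.Customer_Audit (CustomerID, ChangedBy, ChangedAt, Operation)
        SELECT d.CustomerID,
               SYSTEM_USER,   -- the connection's login, not the end user
               GETDATE(),
               CASE WHEN EXISTS (SELECT 1 FROM inserted) THEN 'U' ELSE 'D' END
        FROM deleted AS d;
    END
    GO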

    Secondly, using triggers encourages developers to write directly to tables. What happens if the table needs to be changed? Split apart into two tables? Merged with another? All of your subsequent code falls apart. From an ongoing maintenance point of view, this results in a fragile design.

    Thirdly, if you evaluate the performance of triggers, there is an overhead. They do not happen magically: they take time. They extend the lifetime of the transaction, in a non-obvious way.

    Finally, I was alarmed to learn (after evaluating triggers within an existing application) that it is possible to make a change to a table, have the subsequent trigger fail, but still leave the original data modification in place! This can happen for all sorts of reasons (buggy code, timeouts, or deadlocks, for example). That means you can make changes to the data without them being logged.

    That means if someone wanted to bypass your auditing, all they need to do is open one connection, take out an exclusive lock on your audit table, open another connection, make the changes to the table. And now you have unlogged data changes.

    So in fact, triggers are not a design choice I would make for auditing. I come back over and over again to recommending gateway procedures (i.e. a stored procedure that is used to make all of the changes to table data). Smart use of gateway procedures can solve all of the issues I mentioned, as well as make it easier to secure your database.
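
    To make the idea concrete, a gateway procedure can be as simple as this sketch (object names invented, error handling left out for brevity) - the point being that the audit row is written in the same place, and the same transaction, as the data change:

    -- Hypothetical sketch of a gateway procedure: all changes go through it,
    -- so the audit row and the data change live in the same transaction.
    CREATE PROCEDURE dbo.Customer_Update
        @CustomerID   int,
        @CustomerName nvarchar(100),
        @ChangedBy    nvarchar(128)   -- the real end user, passed down by the application
    AS
    BEGIN
        SET NOCOUNT ON;

        BEGIN TRANSACTION;

        UPDATE dbo.Customer
        SET    CustomerName = @CustomerName
        WHERE  CustomerID   = @CustomerID;

        INSERT INTO dbo.Customer_Audit (CustomerID, ChangedBy, ChangedAt, Operation)
        VALUES (@CustomerID, @ChangedBy, GETDATE(), 'U');

        COMMIT TRANSACTION;
    END
    GO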


  • Bruce W Cassidy (2/8/2009)


    Several issues come into play here. The first is the "user proxy" issue. Namely, in n-tier applications, the "user" who connects to the database may not actually be the user who is making the modifications. Often the connections are held by a middle tier (the application logic tier) and it will likely be using a shared pool of connections under some sort of application user.

    Now using a trigger you can use system_user to get the operating system login name. But all that tells you is the login name of who has the connection. So in an n-tier application, that instantly becomes valueless. All of your data changes will be made by the application! Can you tell who the person driving the application at that time was? No. Well, not in a trigger, anyway.

    Not easily ... but it can be done via a workaround using context_info.
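
    Roughly, the workaround looks like this (a sketch only, with invented names): the application stamps the real user onto the connection, and the trigger reads it back:

    -- Hypothetical sketch of the context_info workaround.
    -- Application layer, after taking a connection from the pool:
    DECLARE @ctx varbinary(128);
    SET @ctx = CAST('jane.doe' AS varbinary(128));
    SET CONTEXT_INFO @ctx;

    -- Inside the audit trigger, recover the value (stripping the zero-byte padding):
    DECLARE @realUser varchar(128);
    SET @realUser = REPLACE(CAST(CONTEXT_INFO() AS varchar(128)), CHAR(0), '');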

    Secondly, using triggers encourages developers to write directly to tables. What happens if the table needs to be changed? Split apart into two tables? Merged with another? All of your subsequent code falls apart. From an ongoing maintenance point of view, this results in a fragile design.

    I certainly agree that using stored procs is the best way to provide access to (and secure) a database, but I don't think that this argument can be directed against triggers.

    Finally, I was alarmed to learn (after evaluating triggers within an existing application) that it is possible to make a change to a table, have the subsequent trigger fail, but still leave the original data modification in place! This can happen for all sorts of reasons (buggy code, timeouts, or deadlocks, for example). That means you can make changes to the data without them being logged.

    That means if someone wanted to bypass your auditing, all they need to do is open one connection, take out an exclusive lock on your audit table, open another connection, make the changes to the table. And now you have unlogged data changes.

    Can you provide scripts etc. to show this? Triggers operate with xact_abort on by default, so any error should roll back the transaction, even if that transaction was started outside of the trigger. If a trigger was unable to obtain a lock to update an audit table, it should fail. Can someone work around this if they want to? Yes, using table variables and, I think, xact_abort off in SQL 2005/2008 - but that is a conscious decision, and not a general trigger problem. In fact it's probably used mostly to allow an audit record to be created even if the transaction fails.
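
    For what it's worth, the table variable trick I'm referring to looks roughly like this (names invented): table variables are not undone by ROLLBACK, so a trigger can keep hold of the audit data even while rolling the change back:

    -- Hypothetical sketch: preserve audit data across a rollback inside a trigger.
    CREATE TRIGGER dbo.trg_Customer_BlockDelete
    ON dbo.Customer
    AFTER DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @audit TABLE (CustomerID int, AttemptedBy sysname, AttemptedAt datetime);

        INSERT INTO @audit (CustomerID, AttemptedBy, AttemptedAt)
        SELECT CustomerID, SYSTEM_USER, GETDATE()
        FROM deleted;

        ROLLBACK TRANSACTION;   -- undo the delete (and anything else in the transaction)

        -- the table variable still has its rows after the rollback
        INSERT INTO dbo.DeleteAttempt_Audit (CustomerID, AttemptedBy, AttemptedAt)
        SELECT CustomerID, AttemptedBy, AttemptedAt
        FROM @audit;
    END
    GO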

  • matt stockham (2/8/2009)


    Not easily ... but it can be done via a workaround using context_info.

    [font="Verdana"]Agreed. Done it, in fact. A design that requires tricky work-arounds isn't my preferred design.[/font]

    matt stockham (2/8/2009)


    I certainly agree that using stored procs is the best way to provide access to (and secure) a database, but I don't think that this argument can be directed against triggers.

    [font="Verdana"]The issue is that the developers are working with the tables directly. There is no interface. So when the design of the database needs to change, you suddenly have to create different tables and present views and use instead of triggers on the views so that all of the code that relied on the old table structure doesn't break.

    This isn't an issue with triggers per se, but an issue with allowing direct access to tables. But if you use gateway procedures to solve this issue by creating an interface, you don't need triggers.

    matt stockham (2/8/2009)


    Can you provide scripts etc to show this?

    [font="Verdana"]Nope. It was a complicated scenario to set up, and I'm a bit busy to spend the time on it just to prove a point.

    I did the work on this at a company several years ago that had a database system relying on triggers for auditing, and it was against SQL Server 2000. So things may have changed since then (in fact, I rather hope they have). I showed them that around 7% of their critical transaction data was missing audit details, which could only be happening because the trigger was failing but the original insert still happened.

    I am quite willing to be proved wrong! So if you can show me that triggers are guaranteed to execute, and if a trigger fails the original transaction that started the trigger will be rolled back, then I will bow before your superior knowledge. :D

  • Here's a situation where I expected a trigger to roll back because I raised an error.

    http://www.sqlservercentral.com/Forums/Topic622424-338-1.aspx

    I've also seen triggers still commit even though a Level 14 error was raised by the system.

    Besides that, if you are doing an n-tier application, you are most likely handling logins/security yourself. My app, for example, logs every change that passes through its data layer to an audit table anyway... but triggers can help capture changes that do not go through your n-tier app, like ad hoc updates or direct calls to stored procedures. Both audits have their place, depending on your level of paranoia for tracking changes.

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statement into that table to give the volunteers here representative data. with your description of the problem, we can provide a tested, verifiable solution to your question! asking the question the right way gets you a tested answer the fastest way possible!

  • Lowell (2/8/2009)


    [...]triggers can help capture changes that do not go through your n-tiered app, like ad hoc updates, or direct calls to stored proc. Both audits have their place, depending on your level of paranoia for tracking changes.

    [font="Verdana"]I'd rather use security to ensure ad-hoc data changes don't happen at all.

    Having said that, there are some awesome new auditing capabilities in the log reader functionality of SQL 2008. I haven't had a chance to play with them yet though. :(

  • I don't want to derail the topic, but with regard to triggers not rolling back if raiserror is used, BOL (http://msdn.microsoft.com/en-us/library/ms178592.aspx) states

    When RAISERROR is run with a severity of 11 or higher in a TRY block, it transfers control to the associated CATCH block. The error is returned to the caller if RAISERROR is run:

    Outside the scope of any TRY block.

    With a severity of 10 or lower in a TRY block.

    With a severity of 20 or higher that terminates the database connection.

    In other words, unless raiserror is terminating the connection, it isn't actually a transaction error - processing continues. It can still pass a message to the calling process. A constraint error, divide-by-zero error, etc. would roll back the transaction.

    I have only verified this for 2005 so I can't speak for 2000 - quite possibly there are issues there, and try/catch did not exist until 2005. My guess would be that any transaction issues of this kind were either a misunderstanding on the part of the developer or a real issue that was fixed via service pack.

    I am quite willing to be proved wrong! So if you can show me that triggers are guaranteed to execute, and if a trigger fails the original transaction that started the trigger will be rolled back, then I will bow before your superior knowledge.

    In theory the above is true - I cannot prove it, but I have not seen any evidence to refute it. It only takes one case to falsify it, so I throw down the gauntlet! 🙂
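
    If anyone wants to pick the gauntlet up, a rough two-connection harness along these lines (building on the kind of audit trigger sketched earlier, with invented names) ought to show whether a change can survive while its audit row goes missing:

    -- Hypothetical repro sketch for the "locked audit table" scenario.
    -- Connection 1: take and hold an exclusive lock on the audit table.
    BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.Customer_Audit WITH (TABLOCKX, HOLDLOCK);

    -- Connection 2: make a change with a short lock timeout, so the trigger's
    -- audit insert hits error 1222 instead of waiting forever.
    SET LOCK_TIMEOUT 2000;
    UPDATE dbo.Customer SET CustomerName = 'test' WHERE CustomerID = 1;

    -- After rolling back connection 1, check whether the update stuck around
    -- even though no audit row was written:
    SELECT (SELECT COUNT(*) FROM dbo.Customer WHERE CustomerName = 'test') AS BaseRows,
           (SELECT COUNT(*) FROM dbo.Customer_Audit)                       AS AuditRows;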

  • [font="Verdana"]Thanks Matt.

    That rings a bell. As you said, in SQL Server 2000, there was no try/catch. You could sort of handle errors by checking @@error after every statement.

    The issue we had was around nested transactions. If you embed a rollback inside the trigger, it will roll back ALL transactions (not just the innermost). So you had to use a savepoint to get around it.

    That meant that to correctly handle errors in triggers, we created a gateway procedure that had the correct rollback-to-savepoint code, so that we could have nested transactions, roll back only the innermost transaction, and allow the calling application to retry the failed part without losing everything.

    So we ended up with a gateway procedure anyway, and I lobbied for just doing away with the code for triggers and putting it all into the procedure. It simplified the execution and error handling, avoided the overhead of triggers, and made the transaction handling more transparent.
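
    A rough sketch of that savepoint pattern, using 2005-style TRY/CATCH rather than the @@ERROR checks we had to use back then (object names invented):

    -- Hypothetical sketch: a gateway procedure that only rolls back its own work
    -- when called inside an existing transaction, via a savepoint.
    CREATE PROCEDURE dbo.Customer_Delete
        @CustomerID int,
        @ChangedBy  nvarchar(128)   -- the real end user, passed down by the application
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @startedTransaction bit;
        SET @startedTransaction = 0;

        IF @@TRANCOUNT = 0
        BEGIN
            BEGIN TRANSACTION;
            SET @startedTransaction = 1;
        END
        ELSE
            SAVE TRANSACTION CustomerDelete;   -- savepoint inside the caller's transaction

        BEGIN TRY
            DELETE FROM dbo.Customer WHERE CustomerID = @CustomerID;

            INSERT INTO dbo.Customer_Audit (CustomerID, ChangedBy, ChangedAt, Operation)
            VALUES (@CustomerID, @ChangedBy, GETDATE(), 'D');

            IF @startedTransaction = 1
                COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @startedTransaction = 1
                ROLLBACK TRANSACTION;                 -- we own the whole transaction
            ELSE IF XACT_STATE() = 1
                ROLLBACK TRANSACTION CustomerDelete;  -- undo only this procedure's work

            -- re-raise so the caller can decide whether to retry
            DECLARE @msg nvarchar(2048);
            SET @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);
        END CATCH
    END
    GO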

    From memory, the issue with trigger overhead is this:

    You take all of your data, and insert it into the table. Then within the trigger, you read it back out of the table. Yes, it's likely to be all cached, so not an issue, right?

    Except all of the triggers they had also had business logic encoded in them. Which meant they read from other (related) tables. Which meant the triggers were actually generating substantial read I/Os, and worse, all of the reads were within the context of the transaction so they were also generating substantial locks!

    In the situation where you are using triggers solely for auditing, that shouldn't be an issue and the overhead would be negligible. But you still have the issue of correctly handling trigger failures.

  • Hi All,

    Thanks heaps for all the information - it's fascinating looking at all the internals of the situation.

    One thing though - I am still wondering a little about whether my original suggestion was possible, using something like:

    ALTER TRIGGER [Logging].[MasterLogTrigger]
    ON [Logging].[MasterLog]
    AFTER DELETE, UPDATE
    AS
    BEGIN
        ROLLBACK TRANSACTION;
    END

    The point of this is purely to stop any updates/deletes being made to the audit tables.

    Regards,

    Steve

  • [font="Verdana"]I'd still want to do that through declaritive security. But... give it a go! Let us know the results. :D[/font]

  • scraze (2/9/2009)


    One thing though - I am still wondering a little about whether my original suggestion was possible, using something like:

    ALTER TRIGGER [Logging].[MasterLogTrigger]
    ON [Logging].[MasterLog]
    AFTER DELETE, UPDATE
    AS
    BEGIN
        ROLLBACK TRANSACTION;
    END

    The point of this is purely to stop any updates/deletes being made to the audit tables.

    It will work, but as Bruce said, it's not the best solution. One major problem is that the trigger won't fire if someone truncates the table (although TRUNCATE TABLE requires ALTER permission on the table, and truncate can be prevented if required).
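
    One simple way to take truncate off the table entirely (a sketch, assuming MasterLog has a single-column key called MasterLogID of type int): a table referenced by a foreign key cannot be truncated, so a small dummy table referencing it blocks TRUNCATE TABLE outright:

    -- Hypothetical sketch: TRUNCATE TABLE fails against a table that is
    -- referenced by a FOREIGN KEY constraint.
    CREATE TABLE [Logging].[MasterLogTruncateGuard]
    (
        MasterLogID int NOT NULL
            CONSTRAINT FK_TruncateGuard_MasterLog
            FOREIGN KEY REFERENCES [Logging].[MasterLog] (MasterLogID)
    );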
