Instead of Triggers - The good and the bad

  • Greetings - I have been asked by our development team about the potential use of INSTEAD OF triggers in conjunction with our current architecture, and I am hoping I can get some feedback on "the good and the bad" of using them, along with any supporting documentation you might have. Our utilization would be fairly limited, but I still want to ensure that we are not going to do damage to our overall performance. I will be doing some testing as well and will share that information as I come up with it.

    Thanks in advance.

    David

    @SQLTentmaker

    “He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot

  • I have never come across an instance where I found that INSTEAD OF triggers were necessary. Why are they proposing INSTEAD OF triggers?

    If you check out the discussion of this article (shameless plug for an article I wrote), there are not any concrete reasons for NOT using INSTEAD OF triggers, but most folks haven't run into a situation where they are needed.

  • [font="Verdana"]How best to put this...

    You might consider "database architecture" to be the parent of "database design". The architecture you choose will (to some degree) dictate how you then go about designing databases. So getting the architecture right can be vital.

    I'd need to understand that business needs that are driving even looking at "instead of triggers" at the architectural level. But my initial reaction is as follows.

    Firstly, I'm not sure any database architecture should go down to the level of implementation level. That's more database design.

    Secondly, architecture is concerned about thinking ahead and planning up-front. A trigger solution, on the other hand, is the complete opposite of this: a catch-all, "amubulance at the bottom of the cliff," last resort type of solution. It's not a good choice for an architecture.

    The only reason I have seen for triggers other than to encapsulate business logic is to handle data manipulation over views. I wouldn't recommend this. Instead, use stored procedures to do all of your data entry. But to re-iterate, this is database design, not database architecture.[/font]

  • Thanks Jack! I will check out the discussion. I had just read the article prior to posting. Great article by the way.

    Need is a big word that most people don't understand, and no, this is not a need - more of a convenience.

    Disclaimer: I did not create this architecture. It's not all bad but...

    With that being stated, we have views that sit on top of our base tables. Most of them are just straightforward views, but they allow our web app developers to go to one spot to find "objects" quickly. With that, they are looking to start using the views in an updatable fashion for inserts and updates. The problem comes with deletes: much of our data is actually retained rather than deleted, but it is logically deleted (i.e. a bit column that denotes whether the row is still valid). So they are proposing INSTEAD OF triggers in the case of deletes, to avoid actually deleting the data and instead process an update to our logical-delete bit column. Somehow this is supposed to make their work in the object layer a bit easier, but that is well outside my realm of understanding.
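
    For illustration, here is roughly the shape of what is being proposed (the table, view, and column names below are just placeholders I made up, not our actual schema):

        -- Hypothetical base table with a logical-delete flag.
        CREATE TABLE dbo.Customer
        (
            CustomerID   int IDENTITY(1,1) PRIMARY KEY,
            CustomerName varchar(100) NOT NULL,
            IsActive     bit NOT NULL DEFAULT (1)
        );
        GO

        -- The view the web developers would code against.
        CREATE VIEW dbo.vCustomer
        AS
        SELECT CustomerID, CustomerName, IsActive
        FROM dbo.Customer;
        GO

        -- The proposed INSTEAD OF trigger: a DELETE issued against the view
        -- is silently turned into an UPDATE of the logical-delete flag.
        CREATE TRIGGER dbo.trg_vCustomer_Delete ON dbo.vCustomer
        INSTEAD OF DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            UPDATE c
            SET    c.IsActive = 0
            FROM   dbo.Customer AS c
                   INNER JOIN deleted AS d ON d.CustomerID = c.CustomerID;
        END;
        GO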

    Hopefully this makes sense. We are just at the beginning of the discussion, as this was just thrown out as an idea/solution, and I am trying to educate myself so that if I do need to stand against such changes I can do so effectively. Can't really make the argument in favor of using it myself.

    David

    @SQLTentmaker

    “He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot

  • Bruce W Cassidy (2/3/2009)


    [font="Verdana"]How best to put this...

    You might consider "database architecture" to be the parent of "database design". The architecture you choose will (to some degree) dictate how you then go about designing databases. So getting the architecture right can be vital.

    I'd need to understand that business needs that are driving even looking at "instead of triggers" at the architectural level. But my initial reaction is as follows.

    Firstly, I'm not sure any database architecture should go down to the level of implementation level. That's more database design.

    Secondly, architecture is concerned about thinking ahead and planning up-front. A trigger solution, on the other hand, is the complete opposite of this: a catch-all, "amubulance at the bottom of the cliff," last resort type of solution. It's not a good choice for an architecture.

    The only reason I have seen for triggers other than to encapsulate business logic is to handle data manipulation over views. I wouldn't recommend this. Instead, use stored procedures to do all of your data entry. But to re-iterate, this is database design, not database architecture.[/font]

    Yeah, couldn't agree more. Not a fan of triggers in any way.

    With that being said though do you have any experience with instead of triggers and if so, what have you seen? Thanks in advance.

    David

    @SQLTentmaker

    “He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot

  • Thanks for the nice comment on the article.

    Well, based on what I have seen about INSTEAD OF triggers, using them to make views updatable seems to be the most common use.

    As I mention in the discussion I pointed to, and as Bruce mentions as well, ideally your data manipulation will be done via stored procedures, so any logic that might require an INSTEAD OF trigger would be included in the stored procedure.

    If your data is "marked" as deleted why do the developers need to issue a delete? Why can't they code the delete commands for those "objects" to do an update of the bit column?
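
    In other words, a gateway procedure along these lines (hypothetical names, reusing the placeholder table from the sketch earlier in the thread) rather than a trigger:

        -- The application calls this for a "delete"; the soft-delete logic
        -- lives in one obvious, easily tuned place.
        CREATE PROCEDURE dbo.usp_Customer_Delete
            @CustomerID int
        AS
        BEGIN
            SET NOCOUNT ON;

            UPDATE dbo.Customer
            SET    IsActive = 0
            WHERE  CustomerID = @CustomerID;
        END;
        GO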

  • Jack Corbett (2/3/2009)


    If your data is "marked" as deleted why do the developers need to issue a delete? Why can't they code the delete commands for those "objects" to do an update of the bit column?

    They can and they do now. They are looking to streamline their application architecture in the object layer, and part of that is using updatable views and INSTEAD OF triggers. Agreed, they could just write the update, but for some reason using delete is more streamlined in their coding process. Again, I don't necessarily understand it and I will be digging at this.

    My question is mainly: what are the costs that outweigh these "benefits" (note the quotes so that I don't get killed here :))? I don't want to try to stand against something based on semantics, but rather on performance facts. Know what I mean?

    Thanks gentlemen! Truly appreciate all the feedback so far.

    David

    @SQLTentmaker

    “He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot

  • DavidB (2/3/2009)


    Not a fan of triggers in any way.

    With that being said though do you have any experience with instead of triggers and if so, what have you seen? Thanks in advance.

    [font="Verdana"]Reaches into the "Past Horror Stories" mental folder...

    I've done a lot of work in Oracle, and in Oracle triggers are somewhat different to those in SQL Server. Firstly, there's a little more flexibility. Secondly, you need to use them to generate ID fields (no identity). Even so, I made very sparing use of them. It's almost impossible to maintain or tune code that is intertwined with your table design. The code needs to be a layer above the data, not embedded within it. (Yes, I have issues with object-oriented databases as well...)

    I worked not too long ago for an organisation that shall remain nameless. When I was employed, it was to be the database architect for the internal development team, working in conjunction with the enterprise architect and the software architect. Or at least, that's what I was told in the interviews.

    The reality was that the enterprise architect... wasn't an enterprise architect. In fact, he wasn't an architect at all. I would describe him as the "enterprise meddler". He insisted on dictating low-level elements of design for stuff that he knew nothing about (like databases), and yet he couldn't come up with the enterprise-level "big picture" of how everything fitted together and where we were going. So keep that in mind, because it explains the next bit.

    All business logic was encoded as triggers within the database. Seriously. All.

    He based this design decision (note, design decision, not architectural decision, so why was it being made by the architect?) on the fact that he once worked for a company that manufactured a small piece of software that made use of a database, and that's what they had done.

    We contacted said company, and they informed us that while they did make use of triggers in their database designs in the past, they discovered big issues with them and had since moved on to another design.

    So his whole design philosophy was fallacious. So of course, I asked him why he insisted on using triggers. What was the business issue he was trying to solve with them?

    His issue was basically one of safety. He wanted to guarantee that if anyone at all, in any fashion, modified the transaction tables, the business logic would fire.

    Now me, I would solve that issue with security. Database security is easy enough to set up so that no user connecting to the database would be able to modify the core transaction tables.
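
    By which I mean something as simple as this (role and schema names invented for the example): the application's role gets no direct access to the tables, only execute rights on the procedures.

        -- Application connections are members of this role only.
        CREATE ROLE app_writer;
        GO

        -- No direct DML against anything in the dbo schema...
        DENY INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_writer;
        GO

        -- ...all changes go through the sanctioned stored procedures.
        GRANT EXECUTE ON SCHEMA::dbo TO app_writer;
        GO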

    Instead, he insisted on the trigger design as his safety blanket. And as a result, he knee-capped the database performance. Because the triggers weren't written by terribly database-savvy people, all of the logic was row by row, and we had triggers calling stored procedures that generated more data that in turn fired more triggers that called stored procedures that generated more data that...

    You get the picture.

    So this "architecture" was basically insisting that a poor-performing solution was correct.

    I spent some time challenging the precepts of that architecture. I showed that:

  • A trigger isn't actually "guaranteed" to succeed (buggy code, deadlocks, timeouts, etc.), and unless the data manipulation was encapsulated in a transaction and rolled back, it was quite possible to have data manipulation happening without the corresponding business logic completing successfully. So his "guarantee" of the business logic completing wasn't a guarantee at all. I showed evidence of where data manipulation had occurred without the trigger logic completing, and yet the data was left in place.
  • Where an enclosing transaction was wrapped around the data manipulation, all subsequent database access (including reading from other tables) was done within the scope of that transaction, until the enclosing transaction was committed. That meant that any simple data manipulation resulted in crazy locks all over the place! And in fact, we ended up with massive contention due to locking on several core tables.

  • Some trigger/business logic chains were so extensive, they actually caused the originating program to roll back due to timeouts (timeouts were set to 1 minute). This left all sorts of data in all sorts of weird states. ("Relational integrity? Why do we need that?")
  • Evaluation of moving from row-based to set-based triggers showed a massive reduction in locking and contention (see the sketch just after this list). Evaluation of using front-end stored procedure logic (gateways to the tables), doing all of the logic there (if it absolutely had to be in the database), only creating transactions as necessary, and using database security to ensure that no direct modifications could be made to the tables showed an almost complete elimination of locking and contention.
  • I did my job and showed the impact of his "architecture." The result? "We can't use an alternative to the design because that's a (pause for effect) change. Change involves risk. Change is therefore not acceptable to the business."
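
    To give a feel for that row-based versus set-based comparison, here's an illustrative sketch (table, column, and trigger names are invented; both triggers are shown together purely for comparison):

        -- Row-by-row style (what we had): a cursor over the inserted
        -- pseudo-table, doing one piece of work per affected row.
        CREATE TRIGGER dbo.trg_Orders_RowByRow ON dbo.Orders
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @OrderID int;
            DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
                SELECT OrderID FROM inserted;
            OPEN cur;
            FETCH NEXT FROM cur INTO @OrderID;
            WHILE @@FETCH_STATUS = 0
            BEGIN
                INSERT INTO dbo.OrderAudit (OrderID, AuditDate)
                VALUES (@OrderID, GETDATE());
                FETCH NEXT FROM cur INTO @OrderID;
            END;
            CLOSE cur;
            DEALLOCATE cur;
        END;
        GO

        -- Set-based equivalent: the whole batch of affected rows is handled
        -- in a single statement, with far less time spent inside the trigger
        -- and far less locking.
        CREATE TRIGGER dbo.trg_Orders_SetBased ON dbo.Orders
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;

            INSERT INTO dbo.OrderAudit (OrderID, AuditDate)
            SELECT i.OrderID, GETDATE()
            FROM   inserted AS i;
        END;
        GO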

    I no longer work for that company.

  • DavidB (2/3/2009)


    They are looking to streamline their application architecture in the object layer, and part of that is using updatable views and INSTEAD OF triggers. Agreed, they could just write the update, but for some reason using delete is more streamlined in their coding process.

    [font="Verdana"]Get them to write gateway stored procedures, and do all data manipulation through the stored procedures. You will solve so many issues by doing that.

    Stay away from triggers. Back away, keep your hands in the air...

  • Bruce W Cassidy (2/3/2009)


    DavidB (2/3/2009)


    Not a fan of triggers in any way.

    With that being said though do you have any experience with instead of triggers and if so, what have you seen? Thanks in advance.

    [font="Verdana"]Reaches into the "Past Horror Stories" mental folder...

    I no longer work for that company.

    Your wisdom is showing! 😛 Thank you for sharing the "horror" story. I can understand the pain and I am hoping to avoid any trigger problems as well. Thanks for the additional ammunition. 😀

    David

    @SQLTentmaker

    “He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot

  • Just a quick update: I did do some testing and found that there is additional cost associated with the use of these, as expected. Some of it is a bit odd, but then again, it is a trigger. I will put together a more formal blog post and put the link in this thread when completed.

    Thanks all for the feedback and the stories of "encouragement". 🙂

    David

    @SQLTentmaker

    “He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot

  • I'll add to this...

    To me, "INSTEAD OF" triggers always feel like "INSTEAD OF DOING IT CORRECTLY" triggers. They are great for a stop-gap when you need to make a structural change but cannot change an application. So, you can continue to let the application do something the wrong way until it can be modified to properly handle the database schema.

    They should not be used to prop up lazy or incompetent programmers. Triggers, in general, should be used as a last resort. They are not inherently bad, but they add a subtle complexity to things that is often overlooked. They also "hide" in a database and easily get forgotten or missed later.

    If your developers are looking to add an abstraction level for the data layer of your application (often a good idea), you should be looking at either a business object layer in the application or something like the ADO.NET Entity Framework to handle this. Using a series of views and INSTEAD OF triggers is not a good long-term solution.

  • Here's a thread with an example of an INSTEAD OF trigger, although you could use an AFTER trigger in this situation as well.

    http://www.sqlservercentral.com/Forums/Topic649749-146-1.aspx

    Oh, and I'd enforce this in a Stored Procedure and not allow direct table access anyway.

  • Michael Earl (2/4/2009)


    I'll add to this...

    To me, "INSTEAD OF" triggers always feel like "INSTEAD OF DOING IT CORRECTLY" triggers. They are great for a stop-gap when you need to make a structural change but cannot change an application. So, you can continue to let the application do something the wrong way until it can be modified to properly handle the database schema.

    They should not be used to prop up lazy or incompetent programmers. Triggers, in general, should be used as a last resort. They are not inherently bad, but they add a subtle complexity to things that is often overlooked. They also "hide" in a database and easily get forgotten or missed later.

    If your developers are looking to add an abstraction level for the data layer of your application (often a good idea), you should be looking at either a business object layer in the application or something like the ADO.NET Entity Framework to handle this. Using a series of views and INSTEAD OF triggers is not a good long-term solution.

    Michael - Thanks for the feedback. I believe we have been able to kill this just by looking at some of the caveats of the design that was being considered, and at the additional processing required as a result of using the INSTEAD OF triggers. Indeed, this was being considered more out of laziness than anything else. Consistency was part of it, but that, as you mentioned, can be handled through an abstraction layer, which oddly we are using, albeit homegrown at this point.

    Jack - Thank you too for the link, etc!

    David

    @SQLTentmaker

    “He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot
