March 16, 2015 at 10:31 pm
Comments posted to this topic are about the item The Right and Wrong of T-SQL DML TRIGGERs (SQL Spackle)
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
March 17, 2015 at 6:47 am
Nice article, Dan. I understand the pain you mention with RBAR triggers all too well. We have an OLTP database that uses seven of them to trigger EDI messaging. There is even code in the triggers that disables/enables the other triggers to change the order in which they fire.
March 17, 2015 at 7:51 am
Good article, however this point appears not to be true:
• Normally if you execute a standalone (“atomic”) INSERT, UPDATE, DELETE or MERGE statement you don’t really need to put it into a transaction (you can). However if any of those statements causes a trigger to fire, it can no longer be considered atomic and you should put it into a transaction, and use the proper error handling and rollback mechanism in the event that an error occurs during the action of the trigger.
All triggers are included in the implicit transaction for the DML operation, so they are still atomic and can be considered as such. There is no need for an external transaction in the case you describe.
The use of triggers is transparent to the consumer: you can't and shouldn't know in advance whether updating a table is going to fire 0 triggers or 100 triggers. Your transaction strategy as a consumer should be the same in either case.
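To illustrate the point, here is a minimal sketch (hypothetical table and trigger names, not from the article) showing that a trigger runs inside the same implicit transaction as the statement that fired it, so a ROLLBACK in the trigger undoes the INSERT as well:

CREATE TABLE dbo.Demo (ID INT NOT NULL);
GO
CREATE TRIGGER trDemo ON dbo.Demo
AFTER INSERT
AS
BEGIN
    -- Rolling back here also rolls back the INSERT that fired the trigger,
    -- because both execute in the same implicit transaction.
    ROLLBACK TRANSACTION;
END;
GO
INSERT INTO dbo.Demo (ID) VALUES (1);  -- fails with error 3609: "The transaction ended in the trigger."
GO
SELECT COUNT(*) FROM dbo.Demo;         -- returns 0: the row was never committed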
March 17, 2015 at 9:28 am
Do people really write code where they insert data like that, with the status and the code not part of the INSERT operation, so that you have to update them afterward in a trigger or a job?
March 17, 2015 at 10:13 am
Very good article. I would like to point out my preference for INSTEAD OF triggers on views. The view could be as simple as SELECT * FROM the base table. I consider a table the master of all data, and a poorly written trigger on a table can do a lot of damage. While a poorly written INSTEAD OF trigger on a view is just as bad, it makes it easier to correct data in the table without worrying about triggering other calculations, as the trigger is on the view, not on the table.
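A minimal sketch of that pattern (hypothetical names, not from the original post): the base table stays trigger-free, and all DML flows through a pass-through view with an INSTEAD OF trigger:

CREATE TABLE dbo.Customer (CustomerID INT NOT NULL PRIMARY KEY, Name VARCHAR(100) NOT NULL);
GO
CREATE VIEW dbo.vCustomer
AS
SELECT CustomerID, Name FROM dbo.Customer;
GO
CREATE TRIGGER trCustomer_Ins ON dbo.vCustomer
INSTEAD OF INSERT
AS
BEGIN
    -- Derived values, auditing, etc. happen here, on the view.
    -- Direct corrections against dbo.Customer bypass this logic entirely.
    INSERT INTO dbo.Customer (CustomerID, Name)
    SELECT CustomerID, Name
    FROM inserted;
END;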
March 17, 2015 at 4:23 pm
alen teplitsky (3/17/2015)
Do people really write code where they insert data like that, with the status and the code not part of the INSERT operation, so that you have to update them afterward in a trigger or a job?
Yes. There are many such cases. For Type 2 and "pure" Type 6 slowly changing dimensions, it can be the "Bread'n'Butter" of the trigger to do such things (for example).
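For example, a minimal sketch (hypothetical dimension and column names) of a trigger doing Type 2 versioning, where an update expires the current row and inserts a new version rather than changing it in place:

CREATE TABLE dbo.DimCustomer (
    CustomerKey INT IDENTITY(1,1) PRIMARY KEY,
    CustomerID  INT NOT NULL,
    Name        VARCHAR(100) NOT NULL,
    EffectiveDt DATETIME NOT NULL DEFAULT GETDATE(),
    ExpiryDt    DATETIME NULL  -- NULL = current version
);
GO
CREATE TRIGGER trDimCustomer_Upd ON dbo.DimCustomer
INSTEAD OF UPDATE
AS
BEGIN
    -- Expire the versions being changed...
    UPDATE d
    SET ExpiryDt = GETDATE()
    FROM dbo.DimCustomer d
    JOIN inserted i ON i.CustomerKey = d.CustomerKey;

    -- ...and insert the new values as the new current versions.
    INSERT INTO dbo.DimCustomer (CustomerID, Name)
    SELECT i.CustomerID, i.Name
    FROM inserted;
END;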
--Jeff Moden
Change is inevitable... Change for the better is not.
March 17, 2015 at 5:45 pm
PHYData DBA (3/17/2015)
Nice article, Dan. I understand the pain you mention with RBAR triggers all too well. We have an OLTP database that uses seven of them to trigger EDI messaging. There is even code in the triggers that disables/enables the other triggers to change the order in which they fire.
I told Dan you said that and he said thanks! 😛
March 17, 2015 at 5:49 pm
brett 41274 (3/17/2015)
Good article, however this point appears not to be true:
• Normally if you execute a standalone (“atomic”) INSERT, UPDATE, DELETE or MERGE statement you don’t really need to put it into a transaction (you can). However if any of those statements causes a trigger to fire, it can no longer be considered atomic and you should put it into a transaction, and use the proper error handling and rollback mechanism in the event that an error occurs during the action of the trigger.
All triggers are included in the implicit transaction for the DML operation, so they are still atomic and can be considered as such. There is no need for an external transaction in the case you describe.
The use of triggers is transparent to the consumer: you can't and shouldn't know in advance whether updating a table is going to fire 0 triggers or 100 triggers. Your transaction strategy as a consumer should be the same in either case.
The quoted article does seem to imply that you are correct. Still, I might prefer to err on the safe side and include it in a transaction with error handling anyway (can't hurt).
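Something like this minimal sketch, for instance (hypothetical object names; THROW requires SQL Server 2012 or later):

BEGIN TRY
    BEGIN TRANSACTION;

    -- If any trigger fired by this statement raises an error,
    -- control jumps to the CATCH block and everything is undone.
    UPDATE dbo.Shipment
    SET Last_Status = 'POD'
    WHERE Shipment_ID = 12345;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;  -- re-raise the original error to the caller
END CATCH;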
March 17, 2015 at 5:52 pm
alen teplitsky (3/17/2015)
Do people really write code where they insert data like that, with the status and the code not part of the INSERT operation, so that you have to update them afterward in a trigger or a job?
Given that the INSERT statement allows you to specify the columns (and it is a best practice to always do so), it is nearly a certainty that in some cases people will omit columns. Populating every one of the columns in my example within the INSERT is probably enough of a burden that it is easier to use a trigger.
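A minimal sketch of that kind of trigger (hypothetical names, and deliberately set-based so it handles multi-row inserts in one pass):

CREATE TRIGGER trShipment_Ins ON dbo.Shipment
AFTER INSERT
AS
BEGIN
    -- Back-fill the columns the INSERT did not populate.
    UPDATE s
    SET Status    = 'NEW',
        Status_Dt = GETDATE()
    FROM dbo.Shipment s
    JOIN inserted i ON i.Shipment_ID = s.Shipment_ID
    WHERE s.Status IS NULL;
END;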
March 17, 2015 at 5:55 pm
N_Muller (3/17/2015)
Very good article. I would like to point out my preference for INSTEAD OF triggers on views. The view could be as simple as SELECT * FROM the base table. I consider a table the master of all data, and a poorly written trigger on a table can do a lot of damage. While a poorly written INSTEAD OF trigger on a view is just as bad, it makes it easier to correct data in the table without worrying about triggering other calculations, as the trigger is on the view, not on the table.
I personally don't like INDEXes or TRIGGERs defined on VIEWs. They may be OK if the VIEW is not schema-bound and there is no practice of DROP/CREATEing objects that are referenced by the VIEW. Otherwise, it is way too easy to simply forget to redefine them on the VIEW after the DROP/CREATE.
As in all things SQL, it depends. If you need to use these features for some reason and simply can't avoid it, there are ways to mitigate the potential for this occurrence.
March 17, 2015 at 7:47 pm
dwain.c (3/17/2015)
N_Muller (3/17/2015)
Very good article. I would like to point out my preference for INSTEAD OF triggers on views. The view could be as simple as SELECT * FROM the base table. I consider a table the master of all data, and a poorly written trigger on a table can do a lot of damage. While a poorly written INSTEAD OF trigger on a view is just as bad, it makes it easier to correct data in the table without worrying about triggering other calculations, as the trigger is on the view, not on the table.
I personally don't like INDEXes or TRIGGERs defined on VIEWs. They may be OK if the VIEW is not schema-bound and there is no practice of DROP/CREATEing objects that are referenced by the VIEW. Otherwise, it is way too easy to simply forget to redefine them on the VIEW after the DROP/CREATE.
As in all things SQL, it depends. If you need to use these features for some reason and simply can't avoid it, there are ways to mitigate the potential for this occurrence.
I seldom create schema-bound views, so my views don't have indexes. My point was that when I need to create a trigger on a table, I much prefer creating a view and an instead-of trigger on the view, instead of (no pun intended) creating triggers on the tables.
March 18, 2015 at 4:09 am
Good article. It highlights the most common mistake made in triggers by developers coming to SQL Server from other databases, which usually have row-level triggers.
However, the example is strange. When you implement a history, the trigger on the main table inserts the old values into the history table. The opposite, where you insert into the history table and the trigger updates the main table, is IMHO unusual and possibly wrong.
If the design is correct and "history" in the name is misleading, shouldn't it update Total_Pkgs?
In SQL Server I don't miss row-level triggers, but I do miss BEFORE triggers, for two reasons:
1. An AFTER trigger can raise an exception to prevent an operation, but the client can still (auto)commit the transaction with wrong data.
2. INSTEAD OF triggers are a pain, but they can solve problem #1 (see the sketch below).
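A minimal sketch of point 2 (hypothetical names, not from the article): an INSTEAD OF trigger validates before the write, so bad rows never reach the table and there is nothing left for the client to commit:

CREATE TRIGGER trOrders_Ins ON dbo.Orders
INSTEAD OF INSERT
AS
BEGIN
    IF EXISTS (SELECT 1 FROM inserted WHERE Qty <= 0)
    BEGIN
        -- Reject the whole batch; nothing was written, so nothing to undo.
        RAISERROR('Qty must be positive.', 16, 1);
        RETURN;
    END;

    INSERT INTO dbo.Orders (Order_ID, Qty)
    SELECT Order_ID, Qty
    FROM inserted;
END;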
March 18, 2015 at 5:40 pm
Robert-378556 (3/18/2015)
Good article. It highlights the most common mistake made in triggers by developers coming to SQL Server from other databases, which usually have row-level triggers.
However, the example is strange. When you implement a history, the trigger on the main table inserts the old values into the history table. The opposite, where you insert into the history table and the trigger updates the main table, is IMHO unusual and possibly wrong.
If the design is correct and "history" in the name is misleading, shouldn't it update Total_Pkgs?
In SQL Server I don't miss row-level triggers, but I do miss BEFORE triggers, for two reasons:
1. An AFTER trigger can raise an exception to prevent an operation, but the client can still (auto)commit the transaction with wrong data.
2. INSTEAD OF triggers are a pain, but they can solve problem #1.
Wrong? I suppose that is a matter of perspective. I can assure you that the example, while significantly simplified from the real-life scenario, is entirely accurate. History comes in independently of the base (Shipment) information and relates to a chain of historical events for each Shipment transaction.
Allow me to elaborate. Of the approximately 50 non-report-generating forms, about 50% represent an automated event in a shipment's history. So when you fill out one of those forms for a Shipment, it generates a unique event, which may be for a package or for a consignment. Many of these events come from bar-code scanning of a shipment or package. History also carries a whole lot more information about the event that is not recorded at the Shipment level. And many of the events are not triggered by activity on a form at all (e.g., you can put an "OPS" comment into a shipment at any time to indicate some special condition).
Since there are many sources that generate events, and relatively few of them impact any data on the Shipment record (last status code and date/time being the exceptions), the act of creating the history fires the trigger. Furthermore, since the history drives a lot of the reporting (many reads off that table), the initial status is recorded in a staging table, and is only recorded as a "last status" for the Shipment when the rows are "scooped" up in large quantities from the staging table and inserted in bulk into the history table.
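A minimal sketch of that staging pattern (hypothetical names; not the production code):

DECLARE @moved TABLE (Shipment_ID INT, Event_Cd CHAR(3), Event_Dt DATETIME);

BEGIN TRANSACTION;

-- Drain the staging table in one set-based pass, capturing the moved rows.
DELETE stg
OUTPUT deleted.Shipment_ID, deleted.Event_Cd, deleted.Event_Dt
INTO @moved
FROM dbo.Event_Staging stg;

-- The bulk insert into history fires the history table's trigger,
-- which is what stamps the "last status" onto each Shipment.
INSERT INTO dbo.Shipment_History (Shipment_ID, Event_Cd, Event_Dt)
SELECT Shipment_ID, Event_Cd, Event_Dt
FROM @moved;

COMMIT TRANSACTION;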
Some other notations:
- The original design limit was to support 500 history events per minute, although with some clever load-balancing techniques it can support much more during peak periods (the system operates 24/7).
- Events for a consignment can be recorded prior to the Shipment existing, even though there is a FOREIGN KEY relationship (parent/child) from Shipment to Shipment History. This is allowed because the first notification that a shipment has arrived (a scan) may be the box appearing on the loading dock.
- Some of the history is captured through mobile devices.
- One of the reports that must use the full history table is something that's called a "Versus Report." The idea is that certain individual events should generally occur in a specific sequence, and this report is designed to identify cases of events missing or out of sequence. That's a rather intensive query given that the history table easily holds upwards of 50M rows (a sketch of such a check appears after this list).
- Edit: Almost forgot. The data in history is highly time sensitive, meaning that reporting needs to be up to the minute. For example, the status POD (proof of delivery) is usually monitored quite closely (especially towards the end of the business day) due to delivery commitments that could result in a free shipment to a customer if they are not met.
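A minimal sketch of such a sequence check (hypothetical names, including a hypothetical Event_Sequence lookup table; LAG requires SQL Server 2012 or later):

WITH Ordered AS (
    SELECT h.Shipment_ID, h.Event_Cd, h.Event_Dt,
           es.Expected_Seq,  -- e.g. PKP=1, DEP=2, ARR=3, POD=4
           LAG(es.Expected_Seq) OVER (PARTITION BY h.Shipment_ID
                                      ORDER BY h.Event_Dt) AS Prev_Seq
    FROM dbo.Shipment_History h
    JOIN dbo.Event_Sequence es ON es.Event_Cd = h.Event_Cd
)
SELECT Shipment_ID, Event_Cd, Event_Dt
FROM Ordered
WHERE Expected_Seq < Prev_Seq;  -- this event arrived after a later-stage event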
March 19, 2015 at 2:38 pm
dwain.c (3/17/2015)
brett 41274 (3/17/2015)
Good article, however this point appears not to be true:
• Normally if you execute a standalone (“atomic”) INSERT, UPDATE, DELETE or MERGE statement you don’t really need to put it into a transaction (you can). However if any of those statements causes a trigger to fire, it can no longer be considered atomic and you should put it into a transaction, and use the proper error handling and rollback mechanism in the event that an error occurs during the action of the trigger.
All triggers are included in the implicit transaction for the DML operation, so they are still atomic and can be considered as such. There is no need for an external transaction in the case you describe.
The use of triggers is transparent to the consumer: you can't and shouldn't know in advance whether updating a table is going to fire 0 triggers or 100 triggers. Your transaction strategy as a consumer should be the same in either case.
The quoted article does seem to imply that you are correct. Still, I might prefer to err on the safe side and include it in a transaction with error handling anyway (can't hurt).
I agree with Brett on all 3 points:
- this is a good article
- it is not true that trigger execution is separate from the DML statement that fired it (I can confirm that there is an implicit transaction; this is also how one is able to cancel the DML--or even DDL--operation by issuing a ROLLBACK from within the trigger)
- transaction strategy should not depend on the existence of triggers. Though to Dwain's point, I am not sure that it hurts, either, to explicitly wrap the DML in a transaction
Two points regarding the query to find triggers:
- That query does not search across all databases. The "all_" system views (all_objects, all_sql_modules, all_views, etc) just bring back the system meta-data (i.e. object_id's that are negative), but still within a particular database.
- Rather than using sys.objects and filtering on "TR", you can select from sys.triggers which is pre-filtered, includes trigger-specific info (e.g. is_disabled, is_instead_of_trigger, etc), and includes database-level DDL triggers (though I realize that wasn't the focus of the article). For example:
SELECT COALESCE(OBJECT_NAME(st.[parent_id]), DB_NAME()) AS [ParentObjectName],
CASE
WHEN CONVERT(BIT, OBJECTPROPERTY(st.[parent_id], 'IsUserTable')) = 1 THEN 'Table'
WHEN CONVERT(BIT, OBJECTPROPERTY(st.[parent_id], 'IsView')) = 1 THEN 'View'
ELSE 'Database'
END AS [ParentObjectType],
*
FROM sys.triggers st;
Take care,
Solomon..
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures )
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
March 19, 2015 at 6:21 pm
Solomon Rutzky (3/19/2015)
I agree with Brett on all 3 points:
- this is a good article
- it is not true that trigger execution is separate from the DML statement that fired it (I can confirm that there is an implicit transaction; this is also how one is able to cancel the DML--or even DDL--operation by issuing a ROLLBACK from within the trigger)
- transaction strategy should not depend on the existence of triggers. Though to Dwain's point, I am not sure that it hurts, either, to explicitly wrap the DML in a transaction
Two points regarding the query to find triggers:
- That query does not search across all databases. The "all_" system views (all_objects, all_sql_modules, all_views, etc) just bring back the system meta-data (i.e. object_id's that are negative), but still within a particular database.
- Rather than using sys.objects and filtering on "TR", you can select from sys.triggers which is pre-filtered, includes trigger-specific info (e.g. is_disabled, is_instead_of_trigger, etc), and includes database-level DDL triggers (though I realize that wasn't the focus of the article). For example:
SELECT COALESCE(OBJECT_NAME(st.[parent_id]), DB_NAME()) AS [ParentObjectName],
CASE
WHEN CONVERT(BIT, OBJECTPROPERTY(st.[parent_id], 'IsUserTable')) = 1 THEN 'Table'
WHEN CONVERT(BIT, OBJECTPROPERTY(st.[parent_id], 'IsView')) = 1 THEN 'View'
ELSE 'Database'
END AS [ParentObjectType],
*
FROM sys.triggers st;
Take care,
Solomon..
Thanks for saying the article is good and for clarifying those points, especially the one about my script only querying the current db. I should have checked that before writing it (I was thinking to, but thought my initial assumption was right - bad me!).
Querying the sys.triggers view is pretty sweet; I hadn't used that one before. I'll need to keep that bad boy around, as I had wondered how to pull out db triggers. It just was never so pressing as to make me investigate.