April 25, 2012 at 6:12 am
Hello,
While practicing the use of triggers, I came across an odd situation when
testing the performance of, and debugging, one such trigger.
I have set up an AFTER trigger on both UPDATE and INSERT that uses the
INSERTED and DELETED pseudo tables to carry out the required task.
To check that these tables contained the number of rows I expected, I added a
'SELECT COUNT(*) FROM INSERTED', and likewise another using the DELETED
table, within the trigger itself, with timed results written to a debug table
I created, roughly as sketched below.
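A minimal sketch of that debug setup, assuming SQL Server 2008 or later; the table, trigger, and column names here are invented for illustration, not the actual schema:

-- Hypothetical debug-log table written to from inside the trigger.
CREATE TABLE dbo.TriggerDebugLog
(
    LogTime     DATETIME2   NOT NULL DEFAULT SYSDATETIME(),
    Source      VARCHAR(10) NOT NULL,   -- 'inserted' or 'deleted'
    RowsCounted BIGINT      NOT NULL,
    ElapsedMs   INT         NOT NULL
);
GO
CREATE TRIGGER dbo.trg_BaseTable_Debug
ON dbo.BaseTable                        -- hypothetical 33-million-row table
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @start DATETIME2 = SYSDATETIME(), @rows BIGINT;

    -- Time a count of the inserted pseudo table and log the result.
    SELECT @rows = COUNT_BIG(*) FROM inserted;
    INSERT dbo.TriggerDebugLog (Source, RowsCounted, ElapsedMs)
    VALUES ('inserted', @rows, DATEDIFF(MILLISECOND, @start, SYSDATETIME()));

    -- Same again for the deleted pseudo table.
    SET @start = SYSDATETIME();
    SELECT @rows = COUNT_BIG(*) FROM deleted;
    INSERT dbo.TriggerDebugLog (Source, RowsCounted, ElapsedMs)
    VALUES ('deleted', @rows, DATEDIFF(MILLISECOND, @start, SYSDATETIME()));

    -- ... the trigger's real work would follow here ...
END;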
However, what baffles me is that although these pseudo tables contain the
number of rows I expected, counting them is very slow.
In the scenario I am testing, ALL 33 million rows of the trigger's base table
have been updated, which then fires the trigger.
Because I am performing an UPDATE on the base table, both the INSERTED and
DELETED pseudo tables contain the same number of rows, which equals the total
row count of the base table.
But the SELECT COUNT(*) FROM INSERTED, as well as the same count on the
DELETED table, takes around 8 minutes or so to complete when executed within
the trigger, as opposed to only 14 seconds when performed on the base table
itself.
Can anyone shed any light on why such a performance discrepancy should exist?
Regards
Steve
April 25, 2012 at 7:28 am
The inserted and deleted tables behave a little differently than regular tables. No indexes, for one thing. No stats. So querying them can be a performance issue. That's normal.
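One common mitigation, not raised in this thread but worth noting as a hedged aside (the column names below are invented), is to copy the pseudo table into a temp table once, index it, and query the copy rather than rescanning inserted:

-- Copy the pseudo table once; subsequent queries hit the indexed copy.
SELECT KeyCol, ChangedCol          -- invented columns
INTO   #ins
FROM   inserted;

CREATE CLUSTERED INDEX cx_ins ON #ins (KeyCol);

-- The copy now has an index and real statistics for later queries.
SELECT COUNT(*) FROM #ins;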
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
April 25, 2012 at 7:44 am
I guess my question would be: why are you doing this in a trigger? If you're worried about the number of records you are inserting or updating, why not perform the counts and checks before performing the insert/update?
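For example, a sketch of checking the count before the DML rather than inside a trigger (table name and predicate are invented for illustration):

DECLARE @toUpdate BIGINT;

-- Count with the same predicate the UPDATE will use.
SELECT @toUpdate = COUNT_BIG(*)
FROM   dbo.BaseTable
WHERE  SomeCol = 'old value';

UPDATE dbo.BaseTable
SET    SomeCol = 'new value'
WHERE  SomeCol = 'old value';

-- @@ROWCOUNT immediately after the UPDATE reports rows actually changed.
SELECT @@ROWCOUNT AS RowsUpdated, @toUpdate AS RowsExpected;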
For better, quicker answers, click on the following...
http://www.sqlservercentral.com/articles/Best+Practices/61537/
For better answers on performance questions, click on the following...
http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
April 25, 2012 at 11:17 am
You can also use @@ROWCOUNT as the FIRST statement in the trigger to determine the # of rows affected by the calling action.
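A sketch of that pattern (trigger and table names invented); the capture has to happen before anything else, because statements such as SET NOCOUNT ON reset @@ROWCOUNT to 0:

CREATE TRIGGER dbo.trg_BaseTable_Audit
ON dbo.BaseTable
AFTER INSERT, UPDATE
AS
BEGIN
    -- Must be the FIRST statement: almost anything else resets @@ROWCOUNT.
    DECLARE @affected INT = @@ROWCOUNT;

    SET NOCOUNT ON;

    -- Cheap early exit for statements that touched no rows.
    IF @affected = 0
        RETURN;

    -- ... the rest of the trigger can use @affected instead of
    -- counting the inserted/deleted pseudo tables ...
END;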
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
April 25, 2012 at 11:26 am
GSquared (4/25/2012)
The inserted and deleted tables behave a little differently than regular tables. No indexes, for one thing. No stats. So querying them can be a performance issue. That's normal.
They're also materialised out of the row version store and can be spilled to disk in tempdb. So if the set of modified rows is large and/or tempdb is already under strain, performance can get really bad.
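If you suspect that is happening, one way to check (a sketch; run it from another session while the big UPDATE executes) is to look at per-session internal-object page allocations in tempdb:

-- Internal objects include work tables such as spilled pseudo tables.
SELECT session_id,
       internal_objects_alloc_page_count,
       internal_objects_dealloc_page_count
FROM   sys.dm_db_session_space_usage
WHERE  internal_objects_alloc_page_count > 0
ORDER BY internal_objects_alloc_page_count DESC;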
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 25, 2012 at 11:42 am
Mike01 (4/25/2012)
I guess my question would be: why are you doing this in a trigger? If you're worried about the number of records you are inserting or updating, why not perform the counts and checks before performing the insert/update?
The reason I did this was to try to determine, from within the trigger code, why its performance was poor. I was attempting to verify that what I thought was happening via the INSERTED/DELETED tables in the trigger code was in fact happening.
Just a temporary debugging measure, nothing else.