April 12, 2005 at 7:49 pm
I'm having a similar problem. Much smaller tables, about 6 columns and 140,000 rows. Looking at the query plan, the read from the inserted table is by far the most expensive operation, even though there is only a single row in it. Other operations kicked off by the trigger, which should be much more expensive (READTEXT / WRITETEXT along with finding the block of text to replace), take significantly less time. This makes absolutely no sense. I've even tried to eliminate any joins at all (setting the values into params and then using the params to find the data I need in other tables). The results are still pathetic. I am very curious whether you ever found a solution for this, or maybe in the 3 months since the last post somebody has come across some new information about why the inserted table is so inefficient to read.
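For anyone trying the same workaround, here's a minimal sketch of what "setting the values into params" looks like inside a trigger. All table and column names here are hypothetical, and it assumes the trigger only ever fires for single-row inserts (which is exactly the case where the inserted-table read was still slow):

```sql
-- Hypothetical schema; assumes single-row inserts only.
CREATE TRIGGER trg_MyTable_Insert ON MyTable
AFTER INSERT
AS
BEGIN
    DECLARE @Id INT
    DECLARE @Value VARCHAR(100)

    -- Read from the inserted pseudo-table exactly once...
    SELECT @Id = Id, @Value = Value FROM inserted

    -- ...then do all further work with local variables,
    -- so no join against inserted is needed.
    UPDATE OtherTable
    SET LastValue = @Value
    WHERE MyTableId = @Id
END
```

Note that this pattern silently misbehaves for multi-row inserts (the variables capture values from only one row), so it's only a fair test for single-row DML.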
April 13, 2005 at 3:17 am
No Tim, I never did find a solution to the problem, and even posting questions on the MS newsgroups didn't yield any explanation as to why accessing the Inserted / Deleted tables takes so long. As a result we have steered clear of triggers in SQL 2000 and just use 'pseudo trigger' code within stored procedures instead. Not ideal, as it does not guarantee the same kind of integrity that triggers do (especially if an update is done outside of the stored procedures), but performance is the most important factor in our environment, so the triggers had to go. Maybe SQL 2005 will address and resolve the issue…we can only hope…
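For clarity, the 'pseudo trigger' approach means the stored procedure that performs the insert also does the work a trigger would have done, inside one transaction. A rough sketch, with hypothetical names:

```sql
-- Hypothetical 'pseudo trigger': the side-effect logic lives in the
-- insert procedure itself, not in an AFTER INSERT trigger.
CREATE PROCEDURE usp_InsertMyTable
    @Id INT,
    @Value VARCHAR(100)
AS
BEGIN
    BEGIN TRANSACTION

    INSERT INTO MyTable (Id, Value)
    VALUES (@Id, @Value)

    -- Work the trigger would normally have done, using the
    -- procedure parameters instead of the inserted table.
    UPDATE OtherTable
    SET LastValue = @Value
    WHERE MyTableId = @Id

    COMMIT TRANSACTION
END
```

As noted above, the weakness is that any INSERT or UPDATE issued outside this procedure bypasses the side-effect logic entirely, which a real trigger would have caught.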