March 13, 2007 at 10:57 am
Hello,
One portion of a Trigger does an insert into a different destination table if the key does not already exist in that destination table. The code looks like this:
CREATE TRIGGER [dbo].[tr_DailyAccountsDownloadHistory] ON [dbo].[DailyAccountsDownload]
FOR INSERT, UPDATE
AS
-- Insert accounts that are new to DailyAccountsDownload into Tracking
BEGIN
    INSERT DailyAccountsDownload_Tracking
        (MMS, AcctCode, Zip, TC, SlsRep, TieCode)
    SELECT
        Id.MMS#,
        RTRIM(ISNULL(Id.AcctCode, '')) AS AcctCode,
        ISNULL(Id.Zip, '') AS Zip,
        ISNULL(Id.TC, '') AS TC,
        ISNULL(Id.SlsRep, '') AS SlsRep,
        ISNULL(Id.TieCode, '') AS TieCode
    FROM Inserted Id
    LEFT OUTER JOIN DailyAccountsDownload_Tracking hs
        ON Id.MMS# = hs.MMS
    WHERE hs.MMS IS NULL
END
......(rest of Trigger)
Before this Trigger fired, DailyAccountsDownload was truncated. Then DailyAccountsDownload was populated with over 200 thousand records, but DailyAccountsDownload_Tracking only got 25 thousand of those records.
Why would this happen?
Thank you for your help!
CSDunn
March 13, 2007 at 11:06 am
You truncated DailyAccountsDownload.
But you didn't truncate DailyAccountsDownload_Tracking, right?
So, presumably your LEFT JOIN on DailyAccountsDownload_Tracking only evaluated "WHERE hs.MMS IS NULL" to TRUE on 25,000 of the records. Apparently the other 175K records found a match on the MMS column value, so the join was satisfied and no INSERT occurred.
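For what it's worth, the same "insert only keys that aren't already tracked" check can be written with NOT EXISTS instead of a LEFT JOIN; a minimal sketch using the same table and column names as the trigger above:

-- Same anti-join as the trigger, expressed with NOT EXISTS (sketch only)
INSERT DailyAccountsDownload_Tracking
    (MMS, AcctCode, Zip, TC, SlsRep, TieCode)
SELECT
    Id.MMS#,
    RTRIM(ISNULL(Id.AcctCode, '')),
    ISNULL(Id.Zip, ''),
    ISNULL(Id.TC, ''),
    ISNULL(Id.SlsRep, ''),
    ISNULL(Id.TieCode, '')
FROM Inserted Id
WHERE NOT EXISTS
    (SELECT 1 FROM DailyAccountsDownload_Tracking hs WHERE hs.MMS = Id.MMS#)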
March 13, 2007 at 12:05 pm
Sorry, I forgot to mention that DailyAccountsDownload_Tracking was also empty. So both tables were empty at the time that DailyAccountsDownload was repopulated.
That's what makes it a mystery.
March 13, 2007 at 12:12 pm
Is the insert of 200K records going in as 1 batch, so that all 200K records are in the 'inserted' virtual table?
Or are they individual inserts, 1 record at a time?
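(If it's not obvious which of those is happening, one quick way to check is to have the trigger temporarily log how many rows it sees each time it fires. A rough sketch, using a hypothetical TriggerFireLog table:)

-- Hypothetical one-off logging table, just to see how many rows each trigger firing receives
CREATE TABLE TriggerFireLog (FiredAt datetime NOT NULL DEFAULT GETDATE(), RowsInInserted int NOT NULL)

-- Added temporarily inside the trigger body:
INSERT TriggerFireLog (RowsInInserted)
SELECT COUNT(*) FROM Inserted

-- One log row showing 200K = one set-based insert; 200K log rows showing 1 each = row-by-row inserts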
Failing that as an explanation, does the DailyAccountsDownload_Tracking table have any indexes on it with the IGNORE_DUP_KEY option set on the index?
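(If this is SQL Server 2005 or later, that setting is easy to check from sys.indexes; on SQL 2000 you'd look at the index properties in Enterprise Manager instead. A quick sketch:)

-- SQL Server 2005+: ignore_dup_key is exposed per index in sys.indexes
SELECT name, ignore_dup_key
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.DailyAccountsDownload_Tracking')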
March 13, 2007 at 1:09 pm
DailyAccountsDownload_Tracking has a nonclustered primary key on MMS and TrackingDate. The MMS values from DailyAccountsDownload are all unique. The insert occurs in a DTS package; I'll have to go check.....
March 13, 2007 at 1:13 pm
>>The insert occurs in a DTS package, I'll have to go check.....
I would guess it's auto-committing every 1000 rows (the default in DTS), which is why you're seeing that behaviour.
March 13, 2007 at 1:32 pm
On the 'Options' tab of the transform where the insert takes place in the DTS package, 'Use Fast Load' is checked, and 'Insert Batch Size' = 0. According to BOL, these settings should permit loading all of the data at once instead of in batches.
Would that matter?
March 13, 2007 at 1:40 pm
What sort of DTS task is this? Data Transformation? Data-driven Query?
If in doubt, turn on SQL Profiler and see what SQL statements are being sent to the server. The behaviour you're seeing indicates this is not happening as 1 atomic insert transaction of 200K records.
If your solution really depends on this happening as a true set-based SQL operation, then just use DTS for what it's best at - pull the necessary data into a staging table via DTS, and process everything else downstream from that as set-based T-SQL.
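For example, a rough sketch of that pattern, assuming a hypothetical staging table stg_DailyAccountsDownload that the DTS fast load targets; the anti-join logic from the trigger then runs afterwards as an ordinary set-based step (Execute SQL task or stored procedure):

-- Step 1 (DTS): fast load the source data into stg_DailyAccountsDownload (hypothetical name)

-- Step 2 (set-based T-SQL, downstream of the staging table):
INSERT DailyAccountsDownload_Tracking
    (MMS, AcctCode, Zip, TC, SlsRep, TieCode)
SELECT
    s.MMS#,
    RTRIM(ISNULL(s.AcctCode, '')),
    ISNULL(s.Zip, ''),
    ISNULL(s.TC, ''),
    ISNULL(s.SlsRep, ''),
    ISNULL(s.TieCode, '')
FROM stg_DailyAccountsDownload s
LEFT OUTER JOIN DailyAccountsDownload_Tracking hs
    ON s.MMS# = hs.MMS
WHERE hs.MMS IS NULL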
March 13, 2007 at 1:50 pm
The Insert is part of a Transform Data Task. Would it do any good to use 'Begin Transaction' with the Insert statement?
Thanks again.
March 13, 2007 at 1:59 pm
I'm just finding out that the 'Use Fast Load' option in the Transform Data Task will cause Triggers to be bypassed. I wonder how much slower the Insert will run if I 'turn off' Use Fast Load?
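(As a side note: fast load is a bulk-load path, and bulk loads skip triggers unless explicitly told otherwise. If the load were done with the T-SQL BULK INSERT command instead, the FIRE_TRIGGERS hint covers exactly this case; I'm not sure the Transform Data Task exposes an equivalent option. A rough sketch, with a hypothetical file path and delimiters:)

-- BULK INSERT runs the target table's triggers only when FIRE_TRIGGERS is specified
BULK INSERT dbo.DailyAccountsDownload
FROM 'C:\Data\DailyAccountsDownload.txt'   -- hypothetical file path
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRE_TRIGGERS)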