September 4, 2008 at 11:27 am
Hi,
I have a mysterious problem.
A few rows (roughly 1 in 10,000) in a table maintained by a trigger are not actually being updated.
My scenario:
A big, complex, old database (the first version dates from the 90s) running on MS SQL Server 2005.
I have a new table called tbSPECIAL_CUSTOMERS filled with data from tbSPECIAL (about 200,000 rows) and tbCUSTOMERS (about 1,000,000 rows).
The way tbSPECIAL and tbCUSTOMERS join is a sort of "if-then-else" (not straightforward).
I created triggers on tbSPECIAL and tbCUSTOMERS. Any insert, update or delete in these tables may (but does not have to, depending on the join logic) be reflected in tbSPECIAL_CUSTOMERS (about 400,000 rows).
When any user interacts with the database everything works fine.
The problem: 2 to 3 times a week a batch runs, updating thousands of rows across a dozen tables, tbCUSTOMERS and tbSPECIAL included.
After this I check tbSPECIAL_CUSTOMERS and find 2 to 22 rows that were not updated.
To confirm the update triggers are working, I just run:
update c set c.field = c.field from tbCUSTOMERS c
update s set s.field = s.field from tbSPECIAL s
and all errors vanish!
It's not a big issue for my company, since in general only a dozen out of a million customers end up out of date (and usually the stale data is nothing critical like a ZIP code), and it's easy to fix.
But it's driving me nuts! I've checked for locks, nested triggers, logs, etc.
Can anyone give me a tip?
September 4, 2008 at 2:57 pm
Any "bcp" or "bulk insert" activity ? these do *not* fire triggers unless explicitly told to 😉
* Noel
September 4, 2008 at 3:13 pm
Nope,
It's a .NET application running as a service, scanning an input folder for files.
It reads any .TXT files in the input folder to populate a lot of "stage" tables.
After all the stage tables are populated it fires dozens of SPs that insert, update and delete in a lot of tables.
Unfortunately all the stage tables are cleared after the job is done.
I managed to build .TXT files containing only the missing data and everything worked fine. I'm currently working on the app to implement some kind of reliable DB access log.
In general the .TXT files arrive in the input folder at night, when the database is under low usage.
The strange part is that the whole thing works fine for 99.99% of the data and fails to update just 0.01% of the updatable rows.
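About that DB access log: something like this is what I have in mind as a first cut, one row per trigger firing (the log table and its columns are just placeholder names):
/* hypothetical log table */
CREATE TABLE dbo.tbTRIGGER_LOG (
    log_id int identity(1,1) primary key,
    trigger_name sysname,
    fired_at datetime default getdate(),
    inserted_rows int,
    deleted_rows int
)
/* inside each trigger, right after SET NOCOUNT ON */
insert into dbo.tbTRIGGER_LOG (trigger_name, inserted_rows, deleted_rows)
select 'tr_tbCUSTOMER', (select count(*) from INSERTED), (select count(*) from DELETED)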
September 5, 2008 at 9:21 am
Am I missing it, or did you not post the trigger code? That's what I'd start with.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
September 5, 2008 at 9:30 am
Yeah, I think we need to see the trigger code itself... it might not be designed to handle multiple rows per insert, or something basic like that.
Show us the code so we can offer suggestions, or at least a direction to investigate.
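For reference, the classic version of that bug looks something like this (table and column names are just placeholders):
/* single-row assumption: a variable can only hold one value, so when
   a batch touches many rows at once only one of them gets propagated
   and the rest are silently skipped */
declare @id int, @val varchar(50)
select @id = field1, @val = field2 from INSERTED
update dbo.tbSPECIAL_CUSTOMERS set field2 = @val where field1 = @id
/* set-based version: handles any number of rows */
update sc
set sc.field2 = i.field2
from dbo.tbSPECIAL_CUSTOMERS sc
join INSERTED i on i.field1 = sc.field1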
Lowell
September 5, 2008 at 11:31 am
Thanks for the posts, guys.
I managed to implement a log and I think I found the problem: timeouts in another trigger on the same table.
Now I'll try to implement an "if it raised a timeout, take a coffee break and re-run the call to the SP" and a "hold your breath while there are exclusive locks on the table" in the application.
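In T-SQL terms the retry idea is roughly this shape (the SP name, retry count and delay are placeholders; the real thing will live in the .NET service):
declare @tries int
set @tries = 0
while @tries < 3
begin
    begin try
        exec dbo.usp_LoadBatch       -- placeholder for the real SP call
        break                        -- success: leave the loop
    end try
    begin catch
        set @tries = @tries + 1
        waitfor delay '00:00:30'     -- "take a coffee break" before retrying
    end catch
end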
The real thing is big, nasty, and all objects are named in a non-English language.
Anyway, feel free to help me improve my trigger skills =)
If you want a trigger to work on, enjoy this "template":
CREATE TRIGGER dbo.[tr_tbCUSTOMER]
ON dbo.[tbCUSTOMER]
AFTER INSERT, UPDATE, DELETE
as
BEGIN
set nocount on
/* find rows to insert */
select cli.field1,cli.field2,cli.field3
into #CliI
from INSERTED cli
/* where a lot of field validation */
select usem.field1, usem.field2, usem.field3
into #UsemI
from dbo.[tbSPECIAL] usem
join INSERTED cli on cli.field1 = dbo.[udf_maketypecast](usem.field3)
/* where a lot of field validation */
/* find rows to delete */
select cli.field1,cli.field2,cli.field3
into #CliD
from DELETED cli
/* where a lot of field validation */
select usem.field1, usem.field2, usem.field3
into #UsemD
from dbo.[tbSPECIAL] usem
join DELETED cli on cli.field1 = dbo.[udf_maketypecast](usem.field3)
/* where a lot of field validation */
/* each select uses a different set of fields to evaluate the join clause (yes, it's a pain) */
select DISTINCT
c.field1, c.field2, c.field3
, u.field1, u.field2, u.field3
, 1 as flag1
into #CliISpecialA
from #CliI c
join #UsemI u on u.field1 = c.field3
/*where (bunch of field validations)*/
select DISTINCT
c.field1, c.field2, c.field3
, u.field1, u.field2, u.field3
, 1 as flag1
into #CliISpecialB
from #CliI c
join #UsemI u on u.field3 = c.field1
/*where (bunch of field validations)*/
/* put some more SELECT DISTINCT statements here to populate all the temp tables used for the insert */
/* now populate some more temp tables for the deleted rows */
/* now it's time to delete the outdated data (update & delete) */
delete crt
from dbo.[tbSPECIAL_CUSTOMERS] crt
join #CliDSpecialA a on a.fields = crt.fields
delete crt
from dbo.[tbSPECIAL_CUSTOMERS] crt
join #CliDSpecialB b on b.fields = crt.fields
delete crt
from dbo.[tbSPECIAL_CUSTOMERS] crt
join #CliDSpecialC c on c.fields = crt.fields
/* etc */
/* now it's time to insert the rows using the new data (insert & update) */
insert into dbo.[tbSPECIAL_CUSTOMERS]
select * from #CliISpecialA
union select * from #CliISpecialB
union select * from #CliISpecialC
union select * from #CliISpecialD
union select * from #CliISpecialE
union select * from #CliISpecialF
set nocount off
END
September 5, 2008 at 2:51 pm
The first thing I noticed was join statements that use UDFs. Is that actually necessary?
Since I can't tell what the UDFs do, I can't tell if they are needed or not. But using UDFs in joins (or where clauses) that way is pretty much a sure route to killing your performance.
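For example, if dbo.[udf_maketypecast] is deterministic (and schema-bound), one option is to persist its result as a computed column, index it, and join on the plain column instead of calling the UDF row by row. A rough sketch using the names from the template above:
/* persist the cast value once on tbSPECIAL */
alter table dbo.[tbSPECIAL]
add field3_cast as dbo.[udf_maketypecast](field3) persisted
create index IX_tbSPECIAL_field3_cast on dbo.[tbSPECIAL] (field3_cast)
/* the trigger join then becomes */
select usem.field1, usem.field2, usem.field3
into #UsemI
from dbo.[tbSPECIAL] usem
join INSERTED cli on cli.field1 = usem.field3_cast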
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon